981 results for Non-informative prior


Relevance: 80.00%

Abstract:

Although tactile representations of the two body sides are initially segregated into opposite hemispheres of the brain, behavioural interactions between body sides exist and can be revealed under conditions of tactile double simultaneous stimulation (DSS) at the hands. Here we examined to what extent vision can affect body-side segregation in touch. To this end, we changed hand-related visual input while participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or together with a concurrent non-target finger on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across experiments, the two hands were visible or occluded from view (Experiment 1), images of the two hands were merged using a morphing technique (Experiment 2), or the hands were shown in a position compatible vs. incompatible with the actual posture (Experiment 3). Overall, the results showed reliable interference effects of DSS compared to target-only stimulation. This interference varied as a function of which non-target finger was stimulated, and emerged both within and between hands. These results imply that the competition between tactile events is not clearly segregated across body sides. Crucially, non-informative vision of the hand affected overall tactile performance only when a visual/proprioceptive conflict was present, while neither congruent nor morphed hand vision affected tactile DSS interference. This suggests that DSS operates at a tactile processing stage in which interactions between body sides can occur regardless of the available visual input from the body.

Relevance: 80.00%

Abstract:

There are several versions of the lognormal distribution in the statistical literature; one is based on the exponential transformation of the generalized normal (GN) distribution. This paper presents a Bayesian analysis of the generalized lognormal distribution (logGN), assuming independent non-informative Jeffreys priors for the parameters, together with a procedure for implementing the Gibbs sampler to obtain the posterior distributions of the parameters. The results are used to analyze failure-time models with right-censored and uncensored data. The proposed method is illustrated using real failure-time data from computers.
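A minimal sketch of the Gibbs scheme described above, but for the ordinary (not generalized) lognormal model under the Jeffreys prior p(μ, σ²) ∝ 1/σ² and with synthetic, uncensored data; the paper's logGN model has an additional shape parameter and handles right censoring, both omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic uncensored failure times (hypothetical, not the paper's data).
t = rng.lognormal(mean=2.0, sigma=0.5, size=50)
y = np.log(t)  # lognormal times become normal on the log scale
n = len(y)

# Gibbs sampler for (mu, sigma^2) under the Jeffreys prior p ∝ 1/sigma^2.
n_iter, burn = 5000, 1000
mu, sig2 = y.mean(), y.var()
draws = []
for i in range(n_iter):
    # mu | sigma^2, y  ~  Normal(ybar, sigma^2 / n)
    mu = rng.normal(y.mean(), np.sqrt(sig2 / n))
    # sigma^2 | mu, y  ~  Inverse-Gamma(n/2, sum((y - mu)^2)/2)
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((y - mu) ** 2))
    if i >= burn:
        draws.append((mu, sig2))

mu_post, sig2_post = np.mean(draws, axis=0)
```

With the full conditionals available in closed form, each sweep alternates two exact draws; the logGN case adds a non-conjugate shape parameter that typically requires a Metropolis step inside the Gibbs loop.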

Relevance: 80.00%

Abstract:

Studies on packaging accessibility are still incipient in Brazil. Many packages can pose a challenge to users, whether due to non-informative labels, tricky tabs or seals, or designs that demand considerable strength to open. This paper presents a simple test to determine the torque required to open PET bottles and to predict the proportion of users who could not open them. The findings suggest that a considerable proportion of users could not open the bottles, or would have difficulty exerting the necessary force.
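The prediction step can be sketched by modeling users' maximum opening torque as a normal distribution and evaluating its CDF at the torque the closure requires; the mean and standard deviation below are hypothetical illustrations, not the paper's measurements:

```python
from math import erf, sqrt

# Hypothetical normal model of users' maximum opening torque (N·m);
# these parameters are illustrative, not taken from the study.
mean_strength, sd_strength = 2.0, 0.6

def fraction_unable(required_torque):
    """Fraction of users whose maximum torque falls below the requirement,
    i.e. the normal CDF evaluated at required_torque."""
    z = (required_torque - mean_strength) / sd_strength
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# A closure requiring 1.4 N·m would exclude the lower tail of users:
print(round(fraction_unable(1.4), 3))  # → 0.159
```

Raising the required torque moves the threshold up the strength distribution, so the excluded fraction grows accordingly.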

Relevance: 80.00%

Abstract:

The aims of this study were to establish a protocol for the analysis of minisatellites (VNTRs) and microsatellites (STRs) in patients who underwent allogeneic BMT, to verify the validity of the methodology and of the loci studied, and to evaluate the type of recovery of each patient. DNA from 14 patients, before and after transplantation, and from their respective donors was analyzed. PCR amplifications of six loci were performed: D1S80, SE33, HumTH01, 33.6, HumARA, and HumTPO. The amplified products were separated by vertical polyacrylamide gel electrophoresis, and the fragments were visualized by silver staining. This procedure proved valid for verifying allogeneic and autologous recovery, and probably chimeric recovery as well. Of all the loci studied, 63.1% yielded results that could be evaluated; of these, 19.0% were informative, 13.1% partially informative, and 31.0% non-informative. The remaining 36.9% could not be evaluated. Among the loci evaluated, SE33 showed the highest rate of informative results, HumTPO of partially informative results, and HumTH01 of non-informative results, while locus 33.6 most often yielded results that could not be evaluated. Post-transplant recovery was determined in 71.4% of the patients; of these, 90% showed allogeneic recovery and 10% autologous recovery.

Relevance: 80.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%

Abstract:

Breeding programs are activities that develop over many years and must therefore be flexible enough to adjust to new situations created by changes in market trends and in the economic environment, by growth in the volume and quality of the data, and also by new techniques proposed by the scientific community. Adjustment to the latter should be made mainly by replacing and choosing the model most adequate for describing the phenomenon in a given scenario. Average daily weight-gain data from a swine breeding program involving the Duroc, Landrace, and Large White breeds were analyzed within a Bayesian framework using two candidate models. Three levels of prior information were simulated: informative, weakly informative, and non-informative. The behavior of the posterior distribution curves and the estimates associated with each level of prior information were analyzed and compared. The results indicate that, for the simpler model, the samples of the three breeds are sufficient to produce estimates that are not altered by the prior information. For the more heavily parameterized model, the estimates for the Duroc breed are altered by prior knowledge; in this case, the best possible representation of the prior distribution should be sought in order to obtain estimates that are more adequate given the breeder's current state of knowledge.
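The prior-sensitivity comparison can be sketched with a conjugate normal model with known sampling variance, contrasting informative, weakly informative, and non-informative (flat) priors; the data and hyperparameters below are illustrative inventions, not the breeding program's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical average daily weight-gain sample (kg/day); illustrative only.
y = rng.normal(0.85, 0.10, size=30)
n, ybar = len(y), y.mean()
sigma2 = 0.10 ** 2  # sampling variance assumed known for the conjugate sketch

def posterior(mu0, tau2):
    """Conjugate normal update: prior N(mu0, tau2) -> posterior (mean, var)."""
    if np.isinf(tau2):              # non-informative (flat) limit
        return ybar, sigma2 / n
    prec = 1.0 / tau2 + n / sigma2  # precisions add
    mean = (mu0 / tau2 + n * ybar / sigma2) / prec
    return mean, 1.0 / prec

results = {label: posterior(mu0, tau2)
           for label, mu0, tau2 in [("informative", 0.70, 0.001),
                                    ("weakly informative", 0.70, 0.01),
                                    ("non-informative", 0.0, np.inf)]}
for label, (m, v) in results.items():
    print(f"{label:>18}: posterior mean = {m:.3f}, sd = {np.sqrt(v):.3f}")
```

As the prior variance shrinks, the posterior mean is pulled from the sample mean toward the prior mean, which mirrors the abstract's finding that only the more parameterized model (effectively less data per parameter) was sensitive to the prior.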

Relevance: 80.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%

Abstract:

Graduate Program in Applied and Computational Mathematics - FCT

Relevance: 80.00%

Abstract:

To estimate causal relationships, time-series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or with multivariate models: VAR or VEC (VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC models is quite advanced, Bauwens et al. (1999) highlighted that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results". The present article applies the Full Bayesian Significance Test (FBST), especially designed to deal with sharp hypotheses, to cointegration rank selection tests in VECM time-series models. It shows the FBST implementation using both simulated data sets and data sets available in the literature. As an illustration, standard non-informative priors are used.
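The FBST evidence value can be illustrated on a toy unimodal posterior: the e-value in favor of a sharp hypothesis is the posterior probability that the density does not exceed its value at the hypothesized point. The sketch below uses a one-dimensional normal posterior with invented numbers, not the article's VECM setting:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: posterior for a scalar parameter is N(m, s^2); test the
# sharp hypothesis H0: theta = 0. Numbers are illustrative only.
m, s = 0.8, 0.5
theta = rng.normal(m, s, size=200_000)  # Monte Carlo posterior draws

def log_post(x):
    """Log posterior density up to an additive constant."""
    return -0.5 * ((x - m) / s) ** 2

# FBST evidence value: posterior mass outside the tangential set, i.e. the
# probability that the posterior density does not exceed its value at H0.
ev = np.mean(log_post(theta) <= log_post(0.0))
```

A small `ev` means most posterior mass lies where the density is higher than at H0, so the sharp hypothesis sits in a low-credibility region; for rank selection, this comparison is made for each candidate cointegrating rank.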

Relevance: 80.00%

Abstract:

Background: Transcript enumeration methods such as SAGE, MPSS, and sequencing-by-synthesis EST "digital northern" are important high-throughput techniques for digital gene expression measurement. Like other counting or voting processes, these measurements constitute compositional data exhibiting properties particular to the simplex space, where the sum of the components is constrained. These properties are not present in regular Euclidean spaces, in which hybridization-based microarray data are often modeled. Therefore, pattern recognition methods commonly used for microarray data analysis may be non-informative for the data generated by transcript enumeration techniques, since they ignore certain fundamental properties of this space. Results: Here we present a software tool, Simcluster, designed to perform clustering analysis for data on the simplex space. We present Simcluster as a stand-alone command-line C package and as a user-friendly on-line tool. Both versions are available at: http://xerad.systemsbiology.net/simcluster. Conclusion: Simcluster is designed in accordance with a well-established mathematical framework for compositional data analysis, which provides principled procedures for dealing with the simplex space, and is thus applicable in a number of contexts, including enumeration-based gene expression data.
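The simplex-space idea can be sketched with Aitchison's centered log-ratio (clr) transform, under which Euclidean distance becomes a composition-aware (Aitchison) distance; this illustrates the general compositional-data framework the abstract refers to, not Simcluster's own implementation, and the counts are hypothetical:

```python
import numpy as np

# Hypothetical transcript counts for 3 tags in 3 libraries.
counts = np.array([
    [120,  30,  50],   # library 1
    [240,  62,  98],   # library 2: ~same composition, double sequencing depth
    [ 40, 150, 110],   # library 3: genuinely different composition
], dtype=float)

def clr(x):
    """Centered log-ratio: log of each part over the geometric mean of its row."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

z = clr(counts)

# Aitchison distance = Euclidean distance in clr coordinates.
d12 = np.linalg.norm(z[0] - z[1])   # small: depth difference cancels out
d13 = np.linalg.norm(z[0] - z[2])   # large: compositions truly differ
```

A naive Euclidean distance on the raw counts would separate libraries 1 and 2 purely because of sequencing depth; clustering in clr coordinates compares compositions instead, which is the property enumeration-based expression data requires.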

Relevance: 80.00%

Abstract:

Most studies of exogenous visuospatial attention use placeholders indicating the regions where the stimuli appear on the screen. Preliminary results from our laboratory provided evidence that the attentional effect is more frequently observed when placeholders are used in these experimental procedures. Four experiments were carried out. Experiment 1 aimed to confirm the finding that the attentional effect of a spatially non-informative cue (S1) observed in the presence of placeholders disappears in their absence. The results confirmed this finding. Experiments 2, 3, and 4 examined several possible processes that could explain it. Experiment 2 investigated whether a faster disengagement of attention from the cued location or a stronger forward masking could explain the absence of the attentional effect when no placeholders were used. Experiment 3 investigated whether an increased difficulty in discriminating the target (S2) from S1 would favor the appearance of the attentional effect in the absence of placeholders. Experiment 4 investigated whether an insufficient focusing of attention towards the cued location could explain the absence of the attentional effect when no placeholders were used. The results of the three experiments indicated that placeholders act by reducing the discriminability of the S2. This would presumably lead to the adoption of an attentional set that favors the mobilization of attention by the S1.

Relevance: 80.00%

Abstract:

Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifacts. For these reasons, the assessment of detector uniformity is one of the most common activities associated with a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the determination of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1). This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to the reviewer, so that he or she may be better equipped to identify performance degradation before it manifests in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions of non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time-series floods with induced, progressively worsening defects present within the field of view. The sensitivity of conventional, global figures of merit for detecting changes in uniformity was evaluated and compared to that of these new image-space techniques. The image-space algorithms provide a reproducible means of detecting regions of non-uniformity before any single flood image has a NEMA uniformity value in excess of 5%. The sensitivity of these image-space algorithms was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures of merit demonstrated their sensitivity to shifts in detector uniformity. The image-space algorithms are computationally efficient; they should therefore be used concomitantly with the trending of the global figures of merit, in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
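As a rough illustration of the conventional global figure of merit discussed above, here is a sketch of NEMA integral uniformity computed on a synthetic flood image with an induced cold defect; the NEMA procedure's 9-point smoothing and UFOV/CFOV masking are omitted for brevity, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic flood: Poisson counts around 10,000 per pixel on a 64x64 grid.
flood = rng.poisson(lam=10_000, size=(64, 64)).astype(float)

# Induce a localized cold defect at ~90% sensitivity.
flood[20:26, 30:36] *= 0.90

def integral_uniformity(image):
    """NEMA-style integral uniformity: (max - min) / (max + min) * 100%."""
    hi, lo = image.max(), image.min()
    return (hi - lo) / (hi + lo) * 100.0

iu = integral_uniformity(flood)
```

Because this figure of merit collapses the whole image to a single number, it says nothing about where the cold region sits, which is exactly the limitation the thesis's spatially segmented, multi-resolution approach is meant to address.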

Relevance: 80.00%

Abstract:

In Part One, the foundations of Bayesian inference are reviewed, and the technicalities of the Bayesian method are illustrated. Part Two applies the Bayesian meta-analysis program, the Confidence Profile Method (CPM), to clinical trial data and evaluates the merits of using Bayesian meta-analysis for overviews of clinical trials. The Bayesian method of meta-analysis produced results similar to the classical results because of the large sample size, along with the input of a non-preferential prior probability distribution. These results were anticipated through the explanations in Part One of the mechanics of the Bayesian approach.
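A minimal fixed-effect normal sketch of why, with large pooled samples and a non-preferential prior, the Bayesian estimate matches the classical one; the CPM itself is far more general, and the per-trial effects and standard errors below are invented:

```python
import numpy as np

# Hypothetical per-trial effect estimates (log odds ratios) and standard errors.
effects = np.array([-0.12, -0.08, -0.15, -0.10])
ses     = np.array([ 0.05,  0.06,  0.04,  0.05])

def pooled_posterior(mu0, tau2):
    """Precision-weighted combination of prior N(mu0, tau2) and the trials.
    tau2 = inf gives the flat (non-preferential) prior."""
    w = 1.0 / ses ** 2
    prior_prec = 0.0 if np.isinf(tau2) else 1.0 / tau2
    prec = w.sum() + prior_prec
    mean = (np.sum(w * effects) + prior_prec * mu0) / prec
    return mean, 1.0 / prec

flat_mean, _ = pooled_posterior(0.0, np.inf)  # non-preferential prior
info_mean, _ = pooled_posterior(0.0, 0.5)     # weakly informative prior
```

The trials' combined precision dwarfs any reasonable prior precision, so the prior's pull on the pooled mean is negligible; the two posterior means agree to three decimals, mirroring the agreement between the Bayesian and classical overviews reported above.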