978 results for genetics, statistical genetics, variable models


Abstract:

Reliable evidence of trends in the illegal ivory trade is important for informing decision making for elephants, but it is difficult to obtain due to the covert nature of the trade. The Elephant Trade Information System, a global database of reported seizures of illegal ivory, holds the only extensive information available on the illicit trade. However, inherent biases in seizure data make it difficult to infer trends: countries differ in their ability to make and report seizures, and these differences cannot be directly measured. We developed a new modelling framework to provide quantitative evidence on trends in the illegal ivory trade from seizure data. The framework used Bayesian hierarchical latent variable models to reduce bias in seizure data by identifying proxy variables that describe the variability in seizure and reporting rates between countries and over time. The models produced bias-adjusted smoothed estimates of relative trends in illegal ivory activity for raw and worked ivory in three weight classes. Activity is represented by two indicators: the number of illegal ivory transactions (the Transactions Index) and the total weight of illegal ivory transactions (the Weights Index), at global, regional, or national levels. Globally, activity was found to be rapidly increasing and at its highest level in 16 years, more than doubling from 2007 to 2011 and tripling from 1998 to 2011. Over 70% of the Transactions Index comes from shipments of worked ivory weighing less than 10 kg, and the rapid increase since 2007 is mainly due to increased consumption in China. Over 70% of the Weights Index comes from shipments of raw ivory weighing at least 100 kg, mainly moving from Central and East Africa to Southeast and East Asia. The results tie together recent findings on trends in poaching rates, declining populations, and consumption, and provide detailed evidence to inform international decision making on elephants.
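
As a concrete, hedged illustration of the approach: a minimal PyMC sketch (not the authors' ETIS code; all names and toy data are invented) of a hierarchical model in which a latent yearly activity trend is separated from country-level seizure effects and a proxy for reporting effort.

```python
# Minimal sketch of a Bayesian hierarchical latent-variable bias adjustment.
# Assumptions: toy data; PyMC v4+; `proxy` stands in for seizure/reporting effort.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, n_country, n_year = 200, 10, 14
c = rng.integers(0, n_country, n)          # country of each seizure record
t = rng.integers(0, n_year, n)             # year of each seizure record
proxy = rng.normal(size=n)                 # measured proxy for reporting effort
seizures = rng.poisson(5, size=n)          # observed seizure counts (toy)

with pm.Model():
    activity = pm.Normal("activity", 0.0, 1.0, shape=n_year)       # latent trend
    country_fx = pm.Normal("country_fx", 0.0, 1.0, shape=n_country)
    beta = pm.Normal("beta", 0.0, 1.0)
    # log seizure rate = latent activity + country bias + proxy-driven effort
    log_rate = activity[t] + country_fx[c] + beta * proxy
    pm.Poisson("obs", mu=pm.math.exp(log_rate), observed=seizures)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

The posterior over `activity` then plays the role of a bias-adjusted relative trend, analogous in spirit to the Transactions Index.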

Abstract:

In this paper, we present different "frailty" models to analyze longitudinal data in the presence of covariates. These models incorporate extra-Poisson variability and the possible correlation among the repeated count data for each individual. Using a CD4 count dataset from HIV-infected patients, we develop a hierarchical Bayesian analysis of the different proposed models using Markov chain Monte Carlo methods. We also discuss some Bayesian discrimination criteria for choosing the best model.
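
A small simulation (a sketch under assumed parameters, not the paper's model) shows how a shared gamma frailty induces both the extra-Poisson variability and the within-subject correlation mentioned above.

```python
# Gamma frailty shared across an individual's repeated counts: the marginal
# variance exceeds the mean (overdispersion) and repeated counts correlate.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_visits, base_rate = 500, 4, 10.0
frailty = rng.gamma(shape=2.0, scale=0.5, size=n_subjects)      # mean 1
counts = rng.poisson(base_rate * frailty[:, None],
                     size=(n_subjects, n_visits))

print("marginal mean:", counts.mean())                  # ~ 10 (= base rate)
print("marginal variance:", counts.var())               # > mean: extra-Poisson
print("within-subject correlation:",
      np.corrcoef(counts[:, 0], counts[:, 1])[0, 1])    # > 0
```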

Abstract:

A correlation between the physicochemical properties of monovalent [Li(I), K(I), Na(I)] and divalent [Cd(II), Cu(II), Mn(II), Ni(II), Co(II), Zn(II), Mg(II), Ca(II)] metal cations and their toxicity (evaluated by the free-ion median effective concentration, EC50(F)) to the naturally bioluminescent fungus Gerronema viridilucens has been studied using the quantitative ion character-activity relationship (QICAR) approach. Among the 11 ionic parameters used in the current study, a univariate model based on the covalent index (X_m^2 r) proved the most adequate for predicting fungal metal toxicity, evaluated by the logarithm of the free-ion median effective concentration: log EC50(F) = 4.243 (±0.243) − 1.268 (±0.125) · X_m^2 r (adjusted R² = 0.9113, Akaike information criterion [AIC] = 60.42). Additional two- and three-variable models were also tested and proved less suitable for fitting the experimental data. These results indicate that covalent bonding is a good indicator of the inherent toxicity of metals to bioluminescent fungi. Furthermore, the toxicity of additional metal ions [Ag(I), Cs(I), Sr(II), Ba(II), Fe(II), Hg(II), and Pb(II)] to G. viridilucens was predicted, and Pb was found to be the most toxic metal to this bioluminescent fungus, with the following ranking by EC50(F): Pb(II) > Ag(I) > Hg(II) > Cd(II) > Cu(II) > Co(II) ≈ Ni(II) > Mn(II) > Fe(II) ≈ Zn(II) > Mg(II) ≈ Ba(II) ≈ Cs(I) > Li(I) > K(I) ≈ Na(I) ≈ Sr(II) > Ca(II). Environ. Toxicol. Chem. 2010;29:2177-2181. © 2010 SETAC
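
As a worked check, the reported univariate fit can be applied directly; the covalent-index value below is hypothetical, for illustration only.

```python
# Point-estimate form of the reported QICAR model (coefficients from the
# abstract): log EC50(F) = 4.243 - 1.268 * covalent_index.
def log_ec50(covalent_index: float) -> float:
    return 4.243 - 1.268 * covalent_index

print(log_ec50(2.0))   # hypothetical ion with covalent index 2.0 -> 1.707
```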

Abstract:

The objective of the present work is to search for a correlation between the carbon content of steels and the parameters of the rheological models used to describe material behavior during hot plastic deformation. Such a correlation can be expected in internal variable models, which are based on the physical phenomena occurring in the material. A model of this kind, using dislocation density as the internal variable, is investigated in this work. Experiments including hot torsion tests are used for the analysis.
The procedure is composed of three parts. Plastometric tests were performed for steels with various carbon contents. Optimization techniques were then applied to determine the coefficients of the internal variable rheological model for these steels. Two versions of the model are considered: one based on the average dislocation density, the other accounting for the distribution of dislocation densities. The main objective of the work was to evaluate the correlation between carbon content and model coefficients such as the activation energy for self-diffusion, the activation energy for recrystallization, the grain boundary mobility, and the recovery coefficient. As a result, a model that can be used to simulate hot forming processes for steels with various chemical compositions is proposed.
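
For orientation, a minimal sketch of a dislocation-density evolution law of the family discussed here (a Kocks-Mecking-type model; the paper's exact formulation is not reproduced, and all coefficients below are illustrative):

```python
# Internal variable model sketch: drho/deps = k1*sqrt(rho) - k2*rho,
# i.e. storage (hardening) minus recovery; flow stress via the Taylor equation.
import numpy as np

k1, k2 = 1.0e8, 10.0              # illustrative hardening/recovery coefficients
rho, d_eps = 1.0e12, 1.0e-4       # initial dislocation density [m^-2], strain step

for _ in range(10_000):           # integrate to a total strain of 1.0
    rho += d_eps * (k1 * np.sqrt(rho) - k2 * rho)

alpha, G, b = 0.3, 80e9, 2.5e-10          # illustrative Taylor constants
sigma = alpha * G * b * np.sqrt(rho)      # flow stress from dislocation density
print(f"rho ~ {rho:.2e} m^-2, flow stress ~ {sigma / 1e6:.0f} MPa")
```

The carbon-content dependence would enter through coefficients such as k1, k2, and the activation energies, which is exactly the correlation the work seeks.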

Abstract:

BACKGROUND: Muscle strengthening exercises are promoted for building and maintaining a healthy skeleton. We aimed to investigate the relationship between muscle strength and areal bone mineral density (BMD) at the hip in women aged 26-97 years.

METHODS: This cross-sectional study utilises data from 863 women assessed for the Geelong Osteoporosis Study. Measures of hip flexor and abductor strength were made using a hand-held dynamometer (Nicholas Manual Muscle Tester). The maximal measure from three trials on each leg was used for analyses. BMD was measured at the hip using dual energy x-ray absorptiometry (DXA; Lunar DPX-L). Total lean mass, body fat mass and appendicular lean mass were determined from whole body DXA scans. Linear regression techniques were used with muscle strength as the independent variable and BMD as the dependent variable. Models were adjusted for age and indices of body composition.

RESULTS: Measures of age-adjusted hip flexor strength and hip abductor strength were positively associated with total hip BMD. For each standard deviation (SD) increase in hip flexor strength, mean total hip BMD increased by 10.4% of an SD (p = 0.009). A similar pattern was observed for hip abductor strength, with an increase in mean total hip BMD of 22.8% of an SD (p = 0.025). All associations between hip muscle strength and total hip BMD were independent of height, but were nullified after adjusting for appendicular lean mass or total lean mass.

CONCLUSIONS: There was a positive association observed between muscle strength and BMD at the hip. However, this association was explained by measures of lean mass.
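
A hedged sketch of the adjustment logic (not the study's code or data): when strength merely proxies lean mass, its regression coefficient attenuates once lean mass enters the model, reproducing the pattern reported above.

```python
# Toy data in which BMD depends on lean mass only, while strength tracks
# lean mass; adjusting for lean mass then nullifies the strength coefficient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 863
lean = rng.normal(18, 3, n)                       # appendicular lean mass (kg)
strength = 0.5 * lean + rng.normal(0, 1, n)       # strength correlates with lean
age = rng.uniform(26, 97, n)
bmd = 0.9 + 0.01 * lean - 0.002 * (age - 60) + rng.normal(0, 0.05, n)
df = pd.DataFrame(dict(bmd=bmd, strength=strength, age=age, lean=lean))

m1 = smf.ols("bmd ~ strength + age", data=df).fit()          # age-adjusted
m2 = smf.ols("bmd ~ strength + age + lean", data=df).fit()   # + lean mass
print(m1.params["strength"], m2.params["strength"])          # second is ~ 0
```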

Abstract:

Restricted Boltzmann Machines (RBMs) are an important class of latent variable models for representing vector data. An under-explored area is multimode data, where each data point is a matrix or a tensor. Applying standard RBMs to such data requires vectorizing the matrices and tensors, which results in unnecessarily high dimensionality and, at the same time, destroys the inherent higher-order interaction structures. This paper introduces Tensor-variate Restricted Boltzmann Machines (TvRBMs), which generalize RBMs to capture the multiplicative interaction between data modes and the latent variables. TvRBMs are highly compact in that the number of free parameters grows only linearly with the number of modes. We demonstrate the capacity of TvRBMs on three real-world applications: handwritten digit classification, face recognition, and EEG-based alcoholic diagnosis. The learnt features of the model are more discriminative than those of rival methods, resulting in better classification performance.
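
The compactness claim can be illustrated with simple parameter counting (an assumed CP-style factorization in the spirit of TvRBMs; the paper's exact parameterization may differ):

```python
# Vectorized RBM weights grow with the product of mode sizes; a factorization
# with one small matrix per mode grows only with their sum.
import numpy as np

D = (28, 28)          # data modes (e.g., image height x width)
K, F = 500, 50        # hidden units, factorization rank (illustrative)

flat_params = int(np.prod(D)) * K        # vectorized RBM: 392,000 weights
factored_params = F * (sum(D) + K)       # mode-wise factors: 27,800 weights
print(flat_params, factored_params)
```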

Abstract:

This thesis comprises three essays. Two investigate taxation issues and the third addresses savings. Although the objects of analysis differ, all three share the application of panel-data econometric techniques to novel datasets. Two of the essays use GMM estimation in dynamic models; the remaining one applies latent-dependent-variable models. A brief summary of each essay follows, beginning with the two taxation essays, which share a common section on the ICMS (the Brazilian state-level value-added tax), and ending with the essay on savings. The first essay analyzes the importance of enforcement as an instrument to deter tax evasion and increase tax revenue for a value-added tax in the context of a developing country. The study uses data from the state of São Paulo. Dynamic panel techniques are employed to address endogeneity and inertia in the tax-revenue series. Control variables include regional GDP and two proxies for enforcement effort: the number and the value of tax fines. The results indicate a significant impact of enforcement effort on tax revenues. The essay also provides indirect evidence on how tax evasion responds to the penalties applied in cases of evasion. Its conclusions are likewise relevant to the debate on Brazilian fiscal federalism, especially in the event of a potential tax reform. The second essay examines one of the main tasks of tax administrations: the periodic selection of taxpayers for audit. Improving the efficiency of firm-selection mechanisms can raise the probability of detecting tax fraud and allocate scarce enforcement resources more effectively. The essay develops such a mechanism by estimating the evasion probability associated with each taxpayer. This is done, within the restricted universe of audited firms, by combining existing fiscal indicators and audit outcomes in latent-dependent-variable models; once the coefficients are estimated, the evasion probability is computed for the entire universe of taxpayers. The method is applied to a panel of micro-data on firms subject to ICMS within the jurisdiction of the Guarulhos Tax Office in the state of São Paulo. The third essay analyzes the low saving rates of Latin American countries in recent decades. Panel-data techniques are used to identify the determinants of the saving rate, followed by a counterfactual analysis that takes China, which has posted high saving rates over the same period, as the benchmark. Special attention is given to Brazil, which has lagged far behind its BRIC peers in this respect.
The essay contributes to the existing literature in several ways: it employs two broad datasets to analyze a wide variety of determinants of the saving rate, including demographic and social-security variables; it confirms results previously found in the literature, with the robustness afforded by richer datasets; and, for some Latin American countries, it shows that their saving rates would tend to rise if they behaved more like China in other areas, although the increase would not be dramatic.
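
A hedged sketch of the audit-selection idea in the second essay (not the thesis's code; all data and indicator names are invented): fit a latent-dependent-variable (probit) model on audited firms, then score the full universe of taxpayers.

```python
# Probit of detected evasion on fiscal indicators, estimated on audited firms,
# then used to rank all taxpayers by predicted evasion probability.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X_audited = rng.normal(size=(1000, 3))            # fiscal indicators (toy)
latent = X_audited @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=1000)
y_audited = (latent > 0).astype(int)              # evasion detected in audit

probit = sm.Probit(y_audited, sm.add_constant(X_audited)).fit(disp=0)

X_universe = rng.normal(size=(20_000, 3))         # full taxpayer universe (toy)
evasion_prob = probit.predict(sm.add_constant(X_universe))
audit_order = np.argsort(-evasion_prob)           # audit highest-risk firms first
```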

Abstract:

BACKGROUND: The prevalence of dyslipidemia has been increasing in several parts of Brazil, but it remains unclear how much physical exercise is needed to achieve beneficial effects on plasma lipoprotein levels. OBJECTIVE: This study analyzed, in eight cities of the state of São Paulo, the association between sustained physical exercise throughout life and the occurrence of dyslipidemia in adulthood. METHODS: Cross-sectional study of 2,720 adults of both sexes living in eight cities of the state of São Paulo. Through household interviews, the presence of dyslipidemia was self-reported and the practice of physical exercise was assessed in childhood (7-10 years), adolescence (11-17 years), and adulthood (leisure-time activities). For the statistical analysis, multivariate models were built with binary logistic regression. RESULTS: The prevalence of dyslipidemia was 12.2% (95% CI: 11.1%-13.5%), with no difference between cities (p = 0.443). Women (p = 0.001) and obese participants (p = 0.001) had higher rates of dyslipidemia. Current physical exercise was not associated with the presence of dyslipidemia ([≥ 180 minutes per week] p = 0.165), but physical exercise in childhood (p = 0.001) and in adolescence (p = 0.001) was associated with a lower occurrence of the disease. Adults physically active in all three periods of life were 65% less likely to report dyslipidemia (OR = 0.35 [0.15-0.78]). CONCLUSION: Sustained physical exercise throughout life was associated with a lower occurrence of dyslipidemia among adults in the state of São Paulo.
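
To make the headline figure concrete, the reported odds ratio converts to the stated risk reduction as follows:

```python
# OR = 0.35 for adults active in childhood, adolescence, and adulthood:
odds_ratio = 0.35
print(f"{(1 - odds_ratio) * 100:.0f}% lower odds of reporting dyslipidemia")  # 65%
```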

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Abstract:

The Box-Cox transformation is a technique widely used to make the probability distribution of time series data approximately normal, which helps statistical and neural models produce more accurate forecasts. However, it introduces a bias when the transformation is reversed on the predicted data. The statistical methods for a bias-free reversion necessarily assume Gaussianity of the transformed data distribution, which is rare in real-world time series. The aim of this study was therefore to provide an effective method of removing the bias introduced when the Box-Cox transformation is reversed. The developed method is based on a focused time-lagged feedforward neural network, which requires no assumption about the transformed data distribution. To evaluate the performance of the proposed method, numerical simulations were conducted, and the Mean Absolute Percentage Error, the Theil Inequality Index, and the signal-to-noise ratio of 20-step-ahead forecasts of 40 time series were compared; the results indicate that the proposed reversion method is valid and justifies further studies. © 2014 Elsevier B.V. All rights reserved.
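
The bias in question is easy to demonstrate (a sketch with simulated data, not the paper's network): forecasting the mean in Box-Cox space and inverting it naively underestimates the mean in the original space, by Jensen's inequality.

```python
# Naive reversion of the Box-Cox transformation biases the recovered mean.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(4)
y = rng.lognormal(mean=1.0, sigma=0.6, size=100_000)   # skewed "series" values
z, lam = boxcox(y)                                     # transform toward normal

naive_mean = inv_boxcox(z.mean(), lam)   # invert the mean of transformed data
print(y.mean(), naive_mean)              # naive reversion falls below true mean
```

The proposed neural reversion learns this correction from the data instead of assuming Gaussianity in the transformed space.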

Abstract:

Contemporary methods of constructing information rely on variable models, especially given the diversity of languages and platforms on offer. Among the innovations is the growing use of gamification to represent or reinforce the news, seizing the entertainment moment to increase user engagement. Based on a case study published by The New York Times to complement its coverage of the 2014 World Cup in Brazil, this research studies the importance of the interface in the broadcasting of informative content, especially in a society where tactile interaction is growing.

Abstract:

Proxy data are essential for the investigation of climate variability on time scales longer than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter to the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied for their analysis, focusing on two specific types of proxies: lake sediment data and stable water isotopes.

In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied to the calibration of various climate proxies. A particular focus is laid on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented: maxima in the quartz grain concentration from a lake sediment core are compared to recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index combining local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with a Monte Carlo method. This correlation is fundamental for employing lake sediment data as a new proxy to reconstruct windstorm records of the geological past.

The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales. In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperature. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel) are used, provided by the Weizmann Institute of Science. First, a Lagrangian moisture source diagnostic is employed to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation between relative humidity in the source regions and measured deuterium excess is found. By contrast, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although it requires confirmation by isotope data from different regions and longer time scales, this weak correlation may be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints for uncertain model parameters can be obtained.
A comparison of the simulated deuterium excess with the measurements reveals that much better agreement can be achieved using a wind-speed-independent formulation of the non-equilibrium fractionation factor instead of the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs. Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared with measurements in an event-based manner using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run at high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
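
For reference, the deuterium excess discussed throughout is the standard definition:

```latex
d = \delta D - 8\,\delta^{18}\mathrm{O}
```

It captures deviations from the global meteoric water line, which is what makes it sensitive to the non-equilibrium (kinetic) fractionation during evaporation.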

Abstract:

According to Bell's theorem, a large class of hidden-variable models obeying Bell's notion of local causality (LC) conflicts with the predictions of quantum mechanics. Recently, a Bell-type theorem was proven using a weaker notion of LC, yet assuming the existence of perfectly correlated event types. Here we present a similar Bell-type theorem without this latter assumption. The derived inequality differs from the Clauser-Horne inequality by some small correction terms, which render it less constraining.
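
For reference, the Clauser-Horne inequality against which the new inequality is compared reads, in one common form,

```latex
-1 \;\le\; p(A_1 B_1) + p(A_1 B_2) + p(A_2 B_1) - p(A_2 B_2) - p(A_1) - p(B_1) \;\le\; 0
```

where p(A_i B_j) are joint detection probabilities for the two measurement settings on each side; the correction terms mentioned in the abstract (not specified there) loosen these bounds.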

Abstract:

Characterizing the spatial scaling and dynamics of convective precipitation in mountainous terrain and the development of downscaling methods to transfer precipitation fields from one scale to another is the overall motivation for this research. Substantial progress has been made on characterizing the space-time organization of Midwestern convective systems and tropical rainfall, which has led to the development of statistical/dynamical downscaling models. Space-time analysis and downscaling of orographic precipitation has received less attention due to the complexities of topographic influences. This study uses multiscale statistical analysis to investigate the spatial scaling of organized thunderstorms that produce heavy rainfall and flooding in mountainous regions. Focus is placed on the eastern and western slopes of the Appalachian region and the Front Range of the Rocky Mountains. Parameter estimates are analyzed over time and attention is given to linking changes in the multiscale parameters with meteorological forcings and orographic influences on the rainfall. Influences of geographic regions and predominant orographic controls on trends in multiscale properties of precipitation are investigated. Spatial resolutions from 1 km to 50 km are considered. This range of spatial scales is needed to bridge typical scale gaps between distributed hydrologic models and numerical weather prediction (NWP) forecasts and attempts to address the open research problem of scaling organized thunderstorms and convection in mountainous terrain down to 1-4 km scales.
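
A generic sketch of the moment-scaling analysis this kind of study relies on (assumed methodology, with a toy random field standing in for radar rainfall):

```python
# Coarse-grain a rain field over dyadic scales and regress log moments on
# log scale; the slopes are the multiscale parameters tracked over time.
import numpy as np

rng = np.random.default_rng(5)
field = rng.gamma(0.2, 5.0, size=(512, 512))         # toy rain field (mm/h)

scales, second_moments = [], []
for k in range(6):                                    # blocks of 1, 2, ..., 32 px
    s = 2 ** k
    coarse = field.reshape(512 // s, s, 512 // s, s).mean(axis=(1, 3))
    scales.append(s)
    second_moments.append((coarse ** 2).mean())

slope = np.polyfit(np.log(scales), np.log(second_moments), 1)[0]
print("second-moment scaling exponent:", slope)
```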

Abstract:

The factorial validity of the SF-36 was evaluated using confirmatory factor analysis (CFA) methods, structural equation modeling (SEM), and multigroup structural equation modeling (MSEM). First, the measurement and structural model of the hypothesized SF-36 was explicated. Second, the model was tested for the validity of a second-order factorial structure; upon evidence of model misfit, the best-fitting model was determined, and its validity was tested on a second random sample from the same population. Third, the best-fitting model was tested for invariance of the factorial structure across race, age, and educational subgroups using MSEM.

The findings support the second-order factorial structure of the SF-36 as proposed by Ware and Sherbourne (1992). However, the results suggest that: (a) Mental Health and Physical Health covary; (b) general mental health cross-loads onto Physical Health; (c) general health perception loads onto Mental Health instead of Physical Health; (d) many of the error terms are correlated; and (e) the physical function scale is not reliable across these two samples. This hierarchical factor pattern was replicated across both samples of health care workers, suggesting that the post hoc model fitting was not data specific. Subgroup analysis suggests that the physical function scale is not reliable across the "age" or "education" subgroups and that the general mental health scale path from Mental Health is not reliable across the "white/nonwhite" or "education" subgroups.

The importance of this study lies in the use of SEM and MSEM to evaluate sample data from the SF-36. These methods are uniquely suited to the analysis of latent variable structures and are widely used in other fields. The use of latent variable models for self-reported outcome measures has become widespread and should now be applied to medical outcomes research. Invariance testing is superior to mean scores or summary scores when evaluating differences between groups. From a practical as well as a psychometric perspective, it seems imperative that construct validity research on the SF-36 establish whether this same hierarchical structure and invariance holds for other populations.

This project is presented as three articles to be submitted for publication.
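
A hedged sketch (not the dissertation's analysis) of how the hypothesized second-order structure can be written in lavaan-style syntax with the Python package semopy; the eight scale names follow SF-36 convention (PF, RP, BP, GH, VT, SF, RE, MH):

```python
# Second-order CFA: eight SF-36 scales load on two first-order factors,
# which in turn load on a single second-order health factor.
import semopy

desc = """
Physical =~ PF + RP + BP + GH
Mental   =~ VT + SF + RE + MH
Health   =~ Physical + Mental
"""
model = semopy.Model(desc)
# model.fit(df)                   # df: respondents x eight scale scores
# print(semopy.calc_stats(model)) # fit indices for the hypothesized structure
```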