874 results for Filmic approach methods


Relevance: 30.00%

Publisher:

Abstract:

Purpose To investigate whether nonhemodynamic resonant saturation effects can be detected in patients with focal epilepsy by using a phase-cycled stimulus-induced rotary saturation (PC-SIRS) approach with spin-lock (SL) preparation and whether they colocalize with the seizure onset zone and surface interictal epileptiform discharges (IED). Materials and Methods The study was approved by the local ethics committee, and all subjects gave written informed consent. Eight patients with focal epilepsy undergoing presurgical surface and intracranial electroencephalography (EEG) underwent magnetic resonance (MR) imaging at 3 T with a whole-brain PC-SIRS imaging sequence with alternating SL-on and SL-off and two-dimensional echo-planar readout. The power of the SL radiofrequency pulse was set to 120 Hz to sensitize the sequence to high gamma oscillations present in epileptogenic tissue. Phase cycling was applied to capture distributed current orientations. Voxel-wise subtraction of SL-off from SL-on images enabled the separation of T2* effects from rotary saturation effects. The topography of PC-SIRS effects was compared with the seizure onset zone at intracranial EEG and with surface IED-related potentials. Bayesian statistics were used to test whether prior PC-SIRS information could improve IED source reconstruction. Results Nonhemodynamic resonant saturation effects ipsilateral to the seizure onset zone were detected in six of eight patients (concordance rate, 0.75; 95% confidence interval: 0.40, 0.94) by means of the PC-SIRS technique. They were concordant with IED surface negativity in seven of eight patients (0.88; 95% confidence interval: 0.51, 1.00). Including PC-SIRS as prior information improved the evidence of the standard EEG source models compared with the use of uninformed reconstructions (exceedance probability, 0.77 vs 0.12; Wilcoxon test of model evidence, P < .05). 
Nonhemodynamic resonant saturation effects resolved in patients with favorable postsurgical outcomes, but persisted in patients with postsurgical seizure recurrence. Conclusion Nonhemodynamic resonant saturation effects are detectable during interictal periods with the PC-SIRS approach in patients with epilepsy. The method may be useful for MR imaging-based detection of neuronal currents in a clinical environment. © RSNA, 2016 Online supplemental material is available for this article.
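The voxel-wise separation step can be illustrated with a minimal sketch on made-up volumes (the array sizes, signal values, and focus location are all illustrative, not the study's data): SL-on and SL-off acquisitions share the same T2* weighting, so subtracting them cancels T2* and isolates the rotary-saturation contribution.

```python
import numpy as np

# Toy sketch: voxel-wise SL-off minus SL-on subtraction isolates the
# rotary-saturation effect from the shared T2* signal. All values made up.
rng = np.random.default_rng(0)

t2star = rng.uniform(0.8, 1.0, size=(4, 4, 4))   # shared T2*-weighted signal
rotary = np.zeros((4, 4, 4))
rotary[1, 1, 1] = 0.2                            # focal saturation effect

sl_off = t2star                                  # no spin-lock preparation
sl_on = t2star - rotary                          # saturation attenuates signal

effect_map = sl_off - sl_on                      # voxel-wise subtraction
focus = tuple(int(i) for i in np.unravel_index(np.argmax(effect_map),
                                               effect_map.shape))
print(focus)  # → (1, 1, 1)
```

Because the T2* term appears identically in both volumes, it cancels exactly in the difference, leaving only the spin-lock-dependent map.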


Based on an order-theoretic approach, we derive sufficient conditions for the existence, characterization, and computation of Markovian equilibrium decision processes and stationary Markov equilibrium on minimal state spaces for a large class of stochastic overlapping generations models. In contrast to all previous work, we consider reduced-form stochastic production technologies that allow for a broad set of equilibrium distortions such as public policy distortions, social security, monetary equilibrium, and production nonconvexities. Our order-based methods are constructive, and we provide monotone iterative algorithms for computing extremal stationary Markov equilibrium decision processes and equilibrium invariant distributions, while avoiding many of the problems associated with the existence of indeterminacies that have been well-documented in previous work. We provide important results for existence of Markov equilibria for the case where capital income is not increasing in the aggregate stock. Finally, we conclude with examples common in macroeconomics such as models with fiat money and social security. We also show how some of our results extend to settings with unbounded state spaces.
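The monotone iterative idea can be sketched with a toy increasing operator on a savings-policy grid (the operator below is illustrative, not the paper's reduced-form model): iterating from the least and greatest elements of the lattice converges to the extremal fixed points, which here coincide because the toy operator has a unique fixed point.

```python
import numpy as np

# Toy sketch of monotone iteration to extremal fixed points (Tarski-style).
# T is an illustrative increasing operator, not the paper's equilibrium map.
grid = np.linspace(0.0, 1.0, 101)

def T(policy):
    # increasing in `policy` pointwise; fixed point is sqrt(grid)
    return 0.5 * policy + 0.5 * np.sqrt(grid)

def iterate(start, tol=1e-12):
    cur = start
    while True:
        nxt = T(cur)
        if np.max(np.abs(nxt - cur)) < tol:
            return nxt
        cur = nxt

least = iterate(np.zeros_like(grid))    # start from the bottom element
greatest = iterate(np.ones_like(grid))  # start from the top element
print(np.allclose(least, greatest))  # → True (unique fixed point here)
```

In the paper's setting the least and greatest limits may differ; comparing them is precisely how the extremal stationary Markov equilibria are bracketed.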


Background. This study validated the content of an instrument designed to assess the performance of the medicolegal death investigation system. The instrument was modified from Version 2.0 of the Local Public Health System Performance Assessment Instrument (CDC) and is based on the 10 Essential Public Health Services. Aims. The aims were to employ a cognitive testing process to interview a randomized sample of medicolegal death investigation office leaders, qualitatively describe the results, and revise the instrument accordingly. Methods. A cognitive testing process was used to validate the survey instrument's content in terms of how well participants could respond to and interpret the questions. Twelve randomly selected medicolegal death investigation chiefs (or equivalent) who represented the seven types of medicolegal death investigation systems and six different state mandates were interviewed by telephone. The respondents were also representative of the educational diversity within medicolegal death investigation leadership. Based on respondent comments, themes were identified that permitted improvement of the instrument toward collecting valid and reliable information when ultimately used in a field survey format. Results. Responses were coded and classified, which permitted the identification of themes related to Comprehension/Interpretation, Retrieval, Estimate/Judgment, and Response. The majority of respondent comments related to Comprehension/Interpretation of the questions. Respondents identified 67 questions and 6 section explanations that merited rephrasing, or adding or deleting examples or words. In addition, five questions were added based on respondent comments. Conclusion. The content of the instrument was validated by a cognitive testing method design. The respondents agreed that the instrument would be a useful and relevant tool for assessing system performance.


MAX dimerization protein 1 (MAD1) is a basic helix-loop-helix transcription factor that recruits transcriptional repressors such as HDACs to suppress transcription of its target genes. It antagonizes MYC because promoter binding sites for MYC usually also serve as binding sites for MAD1, so the two proteins compete for them. However, the mechanism of the switch between MYC and MAD1 in turning gene transcription on and off remains obscure. In this study, we demonstrated that AKT-mediated MAD1 phosphorylation inhibits the transcription repression function of MAD1. The association between MAD1 and the promoters of its target genes is reduced after phosphorylation by AKT, which consequently allows MYC to occupy the binding sites and activate transcription. Mutation of the phosphorylation site abrogates this inhibition by AKT. In addition, functional assays demonstrated that AKT suppressed MAD1-mediated transcription repression of the MAD1 target genes hTERT and ODC. Cell cycle progression and cell growth were also released from MAD1-mediated inhibition in the presence of AKT. Taken together, our study suggests that MAD1 is a novel substrate of AKT and that AKT-mediated MAD1 phosphorylation inhibits MAD1 function, thereby activating the expression of MAD1 target genes. Furthermore, analysis of protein-protein interactions is indispensable for current molecular biology research, but multiplex protein dynamics in cells are too complicated to be analyzed with existing biochemical methods. To overcome this limitation, we developed a single-molecule-level detection system based on a nanofluidic chip. Single molecules were analyzed based on their fluorescence profiles, which were plotted into a two-dimensional time-coincident photon burst diagram (2DTP). From this 2DTP, protein complexes were characterized. These results demonstrate that the nanochannel protein detection system is a promising tool for future molecular biology.
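The 2DTP idea can be sketched with simulated photon bursts (toy count rates, not the actual nanochannel measurements): bursts from two fluorescence channels are binned into a two-dimensional histogram, and bursts bright in both channels indicate a two-colour protein complex rather than a free singly-labelled protein.

```python
import numpy as np

# Toy sketch of a 2D time-coincident photon burst diagram. Burst count
# rates (Poisson means 40 and 2) and burst numbers are made up.
rng = np.random.default_rng(8)

free_a = np.column_stack([rng.poisson(40, 300), rng.poisson(2, 300)])       # label A only
free_b = np.column_stack([rng.poisson(2, 300), rng.poisson(40, 300)])       # label B only
complex_ab = np.column_stack([rng.poisson(40, 100), rng.poisson(40, 100)])  # A-B complex
bursts = np.vstack([free_a, free_b, complex_ab])

# 2DTP: burst counts per (channel 1, channel 2) intensity bin
hist, xedges, yedges = np.histogram2d(bursts[:, 0], bursts[:, 1],
                                      bins=[np.arange(0, 71, 10)] * 2)

# bursts coincident in both channels correspond to protein complexes
coincident = bursts[(bursts[:, 0] > 15) & (bursts[:, 1] > 15)]
print(len(coincident))  # → 100
```

Only the 100 simulated complexes land in the bright-bright corner of the diagram; the 600 free-label bursts stay on the axes.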


Background. Modern-day slavery is known today as human trafficking, a growing pandemic and a grave human rights violation. Estimates suggest that 12.3 million people are working under conditions of force, fraud, or coercion. Working toward eradication is a worthy effort; it would free millions of people, mostly women and children, from slavery and uphold basic human rights. One tactic for eradicating human trafficking is to improve identification of victims by those likely to encounter them. Purpose. This study aims to develop an intervention that improves the ability of certain stakeholders in the health clinic setting to appropriately identify and report victims of human trafficking to the National Human Trafficking Resource Center. Methods. The Intervention Mapping (IM) process was used by program planners to develop an intervention for health professionals. This methodology is a six-step process that guides program planners in developing an intervention. Each step builds on the others through the execution of a needs assessment and the development of matrices based on performance objectives and determinants of the targeted health behavior. The end product is an ecological, theory- and evidence-based intervention. Discussion. The IM process served as a useful protocol for program planners to take an ecological approach and to incorporate theory and evidence into the intervention. Consultation with key informants, the planning group, adopters, implementers, and individuals responsible for institutionalization also contributed to the practicality and feasibility of the intervention. Program planners believe that this intervention fully meets recommendations set forth in the literature. Conclusions. The Intervention Mapping methodology enabled program planners to develop an intervention that is appropriate and acceptable to the implementer and the recipients.


This dissertation develops and tests a comparative effectiveness methodology utilizing a novel application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology to express a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The DEA-PerT methodology discriminates individual observations into relative risk classes. The performance of two distance measures, kNN (k-nearest neighbor) and Mahalanobis, was subsequently tested to classify new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resultant classification scheme provides descriptive, and potentially prescriptive, guidance to assess and implement treatments and strategies to improve the delivery and performance of health systems.
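The Mahalanobis classification of a new entrant can be sketched on toy two-dimensional data (the tier labels below are simulated, standing in for a DEA-derived classification): the subject is assigned to the tier whose centroid is nearer in Mahalanobis distance, which accounts for the covariance of each tier's members.

```python
import numpy as np

# Toy sketch: assign a new subject to the nearer risk tier by Mahalanobis
# distance to each tier's centroid. Tier data are simulated, not DEA output.
rng = np.random.default_rng(1)
tier1 = rng.normal([0, 0], 1.0, size=(50, 2))   # lower-risk tier members
tier2 = rng.normal([4, 4], 1.0, size=(50, 2))   # higher-risk tier members

def mahalanobis(x, data):
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

new_subject = np.array([3.5, 4.2])
tier = 1 if mahalanobis(new_subject, tier1) < mahalanobis(new_subject, tier2) else 2
print(tier)  # → 2
```

A kNN rule would instead vote among the nearest labelled subjects; the Mahalanobis rule is cheaper and uses each tier's own spread.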


Most studies of differential gene expression have been conducted between two given conditions. The two-condition experiment (TCE) approach is simple in that all genes detected display a common differential expression pattern responsive to a common two-condition difference. Consequently, genes that are differentially expressed under conditions other than the given two are undetectable with the TCE approach. To address this problem, we propose a new approach called the multiple-condition experiment (MCE) without replication and develop corresponding statistical methods, including inference of pairs of conditions for genes, new t-statistics, and a generalized multiple-testing method for any multiple-testing procedure via a control parameter C. We applied these statistical methods to our real MCE data from breast cancer cell lines and found that 85 percent of gene-expression variation was caused by genotypic effects and genotype-ANAX1 overexpression interactions, which agrees well with our expected results. We also applied our methods to the adenoma dataset of Notterman et al. and identified 93 differentially expressed genes that could not be found with TCE. The MCE approach is a conceptual breakthrough in many respects: (a) many conditions of interest can be studied simultaneously; (b) studying the association between differential expression of genes and conditions becomes easy; (c) it can provide more precise information for molecular classification and diagnosis of tumors; (d) it can save investigators considerable experimental resources and time.
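A crude sketch of the no-replication idea on simulated data (the thesis develops specialized t-statistics; this stand-in uses a plain z-score): with one observation per condition there is no within-gene replication, so a pooled residual SD across genes substitutes for it, each gene's largest deviation across conditions is tested, and a Benjamini-Hochberg step-up controls the false discovery rate.

```python
import numpy as np
from scipy import stats

# Toy sketch of multiple-condition screening without replication.
# Simulated data: 10 of 200 genes are strongly up in condition 3.
rng = np.random.default_rng(2)
n_genes, n_cond = 200, 6
expr = rng.normal(0.0, 1.0, size=(n_genes, n_cond))
expr[:10, 3] += 10.0

resid = expr - expr.mean(axis=1, keepdims=True)
pooled_sd = resid.std()                 # pooled across genes (no replicates)

z = np.abs(resid).max(axis=1) / pooled_sd
p = np.minimum(1.0, 2 * stats.norm.sf(z) * n_cond)   # Bonferroni over conditions

order = np.argsort(p)                   # Benjamini-Hochberg step-up, FDR 0.05
thresh = 0.05 * np.arange(1, n_genes + 1) / n_genes
passed = p[order] <= thresh
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
hits = set(order[:k].tolist())
print(set(range(10)) <= hits)  # → True
```

The 10 spiked genes are recovered; the maximum-deviation statistic also records which condition drives each discovery, echoing the MCE goal of associating genes with conditions.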


Statistical methods are developed which assess survival data for two attributes: (1) prolongation of life and (2) quality of life. Health state transition probabilities correspond to prolongation of life and are modeled as a discrete-time semi-Markov process. Embedded within the sojourn time of a particular health state are the quality of life transitions. They reflect events which differentiate perceptions of pain and suffering over a fixed time period. Quality of life transition probabilities are derived from the assumptions of a simple Markov process. These probabilities depend on the health state currently occupied and the next health state to which a transition is made. Utilizing the two forms of attributes, the model can estimate the distribution of expected quality-adjusted life years (in addition to the distribution of expected survival times). The expected quality of life can also be estimated within the health state sojourn time, making the assessment of utility preferences more flexible. The methods are demonstrated on a subset of follow-up data from the Beta Blocker Heart Attack Trial (BHAT). This model contains the structure necessary to make inferences when assessing a general survival problem with a two-dimensional outcome.
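The quality-adjusted survival calculation can be sketched with made-up numbers (the states, utilities, and transition matrix below are illustrative, not BHAT estimates): health-state occupancy evolves under a discrete-time transition matrix, and each cycle contributes the occupied state's quality weight, accumulating expected quality-adjusted life years.

```python
import numpy as np

# Toy sketch of expected QALYs from a discrete-time health-state model.
# All probabilities and utilities are made up.
P = np.array([[0.90, 0.08, 0.02],   # well -> well / sick / dead
              [0.20, 0.70, 0.10],   # sick -> well / sick / dead
              [0.00, 0.00, 1.00]])  # dead is absorbing
utility = np.array([1.0, 0.6, 0.0]) # quality weight per year in each state

state = np.array([1.0, 0.0, 0.0])   # cohort starts in 'well'
qaly = 0.0
for _ in range(50):                 # 50 one-year cycles
    qaly += state @ utility         # expected quality accrued this year
    state = state @ P               # advance the state distribution
print(round(qaly, 2))
```

Dropping the utility weights (setting them all to 1 for living states) would return expected survival time instead, which is the sense in which the model carries a two-dimensional outcome.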


An extension of k-ratio multiple comparison methods to rank-based analyses is described. The new method is analogous to the Duncan-Godbold approximate k-ratio procedure for unequal sample sizes or correlated means. The close parallel of the new methods to the Duncan-Godbold approach is shown by demonstrating that they are based upon different parameterizations as starting points. A semi-parametric basis for the new methods is shown by starting from the Cox proportional hazards model, using Wald statistics. From there, the log-rank and Gehan-Breslow-Wilcoxon methods may be seen as score-statistic-based methods. Simulations and analysis of a published data set are used to show the performance of the new methods.
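The Gehan-Breslow-Wilcoxon scores underlying such rank-based tests can be sketched on toy right-censored data (the times and censoring flags below are made up, not the published data set): subject i's score counts the comparisons in which i decisively outlives j, minus those in which j decisively outlives i; censored times only decide a comparison when the other subject's event time is strictly smaller.

```python
import numpy as np

# Toy sketch of Gehan scoring for right-censored survival data.
time = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0])
event = np.array([1, 1, 0, 1, 0, 1])   # 1 = death observed, 0 = censored

def definitely_greater(ti, di, tj, dj):
    # subject (ti, di) decisively outlives subject (tj, dj)
    return (ti > tj and dj == 1) or (ti == tj and dj == 1 and di == 0)

n = len(time)
scores = np.zeros(n)
for i in range(n):
    for j in range(n):
        scores[i] += definitely_greater(time[i], event[i], time[j], event[j])
        scores[i] -= definitely_greater(time[j], event[j], time[i], event[i])

print(scores.tolist())  # → [-5.0, -3.0, 2.0, 0.0, 3.0, 3.0]
```

The scores sum to zero by antisymmetry; group tests compare the sum of scores within a treatment arm to its permutation distribution.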


A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology were presented. The conditional posterior density of the intraclass correlation coefficient was then derived and estimation procedures related to this derivation were shown in detail. Three examples of applications of the conditional posterior density to specific data sets were also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient to more traditional estimators. Non-Bayesian methods of estimation used were the methods of analysis of variance and maximum likelihood for balanced data, and the methods of MIVQUE (Minimum Variance Quadratic Unbiased Estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project was that Bayesian estimates of the intraclass correlation coefficient can be appropriate, useful and practical alternatives to traditional methods of estimation.
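The contrast between the ANOVA estimator and a posterior-based estimate can be sketched on simulated balanced one-way data (a uniform prior and a grid approximation here; the dissertation's derivation may differ): for balanced data, F_obs / (1 + k·ρ/(1−ρ)) follows an F distribution with (a−1, a(k−1)) degrees of freedom, which gives a likelihood over ρ.

```python
import numpy as np
from scipy import stats

# Toy sketch: ANOVA estimator of the ICC vs. a grid-approximated posterior
# mean under a uniform prior. Data simulated with true rho = 0.4.
rng = np.random.default_rng(3)
a, k, rho_true = 30, 5, 0.4
groups = rng.normal(0, np.sqrt(rho_true), size=(a, 1))
y = groups + rng.normal(0, np.sqrt(1 - rho_true), size=(a, k))

msb = k * y.mean(axis=1).var(ddof=1)                       # between-group MS
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (a * (k - 1))
F_obs = msb / msw
icc_anova = (F_obs - 1) / (F_obs + k - 1)

rho = np.linspace(0.001, 0.999, 999)                       # grid over rho
lam = 1 + k * rho / (1 - rho)
like = stats.f.pdf(F_obs / lam, a - 1, a * (k - 1)) / lam  # likelihood of rho
post = like / like.sum()                                   # flat-prior posterior
icc_bayes = float((rho * post).sum())                      # posterior mean
print(round(icc_anova, 2), round(icc_bayes, 2))
```

With informative data the two estimates nearly coincide; the Bayesian route additionally yields interval summaries directly from the posterior grid.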


The need for timely population data for health planning and indicators of need has increased the demand for population estimates. The data required to produce estimates are difficult to obtain, and the process is time consuming. Estimation methods that require less effort and fewer data are needed. The structure preserving estimator (SPREE) is a promising technique not previously used to estimate county population characteristics. This study first uses traditional regression estimation techniques to produce estimates of county population totals. Then the structure preserving estimator, using the results produced in the first phase as constraints, is evaluated. Regression methods are among the most frequently used demographic methods for estimating populations. These methods use symptomatic indicators to predict population change. This research evaluates three regression methods to determine which produces the best estimates based on the 1970 to 1980 indicators of population change. Strategies for stratifying data to improve the ability of the methods to predict change were tested. Difference-correlation using PMSA strata produced the equation that fit the data best. Regression diagnostics were used to evaluate the residuals. The second phase of this study evaluates the use of the structure preserving estimator in making estimates of population characteristics. The SPREE estimation approach uses existing data (the association structure) to establish the relationship between the variable of interest and the associated variable(s) at the county level. Marginals at the state level (the allocation structure) supply the current relationship between the variables. The full allocation structure model uses current estimates of county population totals to limit the magnitude of county estimates. The limited full allocation structure model has no constraints on county size.
The 1970 county census age-gender population provides the association structure; the allocation structure is the 1980 state age-gender distribution. The full allocation model produces good estimates of the 1980 county age-gender populations. An unanticipated finding of this research is that the limited full allocation model produces estimates of county population totals that are superior to those produced by the regression methods. The full allocation model is used to produce estimates of 1986 county population characteristics.
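The structure-preserving idea can be sketched via iterative proportional fitting on made-up counts (three counties, two age groups; the numbers are illustrative, not census data): the census association structure supplies the cell pattern, which is raked until it matches the current marginals of the allocation structure.

```python
import numpy as np

# Toy sketch of SPREE via iterative proportional fitting: rake the census
# association structure to current marginals. All counts are made up.
assoc = np.array([[30., 20.],    # county A: young, old (base-period census)
                  [50., 40.],    # county B
                  [20., 40.]])   # county C

row_targets = np.array([60., 100., 55.])   # current county totals
col_targets = np.array([115., 100.])       # current state age marginals

est = assoc.copy()
for _ in range(200):
    est *= (row_targets / est.sum(axis=1))[:, None]   # match county totals
    est *= col_targets / est.sum(axis=0)              # match age marginals

print(np.round(est.sum(axis=0), 1))  # → [115. 100.]
```

Constraining the county totals corresponds to the full allocation model described above; dropping the row-scaling step and keeping only the state marginals mimics the limited model with no constraint on county size.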


Background and Objective. Ever since the human development index was published in 1990 by the United Nations Development Programme (UNDP), researchers have searched for and comparatively studied more effective methods to measure human development. Published in 1999, Lai's "Temporal analysis of human development indicators: principal component approach" provided a valuable statistical approach to human development analysis. The study presented in this thesis is an extension of Lai's 1999 research. Methods. I used the weighted principal component method on the human development indicators to measure and analyze the progress of human development in about 180 countries around the world from 1999 to 2010. The association between the main principal component obtained from the study and the human development index reported by the UNDP was estimated by Spearman's rank correlation coefficient. The main principal component was then further applied to quantify the temporal changes of the human development of selected countries by the proposed Z-test. Results. The weighted means of all three human development indicators, health, knowledge, and standard of living, increased from 1999 to 2010. The weighted standard deviation for GDP per capita also increased across years, indicating rising inequality of standard of living among countries. The ranking of low-development countries by the main principal component (MPC) is very similar to that by the human development index (HDI). Considerable discrepancy between MPC and HDI rankings was found among high-development countries, with high-GDP-per-capita countries shifted to higher ranks. The Spearman's rank correlation coefficients between the main principal component and the human development index were all around 0.99. All the above results were very close to the outcomes in Lai's 1999 report.
The Z-test on the temporal change of the main principal component from 1999 to 2010 was statistically significant for Qatar, but not for the other selected countries, such as Brazil, Russia, India, China, and the U.S.A. Conclusion. To synthesize the multi-dimensional measurement of human development into a single index, the weighted principal component method provides a good model, using a statistical tool for comprehensive ranking and measurement. The weighted main principal component index is more objective because it uses national populations as weights, more effective when the analysis spans time and space, and more flexible when the set of countries reporting to the system changes from year to year. Thus, the index generated using the weighted main principal component has advantages over the human development index in UNDP reports.
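The weighted principal component construction can be sketched on simulated indicator data (three toy indicators driven by one latent development level; the weights and values are made up, not UNDP data): countries are weighted by population when centering and forming the covariance matrix, and the first eigenvector gives the main principal component, which is then compared with a naive composite by Spearman rank correlation.

```python
import numpy as np
from scipy import stats

# Toy sketch: population-weighted first principal component of three
# development indicators. All data simulated.
rng = np.random.default_rng(4)
n = 40
latent = rng.normal(size=n)                        # underlying development level
X = np.column_stack([latent + rng.normal(0, 0.3, n) for _ in range(3)])
w = rng.uniform(1, 100, n)
w /= w.sum()                                       # population weights

Xc = X - w @ X                                     # weighted centering
cov = (Xc * w[:, None]).T @ Xc                     # weighted covariance matrix
vals, vecs = np.linalg.eigh(cov)
pc1 = Xc @ vecs[:, -1]                             # main principal component

composite = Xc.mean(axis=1)                        # naive equal-weight index
r, _ = stats.spearmanr(pc1, composite)
print(abs(r) > 0.9)  # → True
```

When the indicators share one dominant factor, the MPC ranking closely tracks any reasonable composite, matching the near-0.99 rank correlations reported above.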


Pancreatic cancer is the fourth most common cause of cancer death in the United States, with a five-year survival rate below 5% under current treatments, largely because it is usually detected at a late stage. Identifying a high-risk population in which to launch an effective preventive strategy and intervention to control this highly lethal disease is desperately needed. The genetic etiology of pancreatic cancer has not been well profiled. We hypothesized that genetic variants missed by previous genome-wide association studies (GWAS) of pancreatic cancer, owing to stringent statistical thresholds or the absence of interaction analyses, may be unveiled using alternative approaches. To achieve this aim, we explored genetic susceptibility to pancreatic cancer in terms of marginal associations of pathways and genes, as well as their interactions with risk factors. We conducted pathway- and gene-based analyses using GWAS data from 3141 pancreatic cancer patients and 3367 controls of European ancestry. Using the gene set ridge regression in association studies (GRASS) method, we analyzed 197 pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. Using the logistic kernel machine (LKM) test, we analyzed 17,906 genes defined by the University of California Santa Cruz (UCSC) database. Using the likelihood ratio test (LRT) in a logistic regression model, we analyzed 177 pathways and 17,906 genes for interactions with risk factors in 2028 pancreatic cancer patients and 2109 controls of European ancestry.
After adjusting for multiple comparisons, six pathways were marginally associated with risk of pancreatic cancer (P < 0.00025): Fc epsilon RI signaling, maturity onset diabetes of the young, neuroactive ligand-receptor interaction, long-term depression (Ps < 0.0002), and the olfactory transduction and vascular smooth muscle contraction pathways (P = 0.0002). Nine genes were marginally associated with pancreatic cancer risk (P < 2.62 × 10−5), including five previously reported genes (ABO, HNF1A, CLPTM1L, SHH and MYC) and four novel genes (OR13C4, OR13C3, KCNA6 and HNF4G). Three pathways significantly interacted with risk factors in modifying the risk of pancreatic cancer (P < 2.82 × 10−4): the chemokine signaling pathway with obesity (P < 1.43 × 10−4), and the calcium signaling pathway (P < 2.27 × 10−4) and MAPK signaling pathway with diabetes (P < 2.77 × 10−4). However, none of the 17,906 genes tested for interactions survived the multiple comparisons correction. In summary, our GWAS study unveiled previously unidentified genetic susceptibility to pancreatic cancer using alternative methods. These novel findings provide new perspectives on genetic susceptibility to and molecular mechanisms of pancreatic cancer and, once confirmed, will shed promising light on the prevention and treatment of this disease.
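The interaction likelihood ratio test can be sketched on simulated data (genotype and exposure frequencies and effect sizes below are made up, not the pancreatic cancer GWAS): fit logistic models with and without a genotype-by-exposure product term and refer twice the log-likelihood difference to a chi-square distribution with one degree of freedom.

```python
import numpy as np
from scipy import optimize, stats

# Toy sketch of the gene-by-risk-factor LRT in logistic regression.
# All data simulated with a true interaction of 0.6 on the log-odds scale.
rng = np.random.default_rng(5)
n = 2000
g = rng.binomial(2, 0.3, n)                 # risk-allele count (0/1/2)
e = rng.binomial(1, 0.4, n)                 # binary exposure (e.g. diabetes)
eta = -0.5 + 0.2 * g + 0.3 * e + 0.6 * g * e
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

def neg_loglik(beta, X):
    xb = X @ beta
    return np.sum(np.log1p(np.exp(xb)) - y * xb)

X_full = np.column_stack([np.ones(n), g, e, g * e])
X_null = X_full[:, :3]                      # drop the interaction term
ll_full = -optimize.minimize(neg_loglik, np.zeros(4), args=(X_full,)).fun
ll_null = -optimize.minimize(neg_loglik, np.zeros(3), args=(X_null,)).fun

lrt = 2 * (ll_full - ll_null)
p_value = stats.chi2.sf(lrt, df=1)
print(p_value < 0.05)  # → True
```

In a pathway-level test, the single product term is replaced by a block of gene-by-exposure terms and the chi-square degrees of freedom grow accordingly.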


Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess the causal effects of many genetic and environmental factors. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their functionality and advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods developed for gene-environment interaction studies to related problems such as adaptively borrowing historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis), and gene-environment interactions in the same model. It is well known that, in many practical situations, a natural hierarchical structure exists between the main effects and interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both, or at least one, of the main effects of interacting factors must be present for the interactions to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model that does not impose the hierarchical constraint and observe the superior performance of the hierarchical models in most of the considered situations. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models can incorporate useful prior information into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among identified interactions while successfully recovering reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors and a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed the advantages of using this model for detecting true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (the Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to those for gene-environment interactions in that they balance statistical efficiency and bias within a unified model.
By extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
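The strong-hierarchy constraint can be illustrated with a deliberately simplified stand-in for the Bayesian machinery (model enumeration with BIC on simulated linear data, not the dissertation's mixture model): candidate models over two main effects and their interaction are kept only if the interaction appears together with both main effects, and the best constrained model is selected.

```python
import numpy as np
from itertools import chain, combinations

# Toy sketch of the strong-hierarchy rule: an interaction term may enter
# only when both of its main effects are present. Data simulated.
rng = np.random.default_rng(6)
n = 500
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * x1 + 0.5 * x2 + 0.7 * x1 * x2 + rng.normal(0, 1, n)

terms = {'x1': x1, 'x2': x2, 'x1:x2': x1 * x2}

def strong_hierarchy(model):
    return 'x1:x2' not in model or {'x1', 'x2'} <= set(model)

def bic(model):
    X = np.column_stack([np.ones(n)] + [terms[t] for t in model])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

models = [m for m in chain.from_iterable(
              combinations(terms, k) for k in range(len(terms) + 1))
          if strong_hierarchy(m)]
best = min(models, key=bic)
print(best)  # → ('x1', 'x2', 'x1:x2')
```

The weak-hierarchy variant would relax `strong_hierarchy` to require only one parent main effect; the Bayesian mixture models above encode the same constraints through the prior rather than by enumeration.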


Physiognomic traits of plant leaves such as size, shape or margin are decisively affected by the prevailing environmental conditions of the plant habitat. Conversely, if a relationship between environment and leaf physiognomy can be shown to exist, vegetation represents a proxy for environmental conditions. This study investigates the relationship between physiognomic traits of leaves from European hardwood vegetation and environmental parameters in order to create a calibration dataset based on high-resolution grid cell data. The leaf data are obtained from synthetic chorologic floras; the environmental data comprise climatic and ecologic data. The high resolution of the data allows for a detailed analysis of the spatial dependencies between the investigated parameters. The comparison of environmental parameters and leaf physiognomic characters reveals a clear correlation between temperature-related parameters (e.g. mean annual temperature or ground frost frequency) and the expression of leaf characters (e.g. the type of leaf margin or the base of the lamina). Precipitation-related parameters (e.g. mean annual precipitation), however, show no correlation with the leaf physiognomic composition of the vegetation. On the basis of these results, transfer functions for several environmental parameters are calculated from the leaf physiognomic composition of the extant vegetation. In a next step, a cluster analysis is applied to the dataset in order to identify "leaf physiognomic communities". Several of these are distinguished, characterised and subsequently used for vegetation classification. There are distinct differences in leaf physiognomic diversity between these "leaf physiognomic classes".
Leaf physiognomic diversity clearly increases with increasing variability of the environmental parameters: northern vegetation types are characterised by a more or less homogeneous leaf physiognomic composition, whereas southern vegetation types such as the Mediterranean vegetation show considerably higher leaf physiognomic diversity. Finally, the transfer functions are used to estimate palaeo-environmental parameters for three fossil European leaf assemblages from the Late Oligocene and Middle Miocene. The results are compared with those obtained from other palaeo-environmental reconstruction methods. The estimates based on a direct linear ordination seem to be the most realistic, as they are highly consistent with the Coexistence Approach.
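A univariate transfer function of the kind described can be sketched with simulated calibration data (the leaf-margin proportions, temperatures, and fossil value below are made up, not the European calibration dataset): regress mean annual temperature on the proportion of entire-margined taxa in the extant floras, then apply the fitted function to a fossil assemblage.

```python
import numpy as np

# Toy sketch of a leaf-physiognomic transfer function. Calibration data
# simulated; the slope/intercept and fossil value are illustrative only.
rng = np.random.default_rng(7)
n = 60
margin_pct = rng.uniform(0.1, 0.9, n)                 # entire-margin share per flora
mat = 1.0 + 28.0 * margin_pct + rng.normal(0, 1, n)   # mean annual temp, deg C

slope, intercept = np.polyfit(margin_pct, mat, 1)     # fit the transfer function

fossil_margin = 0.55                                  # fossil assemblage share
mat_estimate = slope * fossil_margin + intercept      # palaeo-temperature estimate
print(round(mat_estimate, 1))
```

The study's multivariate transfer functions extend this idea to several leaf characters at once; cross-checking the estimate against an independent method (here, the Coexistence Approach) guards against no-analogue fossil floras.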