938 results for Linear Attention, Conditional Language Model, Natural Language Generation, FLAX, Rare diseases
Abstract:
It is well accepted that tumorigenesis is a multi-step process involving aberrant functioning of genes that regulate cell proliferation, differentiation, apoptosis, genome stability, angiogenesis and motility. To obtain a full understanding of tumorigenesis, it is necessary to collect information on all aspects of cell activity. Recent advances in high-throughput technologies allow biologists to generate massive amounts of data, more than might have been imagined decades ago. These advances have made it possible to launch comprehensive projects such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC), which systematically characterize the molecular fingerprints of cancer cells using gene expression, methylation, copy number, microRNA and SNP microarrays as well as next-generation sequencing assays interrogating somatic mutations, insertions, deletions, translocations and structural rearrangements. Given the massive amount of data, a major challenge is to integrate information from multiple sources and formulate testable hypotheses. This thesis focuses on developing methodologies for integrative analyses of genomic assays profiled on the same set of samples. We have developed several novel methods for integrative biomarker identification and cancer classification. We introduce a regression-based approach to identify biomarkers predictive of therapy response or survival by integrating multiple assays, including gene expression, methylation and copy number data, through penalized regression. To identify key cancer-specific genes accounting for multiple mechanisms of regulation, we have developed the integIRTy software, which provides robust and reliable inferences about gene alteration by automatically adjusting for sample heterogeneity as well as technical artifacts using Item Response Theory. To cope with the increasing need for accurate cancer diagnosis and individualized therapy, we have developed a robust and powerful algorithm called SIBER to systematically identify bimodally expressed genes using next-generation RNA-seq data. We have shown that prediction models built from these bimodal genes have the same accuracy as models built from all genes. Further, prediction models with dichotomized gene expression measurements based on their bimodal shapes still perform well. The effectiveness of outcome prediction using discretized signals paves the way for more accurate and interpretable cancer classification by integrating signals from multiple sources.
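The penalized-regression integration step can be illustrated with a short sketch. This is not the thesis's implementation: it assumes scikit-learn, synthetic data, and an elastic-net penalty standing in for whatever penalty the authors used, with hypothetical expression, methylation and copy-number matrices stacked column-wise before fitting a model of a continuous therapy-response phenotype.

```python
# Minimal sketch of integrative biomarker selection via penalized regression.
# All data, shapes and the response variable are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples = 100
expr = rng.normal(size=(n_samples, 200))   # gene expression features
meth = rng.normal(size=(n_samples, 150))   # methylation features
cnv  = rng.normal(size=(n_samples, 100))   # copy number features

# Stack the assays profiled on the same samples into one design matrix.
X = np.hstack([expr, meth, cnv])
# Hypothetical continuous therapy-response phenotype.
y = expr[:, 0] - 0.5 * meth[:, 3] + rng.normal(scale=0.5, size=n_samples)

# The elastic-net penalty shrinks most coefficients to zero, leaving a
# sparse set of candidate biomarkers drawn from all three assays.
model = ElasticNetCV(l1_ratio=0.9, cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} candidate biomarker features selected")
```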
Abstract:
Conservative procedures in low-dose risk assessment are used to set safety standards for known or suspected carcinogens. However, the assumptions upon which these methods are based and their effects are not well understood. To minimize the number of false negatives and to reduce the cost of bioassays, animals are given very high doses of potential carcinogens. Results must then be extrapolated to much smaller doses to set safety standards for risks such as one per million. There are a number of competing methods that add a conservative safety factor into these calculations. A method of quantifying the conservatism of these methods was described and tested on eight procedures used in setting low-dose safety standards. The results using these procedures were compared by computer simulation and by the use of data from a large-scale animal study. The method consisted of determining a "true safe dose" (tsd) according to an assumed underlying model. If one assumes that Y = the probability of cancer = P(d), a known mathematical function of the dose, then by setting Y to some predetermined acceptable risk, one can solve for d, the model's "true safe dose". Simulations were generated, assuming a binomial distribution, for an artificial bioassay. The eight procedures were then used to determine a "virtual safe dose" (vsd) that estimates the tsd, assuming a risk of one per million. A ratio R = (tsd - vsd)/vsd was calculated for each "experiment" (simulation). The mean R over 500 simulations and the probability that R < 0 were used to measure the over- and under-conservatism of each procedure. The eight procedures included Weil's method, Hoel's method, the Mantel-Bryan method, the improved Mantel-Bryan method, Gross's method, fitting a one-hit model, Crump's procedure, and applying Rai and Van Ryzin's method to a Weibull model. None of the procedures performed uniformly well for all types of dose-response curves. When the data were linear, the one-hit model, Hoel's method, or the Gross-Mantel method worked reasonably well. However, when the data were non-linear, these same methods were overly conservative. Crump's procedure and the Weibull model performed better in these situations.
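As a hedged illustration of the quantification scheme, the sketch below assumes a one-hit dose-response model P(d) = 1 - exp(-b*d), solves it for the "true safe dose" at a risk of one per million, simulates binomial bioassay outcomes at high test doses, and computes R = (tsd - vsd)/vsd using a deliberately naive slope estimate; none of the eight published procedures is reproduced here, and the doses and group sizes are invented.

```python
# Minimal sketch of the tsd/vsd comparison under an assumed one-hit model
# P(d) = 1 - exp(-b*d). Doses, group size and the naive fitting step are
# illustrative; none of the eight published procedures is reproduced.
import numpy as np

rng = np.random.default_rng(1)
b_true = 2.0                          # assumed one-hit slope
risk = 1e-6                           # acceptable risk: one per million
tsd = -np.log(1.0 - risk) / b_true    # "true safe dose" under the model

doses = np.array([0.1, 0.3, 0.5])     # high bioassay doses
n_per_group = 50
n_sims = 500
R = np.empty(n_sims)

for s in range(n_sims):
    p_true = 1.0 - np.exp(-b_true * doses)
    tumours = rng.binomial(n_per_group, p_true)
    p_hat = np.minimum(tumours / n_per_group, 1.0 - 0.5 / n_per_group)
    # Naive per-group slope estimates averaged into one fitted slope.
    b_hat = np.mean(-np.log(1.0 - p_hat) / doses)
    vsd = -np.log(1.0 - risk) / b_hat           # "virtual safe dose"
    R[s] = (tsd - vsd) / vsd

print(f"mean R = {R.mean():.3f},  P(R < 0) = {(R < 0).mean():.3f}")
```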
Abstract:
Radiocarbon and excess 230Th data from six NE Atlantic box cores are considered. The cores form a transect from the Porcupine Abyssal Plain over the East Thulean Rise to the southern end of Feni Drift. The chronology for the cores is established from bulk sediment carbonate radiocarbon data and reveals that sections exhibiting constant accumulation rates can be identified in all the cores, with rates of 3.0-3.5 cm/kyr on the plain through the Holocene and late Holocene rates of 4.3-6.6 cm/kyr elsewhere. Five out of the six cores show accumulations of more excess 230Th than is produced in the overlying water column, with the greatest inventories (up to 225% of production) in the cores from the rise and drift. A size fraction comparison between two cores from the plain and rise reveals that the higher overall accumulation rates and excess 230Th inventories in the off-plain cores are due to an increased fine (<5 µm) component fraction, whereas the flux of coarser material is similar to that received on the plain. This suggests that the higher fluxes of materials observed are physically (rather than biogeochemically) driven and also that drift formation has been continuously active in the late Holocene. Sections of all the cores where regular accumulation is defined by the radiocarbon data are modeled first by a linear radiocarbon age/depth model and second by a constant-rain (230Th_excess)_0 model prorated for the observed core inventories. These modeling approaches yield historical mass accumulation rate estimates which are generally in reasonable agreement (±30%), but the differences observed appear to be well organized in time rather than random.
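A minimal sketch of the first modelling approach (the linear radiocarbon age/depth model) is given below; the depths, calibrated ages and dry bulk density are hypothetical values chosen for illustration, and the constant-rain excess-230Th model is not shown.

```python
# Minimal sketch of a linear radiocarbon age/depth model.
# Depths (cm) and calibrated ages (kyr BP) are hypothetical.
import numpy as np

depth_cm = np.array([2.0, 6.0, 10.0, 14.0, 18.0])
age_kyr  = np.array([0.8, 2.1, 3.3, 4.6, 5.9])

# Least-squares fit: age = slope * depth + intercept, slope in kyr per cm.
slope, intercept = np.polyfit(depth_cm, age_kyr, 1)
rate_cm_per_kyr = 1.0 / slope
print(f"linear sedimentation rate = {rate_cm_per_kyr:.1f} cm/kyr")

# With a (hypothetical) dry bulk density, this converts to a mass
# accumulation rate, the quantity compared with the 230Th-based model.
dry_bulk_density_g_cm3 = 0.7
mar = rate_cm_per_kyr * dry_bulk_density_g_cm3
print(f"mass accumulation rate = {mar:.2f} g/cm^2/kyr")
```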
Abstract:
Blind deconvolution is the estimation of a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally the L0 “norm” is the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well the prior suits the blind deconvolution problem. Our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple Maximum a Posteriori formulation is enough to achieve state-of-the-art results. To minimize this formulation we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via the primal-dual approach and one via majorization-minimization.
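The majorization-minimization treatment of the logarithmic penalty can be sketched on a toy problem. This is not the paper's blind-deconvolution algorithm: it assumes a known operator A and a generic sparse-recovery objective, and applies the standard surrogate log(x² + ε) ≤ log(x₀² + ε) + (x² − x₀²)/(x₀² + ε), which turns each MM step into a reweighted ridge problem with a closed-form solution.

```python
# Toy majorization-minimization for a logarithmic sparsity penalty:
#   minimize 0.5*||y - A x||^2 + lam * sum(log(x_i^2 + eps))
# Each MM step solves a reweighted ridge problem in closed form.
import numpy as np

rng = np.random.default_rng(2)
n, m = 60, 100
A = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, size=5, replace=False)] = rng.normal(scale=3.0, size=5)
y = A @ x_true + 0.05 * rng.normal(size=n)

lam, eps = 0.1, 1e-3
x = np.zeros(m)
for _ in range(30):
    w = 1.0 / (x**2 + eps)          # MM weights from the current iterate
    # Closed-form minimizer of 0.5*||y - A x||^2 + lam * sum(w_i * x_i^2).
    x = np.linalg.solve(A.T @ A + 2.0 * lam * np.diag(w), A.T @ y)

print("support recovered:", np.flatnonzero(np.abs(x) > 0.5))
```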
Abstract:
It is well known that there is an intrinsic link between the financial and energy sectors, which can be analyzed through their spillover effects: measures of how shocks to returns in different assets affect each other’s subsequent volatility in both spot and futures markets. Financial derivatives, which are not only highly representative of the underlying indices but can also be traded on both the spot and futures markets, include Exchange Traded Funds (ETFs); an ETF is a tradable spot index whose aim is to replicate the return of an underlying benchmark index. When ETF futures are not available to examine spillover effects, “generated regressors” may be used to construct both Financial ETF futures and Energy ETF futures. The purpose of the paper is to investigate the covolatility spillovers within and across the US energy and financial sectors in both spot and futures markets, using “generated regressors” and a multivariate conditional volatility model, namely Diagonal BEKK. The daily data used are from 1998/12/23 to 2016/4/22. The data set is analyzed in its entirety, and also subdivided into three subset time periods. The empirical results show there is a significant relationship between the Financial ETF and Energy ETF in the spot and futures markets. Therefore, financial and energy ETFs are suitable for constructing a financial portfolio from an optimal risk management perspective, and also for dynamic hedging purposes.
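A hedged sketch of the Diagonal BEKK covariance recursion underlying the model, H_t = C C' + A ε_{t−1} ε_{t−1}' A + B H_{t−1} B with diagonal A and B, is given below. The parameter values and the two simulated return series (standing in for Financial ETF and Energy ETF returns) are illustrative, not estimates from the paper.

```python
# Minimal Diagonal BEKK(1,1) filter for two return series
# (stand-ins for Financial ETF and Energy ETF returns).
# H_t = C C' + A e_{t-1} e_{t-1}' A + B H_{t-1} B, with A and B diagonal.
import numpy as np

rng = np.random.default_rng(3)
T = 500
eps = rng.normal(scale=0.01, size=(T, 2))      # illustrative daily returns

C = np.array([[0.002, 0.0], [0.001, 0.002]])   # lower-triangular intercept
A = np.diag([0.30, 0.25])                      # diagonal ARCH loadings
B = np.diag([0.93, 0.94])                      # diagonal GARCH loadings

H = np.empty((T, 2, 2))
H[0] = np.cov(eps.T)                           # initialise at sample covariance
for t in range(1, T):
    e = eps[t - 1][:, None]                    # column vector of lagged shocks
    H[t] = C @ C.T + A @ (e @ e.T) @ A + B @ H[t - 1] @ B

# Time-varying covolatility (off-diagonal element) between the two ETFs.
covol = H[:, 0, 1]
print(f"mean conditional covariance: {covol.mean():.2e}")
```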
Abstract:
We show that quantum computation circuits using coherent states as the logical qubits can be constructed from simple linear networks, conditional photon measurements, and "small" coherent superposition resource states.
Abstract:
Dyslexia and attentional difficulty have often been linked, but little is known of the nature of the supposed attentional disorder. The Sustained Attention to Response Task (SART: Robertson, Manly, Andrade, Baddeley and Yiend, 1997) was designed as a measure of sustained attention and requires the withholding of responses to rare (one in nine) targets. To investigate the nature of the attentional disorder in dyslexia, this paper reports two studies which examined the performance of teenagers with dyslexia and their age-matched controls on the SART, the squiggle SART (a modification of the SART using novel and unlabellable stimuli rather than digits) and the go-gap-stop test of response inhibition (GGST). Teenagers with dyslexia made significantly more errors than controls on the original SART, but not the squiggle SART. There were no group differences on the GGST. After controlling for speed of reaction time in a sequential multiple regression predicting SART false alarms, false alarms on the GGST accounted for up to 22% extra variance in the control groups (although less on the squiggle SART) but negligible amounts of variance in the dyslexic groups. We interpret the results as reflecting a stimulus recognition automaticity deficit in dyslexia, rather than a sustained attention deficit. Furthermore, results suggest that response inhibition is an important component of performance on the standard SART when stimuli are recognised automatically.
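The sequential (hierarchical) multiple regression step can be sketched as follows, assuming statsmodels and simulated variables standing in for reaction time, GGST false alarms and SART false alarms; the 22% figure reported above comes from the study, not from this toy data.

```python
# Sketch of a sequential multiple regression: enter reaction time first,
# then GGST false alarms, and compare R^2. Variables are simulated stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 40
reaction_time = rng.normal(500, 60, n)              # ms, illustrative
ggst_false_alarms = rng.poisson(3, n).astype(float)
sart_false_alarms = (0.01 * reaction_time + 1.2 * ggst_false_alarms
                     + rng.normal(0, 2, n))

# Step 1: reaction time only.
X1 = sm.add_constant(reaction_time)
r2_step1 = sm.OLS(sart_false_alarms, X1).fit().rsquared

# Step 2: add GGST false alarms and measure the extra variance explained.
X2 = sm.add_constant(np.column_stack([reaction_time, ggst_false_alarms]))
r2_step2 = sm.OLS(sart_false_alarms, X2).fit().rsquared

print(f"Delta R^2 from adding GGST false alarms: {r2_step2 - r2_step1:.2f}")
```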
Abstract:
Behavioural studies on normal and brain-damaged individuals provide convincing evidence that the perception of objects results in the generation of both visual and motor signals in the brain, irrespective of whether or not there is an intention to act upon the object. In this paper we sought to determine the basis of the motor signals generated by visual objects. By examining how the properties of an object affect an observer's reaction time for judging its orientation, we provide evidence to indicate that directed visual attention is responsible for the automatic generation of motor signals associated with the spatial characteristics of perceived objects.
Abstract:
We present the prototype tool CADS* for the computer-aided development of an important class of self-* systems, namely systems whose components can be modelled as Markov chains. Given a Markov chain representation of the IT components to be included into a self-* system, CADS* automates or aids (a) the development of the artifacts necessary to build the self-* system; and (b) their integration into a fully-operational self-* solution. This is achieved through a combination of formal software development techniques including model transformation, model-driven code generation and dynamic software reconfiguration.
Abstract:
This paper investigates vertical economies between generation and distribution of electric power, and horizontal economies between different types of power generation in the U.S. electric utility industry. Our quadratic cost function model includes three generation output measures (hydro, nuclear and fossil fuels), which allows us to analyze the effect that generation mix has on vertical economies. Our results provide (sample mean) estimates of vertical economies of 8.1% and horizontal economies of 5.4%. An extensive sensitivity analysis is used to show how the scope measures vary across alternative model specifications and firm types. © 2012 Blackwell Publishing Ltd and the Editorial Board of The Journal of Industrial Economics.
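A hedged sketch of how scope measures of this kind are computed from a fitted quadratic cost function follows. The coefficients and output levels are invented for illustration, not the paper's estimates, and the exact definitions used in the paper may differ; vertical economies are taken here as [C(G, 0) + C(0, D) − C(G, D)] / C(G, D), with the generation bundle G split into hydro, nuclear and fossil outputs for the horizontal measure.

```python
# Sketch of scope-economy measures from a quadratic cost function
#   C(q) = a0 + a.q + 0.5 * q' B q,  q = (hydro, nuclear, fossil, distribution).
# Coefficients and output levels are invented; the paper's exact definitions
# and estimates may differ.
import numpy as np

a0 = 10.0
a = np.array([0.8, 1.1, 0.9, 0.7])
B = np.array([[ 0.020, -0.004, -0.003, -0.006],
              [-0.004,  0.030, -0.002, -0.005],
              [-0.003, -0.002,  0.025, -0.004],
              [-0.006, -0.005, -0.004,  0.040]])

def cost(q):
    q = np.asarray(q, dtype=float)
    return a0 + a @ q + 0.5 * q @ B @ q

q = np.array([20.0, 15.0, 30.0, 50.0])   # hydro, nuclear, fossil, distribution
gen_only  = np.array([q[0], q[1], q[2], 0.0])
dist_only = np.array([0.0, 0.0, 0.0, q[3]])

# Vertical economies: cost saving from joint generation and distribution.
VE = (cost(gen_only) + cost(dist_only) - cost(q)) / cost(q)

# Horizontal economies: cost saving from joint production of the three
# generation types (distribution held at zero in every term here).
solo = sum(cost(np.array([q[i] if i == j else 0.0 for i in range(4)]))
           for j in range(3))
HE = (solo - cost(gen_only)) / cost(gen_only)

print(f"vertical economies = {VE:.3f}, horizontal economies = {HE:.3f}")
```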
Abstract:
This paper extends the smooth transition conditional correlation model by studying, for the first time, the impact that illiquidity shocks have on stock market return comovement. We show that firms that experience shocks that increase illiquidity are less liquid than firms that experience shocks that decrease illiquidity. Shocks that increase illiquidity have no statistical impact on comovement. However, shocks that reduce illiquidity lead to a fall in comovement, a pattern that becomes stronger as the illiquidity of the firm increases. This finding is consistent with increased transparency and an improvement in price efficiency. We find that a small number of firms experience a double illiquidity shock. For these firms, at the first shock, a rise in illiquidity reduces comovement while a fall in illiquidity raises comovement. The second shock partly reverses these changes, as a rise in illiquidity is associated with a rise in comovement and a fall in illiquidity is associated with a fall in comovement. These results have important implications for portfolio construction, and also for the measurement and evolution of market beta and the cost of capital, as they suggest that investors can achieve higher returns for the same amount of market risk because of the greater diversification benefits that exist. We also find that illiquidity, friction, firm size and the pre-shock correlation are all associated with the magnitude of the correlation change. © 2013 Elsevier B.V.
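A minimal sketch of the smooth-transition idea behind the model is shown below: the conditional correlation moves between two regimes according to a logistic transition function of a transition variable, which here stands in for the timing of an illiquidity shock. All parameter values are illustrative, not estimates from the paper.

```python
# Sketch of a smooth transition in conditional correlation:
#   rho_t = (1 - G(s_t)) * rho_before + G(s_t) * rho_after,
#   G(s)  = 1 / (1 + exp(-gamma * (s - c)))
# s_t is a transition variable (e.g. time around an illiquidity shock).
import numpy as np

def transition(s, gamma, c):
    """Logistic transition function G(s) in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

T = 200
s = np.arange(T, dtype=float)          # transition variable: time index
gamma, c = 0.15, 100.0                 # speed and location of the transition
rho_before, rho_after = 0.60, 0.35     # comovement falls after the shock

G = transition(s, gamma, c)
rho_t = (1.0 - G) * rho_before + G * rho_after
print(f"correlation moves from {rho_t[0]:.2f} to {rho_t[-1]:.2f}")
```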
Abstract:
2000 Mathematics Subject Classification: 68T50.
Abstract:
Objective: The objective of the study is to explore the preferences of gastroenterologists for biosimilar drugs in Crohn’s disease and to reveal trade-offs between the perceived risks and benefits related to biosimilar drugs. Method: A discrete choice experiment was carried out involving 51 Hungarian gastroenterologists in May 2014. The following attributes were used to describe hypothetical choice sets: 1) type of the treatment (biosimilar/originator), 2) severity of disease, 3) availability of continuous medicine supply, 4) frequency of the efficacy check-ups. A multinomial logit model was used to differentiate between three attitude types: 1) always opting for the originator, 2) willing to consider the biosimilar for biological-naïve patients only, 3) willing to consider biosimilar treatment for both types of patients. A conditional logit model was used to estimate the probabilities of choosing a given profile. Results: Men, senior consultants, gastroenterologists working in an IBD center, and those treating more patients are more likely to be willing to consider the biosimilar for biological-naïve patients only. Treatment type (originator/biosimilar) was the most important determinant of choice for patients already treated with biologicals, and the availability of continuous medicine supply was the most important in the case of biological-naïve patients. The probabilities of choosing the biosimilar with all the benefits offered over the originator under current reimbursement conditions are 89% vs 11% for new patients, and 44% vs 56% for patients already treated with biologicals. Conclusions: Gastroenterologists were willing to trade between perceived risks and benefits of biosimilars. Continuous medicine supply would be one of the major benefits of biosimilars. However, the benefits offered in the scenarios do not compensate for the change from the originator to the biosimilar treatment for patients already treated with biologicals.
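The conditional-logit step, which turns attribute part-worths into choice probabilities for a given pair of profiles, can be sketched as follows; the attribute coding and coefficient values are hypothetical, not the study's estimates.

```python
# Sketch of conditional-logit choice probabilities for two hypothetical
# treatment profiles (originator vs biosimilar). Attribute coding and
# coefficients are illustrative only.
import numpy as np

# Attributes: [biosimilar indicator, continuous medicine supply,
#              frequent efficacy check-ups]
originator = np.array([0.0, 0.0, 0.0])
biosimilar = np.array([1.0, 1.0, 1.0])   # biosimilar offered with both benefits

beta = np.array([-1.2, 1.5, 0.4])        # hypothetical part-worth utilities

def choice_probabilities(profiles, beta):
    """Conditional logit: P(j) = exp(x_j'beta) / sum_k exp(x_k'beta)."""
    utilities = profiles @ beta
    expu = np.exp(utilities - utilities.max())   # numerically stable softmax
    return expu / expu.sum()

p = choice_probabilities(np.vstack([originator, biosimilar]), beta)
print(f"P(originator) = {p[0]:.2f}, P(biosimilar) = {p[1]:.2f}")
```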
Abstract:
Small errors can prove catastrophic. Our purpose is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former will produce an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that there exists a pair of test conditions that define a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device tested. The actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it puts those inputs as close as possible to the actual switching critical points and guarantees that the device will meet the input-output specifications. Prediction becomes impossible by classical analytical analysis bounded by Newton and Euclid. We have found that nonlinear dynamical behavior is the natural state of all circuits and devices. Opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays there is a set of linear limits established around every aspect of digital or analog circuits, outside of which devices are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as revealed by our Ph.D. research. In practice, for linear standard informational methodologies, this chaotic data product is usually undesirable, and we are educated to be interested in obtaining a more regular stream of output data. This Ph.D. research explored the possibility of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detector instrument able to put together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system status determination.
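The claim that small differences in initial conditions grow into large final differences can be illustrated with the classic logistic map; this example is not taken from the thesis, and the map parameter and perturbation size are arbitrary.

```python
# Illustration (not from the thesis) of sensitive dependence on initial
# conditions using the logistic map x_{n+1} = r * x_n * (1 - x_n).
r = 3.9
x, x_perturbed = 0.500000, 0.500001      # differ by one part in a million

for n in range(60):
    x = r * x * (1.0 - x)
    x_perturbed = r * x_perturbed * (1.0 - x_perturbed)

print(f"after 60 steps: |difference| = {abs(x - x_perturbed):.3f}")
```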
Abstract:
Back-reef seascapes represent critical habitat for juvenile and adult fishes. Patch reef, seagrass, and mangrove habitats form a heterogeneous mosaic, often linked by species that use reefs as structure during the day and make foraging migrations into soft-bottom habitat at night. Artificial reefs are used to model natural patch reefs; however, they may not function equivalently as fish habitat. To study the relative value of natural and artificial patch reefs as fish habitat, these communities in the Sea of Abaco, Bahamas, were compared using roving diver surveys and time-lapse photography. Diel turnover in fish abundance, recorded with time-lapse photography illuminated by infrared light, was quantified across midday, dusk, and night periods to explore possible effects of reef type (artificial vs. natural) on these patterns. Diurnal communities on natural reefs exhibited greater fish abundance, species richness, and functional diversity compared to artificial reefs. Furthermore, both types of reef communities exhibited a significant shift across the diel period, characterized by a decline in total fish density at night, especially for grunts (Haemulidae). Cross-habitat foraging migrations by diurnal or nocturnal species, such as haemulids, are likely central drivers of this twilight turnover and can represent important energy and nutrient subsidies. Time-lapse surveys provided more consistent measures of reef fish assemblages for the smaller artificial reef habitats, yet underestimated the abundance of certain taxa and species richness on larger patch habitats when compared to the roving diver surveys. Time-lapse photography complemented with infrared light represents a valuable non-invasive approach to studying the behavior of focal species and their fine-scale temporal dynamics in shallow-reef communities.