453 results for Iterative Methods
Abstract:
The numerical solution of fractional partial differential equations poses significant computational efficiency challenges owing to the spatial nonlocality of the fractional differential operators. The dense coefficient matrices that arise from spatial discretisation of these operators mean that even one-dimensional problems can be difficult to solve using standard methods on grids comprising thousands of nodes or more. In this work we address this efficiency issue for one-dimensional, nonlinear space-fractional reaction–diffusion equations with fractional Laplacian operators. We apply variable-order, variable-stepsize backward differentiation formulas in a Jacobian-free Newton–Krylov framework to advance the solution in time. A key advantage of this approach is that it eliminates any requirement to form the dense matrix representation of the fractional Laplacian operator. We show how a banded approximation to this matrix, which can be formed and factorised efficiently, can be used as part of an effective preconditioner that accelerates convergence of the Krylov subspace iterative solver. Our approach also captures the full contribution of the nonlinear reaction term in the preconditioner, which is crucial for problems that exhibit stiff reactions. Numerical examples are presented to illustrate the overall effectiveness of the solver.
Abstract:
Because many prestressed members exist in a structural system, the interdependent behavior of all prestressed members is the main concern in the analysis of the pretension process. A thorough investigation of this mutual effect is essential for an effective, reliable, and optimal analysis. Focusing on this aspect, this paper investigates the interdependent behavior of all prestressed members in the whole structural system based on the influence matrix (IFM). Four different types of IFM are introduced, and two solution methods are put forward to analyze the pretension process. The direct method solves for the exact solution, whereas the iterative method repeatedly amends the applied tensions to reach an approximate solution. A numerical example is then presented. The results show that various complicated batched and repeated tensioning schemes can be analyzed reliably, effectively, and completely based on the IFM.
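A minimal sketch of the two strategies, using a hypothetical 3-member influence matrix of my own invention (entry M[i, j] being the final force in member i per unit tension applied to member j):

```python
import numpy as np

# Hypothetical influence matrix: tensioning one member slightly relaxes
# the others (the off-diagonal interdependence the paper analyses).
M = np.array([[ 1.00, -0.15, -0.05],
              [-0.10,  1.00, -0.12],
              [-0.04, -0.08,  1.00]])
target = np.array([100.0, 120.0, 90.0])  # desired final member forces

# Direct method: solve M t = target exactly for the applied tensions.
t_direct = np.linalg.solve(M, target)

# Iterative method: repeatedly amend the applied tensions by the
# shortfall between the target and the resulting forces.
t = target.copy()
for _ in range(50):
    t = t + (target - M @ t)
```

The iteration converges here because the off-diagonal influences are small relative to the diagonal, which is also why the amendment scheme mirrors how tensions are corrected batch by batch in practice.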
Abstract:
Twitter and other social networking sites play an ever more present role in the spread of current events. The dynamics of information dissemination through digital network structures, however, remain relatively unexplored. When is an issue taken up, and by whom? Who forwards a message, when, and to whom? What role do individual communication participants, existing digital communities, or the technical foundations of each network platform play in the spread of news? In this chapter we discuss, using the example of a video on a current sociopolitical issue in Australia that was shared on Twitter, a number of new methods for the dynamic visualisation and analysis of communication processes. Our method combines temporal and spatial analytical approaches and provides new insights into the spread of news in digital networks. [Social media serve ever more frequently as dissemination mechanisms for media content. On Twitter, the retweet function in particular enables the fast and wide-reaching transfer of news. In this contribution we establish new methodological approaches for capturing, visualising and analysing retweet chains. In particular, we highlight how existing network-analysis methods can be complemented to capture the course of forwarding both temporally and spatially. Our case study demonstrates the spread of the video clip of a spontaneous angry speech delivered on 9 October 2012 by Australian Prime Minister Julia Gillard, in which she branded opposition leader Tony Abbott a misogynist. By collecting background data on the users who took part in forwarding the video clip, we build a detailed picture of the dissemination process in this case. This allows the most important actors and the sequence of forwarding to be represented and analysed, yielding insights into the general dissemination patterns of news on Twitter.]
Abstract:
In this paper, we introduce the Stochastic Adams-Bashforth (SAB) and Stochastic Adams-Moulton (SAM) methods as an extension of the tau-leaping framework to past information. Using the theta-trapezoidal tau-leap method of weak order two as a starting procedure, we show that the k-step SAB method with k >= 3 is order three in the mean and correlation, while a predictor-corrector implementation of the SAM method is weak order three in the mean but only order one in the correlation. These convergence results have been derived analytically for linear problems and successfully tested numerically for both linear and non-linear systems. A series of additional examples have been implemented in order to demonstrate the efficacy of this approach.
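The following deliberately simplified sketch (not the paper's SAB/SAM schemes, and with my own clipping of negative propensities) conveys the multistep idea: a two-step Adams-Bashforth-style extrapolation of the propensities inside an ordinary tau-leap, applied to a birth-death process:

```python
import numpy as np

# Birth-death process: a1(x) = b (birth), a2(x) = c * x (death),
# steady-state mean b / c = 100.  Many paths are leapt in parallel.
rng = np.random.default_rng(1)
b, c = 10.0, 0.1
tau, n_steps, n_paths = 0.05, 400, 2000

def propensities(x):
    return np.stack([np.full_like(x, b), c * x])

x = np.full(n_paths, 100.0)
a_prev = propensities(x)
# Bootstrap the first step with an ordinary one-step tau-leap.
x = np.maximum(x + rng.poisson(a_prev[0] * tau)
                 - rng.poisson(a_prev[1] * tau), 0.0)
for _ in range(n_steps - 1):
    a = propensities(x)
    # AB2-style extrapolated propensity, clipped to stay nonnegative
    # so the Poisson rates remain valid.
    a_ext = np.maximum(1.5 * a - 0.5 * a_prev, 0.0)
    x = np.maximum(x + rng.poisson(a_ext[0] * tau)
                     - rng.poisson(a_ext[1] * tau), 0.0)
    a_prev = a
```

The ensemble mean should hover near the analytic steady state of 100; the paper's point is that reusing past propensity information in this multistep fashion can raise the weak order of the leap.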
Abstract:
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, heterogeneity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers near equivalent answers compared with analyses of the full dataset under a controlled error rate. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
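A toy illustration of the design perspective (my construction, not an example from the paper): for a straight-line model, classical design theory places support points at the extremes of the covariate range, so a small designed sub-sample can nearly reproduce the full-data estimate:

```python
import numpy as np

# Synthetic "big" dataset: y = 2 + 3x + noise over 100,000 records.
rng = np.random.default_rng(2)
n = 100_000
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, n)

def fit_line(xs, ys):
    # Ordinary least squares for intercept and slope.
    X = np.column_stack([np.ones_like(xs), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0]

beta_full = fit_line(x, y)

# Designed sub-sample: a D-optimal design for a straight line puts
# points at the covariate extremes, so take the 250 smallest and
# 250 largest x values -- 0.5% of the data.
order = np.argsort(x)
idx = np.concatenate([order[:250], order[-250:]])
beta_design = fit_line(x[idx], y[idx])
```

The designed sub-sample recovers the slope and intercept to within sampling error while touching only a tiny fraction of the records, which is the inferential economy the paper argues for.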
Abstract:
Disclosed are methods for detecting the presence of a carcinoma, or an increased likelihood that a carcinoma is present, in a subject. More particularly, the present invention discloses methods for the diagnosis, screening, treatment and monitoring of carcinomas associated with aberrant DNA methylation of the MED15 promoter region.
Abstract:
Background Genetic testing is recommended when the probability of a disease-associated germline mutation exceeds 10%. Germline mutations are found in approximately 25% of individuals with phaeochromocytoma (PCC) or paraganglioma (PGL); however, genetic heterogeneity in PCC/PGL means many genes may require sequencing. A phenotype-directed iterative approach may limit costs but may also delay diagnosis, and will not detect mutations in genes not previously associated with PCC/PGL. Objective To assess whether whole exome sequencing (WES) was efficient and sensitive for mutation detection in PCC/PGL. Methods Whole exome sequencing was performed on blinded samples from eleven individuals with PCC/PGL and known mutations. Illumina TruSeq™ (Illumina Inc, San Diego, CA, USA) was used for exome capture of seven samples, and NimbleGen SeqCap EZ v3.0 (Roche NimbleGen Inc, Basel, Switzerland) for five samples (one sample was repeated). Massively parallel sequencing was performed on multiplexed samples. Sequencing data were called using the Genome Analysis Toolkit and annotated using ANNOVAR. Data were assessed for coding variants in RET, NF1, VHL, SDHD, SDHB, SDHC, SDHA, SDHAF2, KIF1B, TMEM127, EGLN1 and MAX. Target capture of five exome capture platforms was compared. Results Six of seven mutations were detected using Illumina TruSeq™ exome capture. All five mutations were detected using the NimbleGen SeqCap EZ v3.0 platform, including the mutation missed using Illumina TruSeq™ capture. Target capture for exons in known PCC/PGL genes differs substantially between platforms. Exome sequencing was inexpensive (<$A800 per sample for reagents) and rapid (results <5 weeks from sample reception). Conclusion Whole exome sequencing is sensitive, rapid and efficient for detection of PCC/PGL germline mutations. However, capture platform selection is critical to maximize sensitivity.
Abstract:
This workshop will provide a snapshot of Bourdieu's sociology. Recognising Bourdieu's work as a powerful theoretical instrument for examining the reproduction of social orders and cultural values, the workshop will first discuss the core concepts of habitus, capital, and field, the foundational triad of Bourdieu's sociology. Although Bourdieu's original work was built on some quantitative studies, his sociology has largely been used qualitatively in education research. In contrast to the bulk of extant research, the workshop will then showcase some quantitative and mixed-methods research that uses a Bourdieusian framework. Mindful that such a framework helps in understanding social practice at a macro level, the workshop will next attempt to think through the macro and the micro by weaving together Bourdieu's sociology with Garfinkel's ethnomethodology. The workshop will conclude with some reflections on how to better realise the full value of Bourdieu in education research.
Abstract:
Fractional differential equations are increasingly used as a powerful modelling approach for understanding the many aspects of nonlocality and spatial heterogeneity. However, the numerical approximation of these models is demanding and imposes a number of computational constraints. In this paper, we introduce Fourier spectral methods as an attractive and easy-to-code alternative for the integration of fractional-in-space reaction-diffusion equations described by the fractional Laplacian in bounded rectangular domains of R^n. The main advantages of the proposed schemes are that they yield a fully diagonal representation of the fractional operator, with increased accuracy and efficiency when compared to low-order counterparts, and a completely straightforward extension to two and three spatial dimensions. Our approach is illustrated by solving several problems of practical interest, including the fractional Allen–Cahn, FitzHugh–Nagumo and Gray–Scott models, together with an analysis of the properties of these systems in terms of the fractional power of the underlying Laplacian operator.
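The diagonal representation is the heart of the method. On a periodic grid (the paper treats bounded rectangular domains; periodicity keeps this sketch short) the fractional Laplacian reduces to multiplying each Fourier coefficient by |k|^alpha:

```python
import numpy as np

def frac_laplacian(u, alpha, L=2.0 * np.pi):
    # (-Delta)^(alpha/2) on a periodic interval of length L:
    # multiply the k-th Fourier coefficient by |k|^alpha.
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.real(np.fft.ifft(np.abs(k) ** alpha * np.fft.fft(u)))

# Check against the eigenrelation
# (-Delta)^(alpha/2) sin(kx) = k^alpha sin(kx), here with k = 3.
n, alpha = 256, 1.5
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x)
err = np.max(np.abs(frac_laplacian(u, alpha) - 3.0 ** alpha * u))
```

In a semi-implicit time step for a reaction-diffusion system, the fractional diffusion term then costs only a pointwise division in Fourier space, which is why the scheme extends so directly to two and three dimensions.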
Abstract:
Whole genome sequences are generally accepted as excellent tools for studying evolutionary relationships. Because of the problems caused by uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignments cannot be directly applied to whole-genome comparison and phylogenomic studies. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. The "distances" used in these alignment-free methods are not proper distance metrics in the strict mathematical sense. In this study, we first review them in a more general frame: dissimilarity. We then propose some new dissimilarities for phylogenetic analysis. Finally, three genome datasets are employed to evaluate these dissimilarities from a biological point of view.
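A generic example of the alignment-free idea (not one of the paper's specific dissimilarities): compare sequences through their normalised k-mer frequency vectors, with no alignment step at all:

```python
from collections import Counter
from itertools import product
import math

def kmer_freqs(seq, k=3):
    # Normalised frequency of every possible DNA k-mer in the sequence.
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    return [counts[m] / total for m in kmers]

def dissimilarity(a, b, k=3):
    # Euclidean distance between the two k-mer frequency vectors.
    fa, fb = kmer_freqs(a, k), kmer_freqs(b, k)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(fa, fb)))

s1 = "ACGTACGTACGTACGT"
s2 = "ACGTACGAACGTACGT"   # one substitution relative to s1
s3 = "TTTTGGGGCCCCAAAA"   # very different composition
```

This Euclidean form behaves as a metric on the frequency vectors themselves, but only as a pseudometric on sequences (distinct genomes can share k-mer frequencies); many other popular alignment-free dissimilarities fail even the triangle inequality, which is the point the abstract raises.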
Abstract:
In this article, natural convection boundary layer flow is investigated over a semi-infinite horizontal wavy surface. Such an irregular (wavy) surface is used to exchange heat with an external radiating fluid which obeys the Rosseland diffusion approximation. The boundary layer equations are cast into dimensionless form by introducing appropriate scaling. Primitive variable formulations (PVF) and stream function formulations (SFF) are independently used to transform the boundary layer equations into convenient forms. The equations obtained from the former formulation are integrated numerically via an implicit finite-difference iterative scheme, whereas the equations obtained from the latter formulation are simulated with the Keller-box scheme. To validate the results, the solutions produced by the two methods are compared graphically. The main parameters, the thermal radiation parameter and the amplitude of the wavy surface, are discussed systematically in terms of shear stress and rate of heat transfer. It is found that the wavy surface increases the heat transfer rate compared to a smooth wall; thus optimum heat transfer is accomplished when an irregular surface is considered. It is also established that a high amplitude of the wavy surface in the boundary layer leads to separation of the fluid from the plate.
Abstract:
In this paper the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with further alternatives, one based on a single sample of the Bayesian Net of a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single sample method and the expected sample size methods may be useful and are simpler to compute. Any method at all is more useful than none, when assessing a Bayesian Net under development, or when drawing conclusions from an ‘expert’ system.
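A toy version of the simulation method (a two-node net A -> B with made-up probabilities): generate many finite populations from the net and summarise the spread of the query probability across them:

```python
import numpy as np

# Hypothetical two-node Bayesian net: A -> B.
rng = np.random.default_rng(3)
p_a = 0.3                        # P(A = 1)
p_b_given_a = {0: 0.1, 1: 0.8}   # P(B = 1 | A)
N, n_pops = 500, 2000            # finite population size, no. of populations

estimates = np.empty(n_pops)
for i in range(n_pops):
    a = rng.random(N) < p_a
    b = np.where(a, rng.random(N) < p_b_given_a[1],
                    rng.random(N) < p_b_given_a[0])
    estimates[i] = b.mean()      # query P(B = 1) in this finite population

exact = p_a * p_b_given_a[1] + (1 - p_a) * p_b_given_a[0]   # = 0.31
lo, hi = np.percentile(estimates, [2.5, 97.5])
```

The interval (lo, hi) is the uncertainty interval in which the query probability should be embedded; shrinking it requires a larger finite population N, not more simulated populations.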
Abstract:
Background Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables, to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. Methods We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study, type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. Results Choice of imputation method depends upon the application and is not necessarily the most complex method. Mean imputation was selected as the most accurate method in this application. Conclusions Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
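A small cross-validation sketch of the selection idea (synthetic data, not the Queensland survey): hide a portion of the observed values, impute them by each candidate method, and score the methods on the held-out truth. Which method wins depends on the data; here the correlated covariate is informative, so regression imputation happens to win, whereas in the paper's application mean imputation proved the most accurate:

```python
import numpy as np

# Synthetic correlated lifestyle covariates.
rng = np.random.default_rng(4)
n = 1000
smoking = rng.normal(0.0, 1.0, n)
exercise = 0.6 * smoking + rng.normal(0.0, 0.8, n)

mask = rng.random(n) < 0.2        # pretend these values are missing
truth = exercise[mask]            # held-out ground truth

# Method 1: mean imputation from the observed values.
mean_imp = np.full(mask.sum(), exercise[~mask].mean())

# Method 2: regression imputation using the correlated covariate.
slope, intercept = np.polyfit(smoking[~mask], exercise[~mask], 1)
reg_imp = slope * smoking[mask] + intercept

mse_mean = np.mean((mean_imp - truth) ** 2)
mse_reg = np.mean((reg_imp - truth) ** 2)
```

Repeating the mask over several folds and averaging the errors gives the cross-validated score used to pick the imputation method for the final spatial model.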
Abstract:
The effects of tillage practices and methods of chemical application on atrazine and alachlor losses through run-off were evaluated for five treatments: conservation (untilled) and surface (US), disk and surface, plow and surface, disk and preplant-incorporated, and plow and preplant-incorporated. A rainfall simulator was used to create 63.5 mm h^-1 of rainfall for 60 min and 127 mm h^-1 for 15 min. Rainfall simulation occurred 24-36 h after chemical application. There was no significant difference in run-off volume among the treatments, but the untilled treatment significantly reduced erosion loss. The untilled treatments had the highest herbicide concentration, and the disk treatments were higher than the plow treatments. The surface treatments showed higher concentrations than the incorporated treatments. The concentration of herbicides in the water decreased with time. Among the experimental sites, the one with sandy loam soil produced the greatest losses, both in terms of run-off volume and herbicide loss. The US treatments had the highest loss, and the herbicide incorporation treatments had smaller losses through run-off, as the residue cover was effective in preventing herbicide losses. Incorporation might be a favorable method of herbicide application to reduce herbicide losses through run-off.