137 results for Gradient descent algorithms


Relevance:

20.00%

Publisher:

Abstract:

The positive relationship between household income and child health is well documented in the child health literature, but the precise mechanisms via which income generates better health and whether the income gradient is increasing in child age are not well understood. This paper presents new Australian evidence on the child health–income gradient. We use data from the Longitudinal Study of Australian Children (LSAC), which involved two waves of data collection for children born between March 2003 and February 2004 (B-Cohort: 0–3 years), and between March 1999 and February 2000 (K-Cohort: 4–7 years). This data set allows us to test the robustness of some of the findings of the influential studies of Case et al. [Case, A., Lubotsky, D., Paxson, C., 2002. Economic status and health in childhood: the origins of the gradient. The American Economic Review 92 (5), 1308–1344] and Currie and Stabile [Currie, J., Stabile, M., 2003. Socioeconomic status and child health: why is the relationship stronger for older children? The American Economic Review 93 (5), 1813–1823], and a recent study by Currie et al. [Currie, A., Shields, M.A., Price, S.W., 2007. The child health/family income gradient: evidence from England. Journal of Health Economics 26 (2), 213–232]. The richness of the LSAC data set also allows us to conduct further exploration of the determinants of child health. Our results reveal an increasing income gradient by child age using similar covariates to Case et al. [Case, A., Lubotsky, D., Paxson, C., 2002. Economic status and health in childhood: the origins of the gradient. The American Economic Review 92 (5), 1308–1344]. However, the income gradient disappears if we include a rich set of controls. Our results indicate that parental health and, in particular, the mother's health play a significant role, reducing the income coefficient to zero, suggesting an underlying mechanism that can explain the observed relationship between child health and family income. Overall, our results for Australian children are similar to those produced by Propper et al. [Propper, C., Rigg, J., Burgess, S., 2007. Child health: evidence on the roles of family income and maternal mental health from a UK birth cohort. Health Economics 16 (11), 1245–1269] on their British child cohort.
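As a rough sketch of the kind of specification these gradient studies estimate (the exact health measures and covariate sets differ across the papers cited above), child health is related to log household income with an income coefficient allowed to vary by age group:

$$
H_{ia} = \alpha_a + \beta_a \ln(y_i) + X_i'\gamma + \varepsilon_{ia},
$$

where $H_{ia}$ is the reported health of child $i$ in age group $a$, $y_i$ is household income and $X_i$ collects the controls. An income gradient that increases with child age corresponds to $\beta_a$ rising across successive age groups, and the gradient "disappearing" corresponds to the estimated $\beta_a$ being driven towards zero once controls such as parental health are added.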

Relevance:

20.00%

Publisher:

Abstract:

The literature to date shows that children from poorer households tend to have worse health than their peers, and that the gap between them grows with age. We investigate whether and how health shocks (as measured by the onset of chronic conditions) contribute to the income–child health gradient, and whether the contemporaneous or cumulative effects of income play important mitigating roles. We exploit a rich panel dataset with three waves, the Longitudinal Study of Australian Children. Given the availability of three waves of data, we are able to apply a range of econometric techniques (e.g. fixed and random effects) to control for unobserved heterogeneity. The paper makes several contributions to the extant literature. First, it shows that an apparent income gradient becomes relatively attenuated in our dataset when the cumulative and contemporaneous effects of household income are distinguished econometrically. Second, it demonstrates that the income–child health gradient becomes statistically insignificant when controlling for parental health and health-related behaviours or unobserved heterogeneity.
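To make the "unobserved heterogeneity" step concrete, a minimal sketch of the fixed-effects (within) estimator is given below; the variable names and simulated data are illustrative only and are not drawn from the LSAC dataset.

    import numpy as np

    def within_estimator(y, X, ids):
        """Fixed-effects (within) estimator: demean y and X within each child,
        then run OLS on the demeaned data. y: (n,), X: (n, k), ids: (n,) unit labels."""
        y = np.asarray(y, dtype=float)
        X = np.asarray(X, dtype=float)
        y_dm, X_dm = y.copy(), X.copy()
        for u in np.unique(ids):
            m = ids == u
            y_dm[m] -= y[m].mean()
            X_dm[m] -= X[m].mean(axis=0)
        beta, *_ = np.linalg.lstsq(X_dm, y_dm, rcond=None)
        return beta

    # Illustrative use: child health score on log household income over three waves.
    rng = np.random.default_rng(0)
    n_children, n_waves = 200, 3
    ids = np.repeat(np.arange(n_children), n_waves)
    child_effect = np.repeat(rng.normal(0.0, 1.0, n_children), n_waves)
    # Income is correlated with the unobserved child effect, so pooled OLS would be biased.
    log_income = 10.5 + 0.5 * child_effect + rng.normal(0.0, 0.3, ids.size)
    health = 0.3 * log_income + child_effect + rng.normal(0.0, 0.5, ids.size)
    print(within_estimator(health, log_income[:, None], ids))  # recovers roughly 0.3

A random-effects estimator instead treats the child effect as a random draw that must be uncorrelated with income; comparing the two estimators is one standard way of probing unobserved heterogeneity in panels like this.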

Relevance:

20.00%

Publisher:

Abstract:

The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives: a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved.

Relevance:

20.00%

Publisher:

Abstract:

This article aims to fill the gap in unconditionally stable, second-order accurate schemes for the time-fractional subdiffusion equation. Two fully discrete schemes are first proposed for the time-fractional subdiffusion equation, with space discretized by finite elements and time discretized by fractional linear multistep methods. These two methods are unconditionally stable with a maximum global convergence order of $O(\tau+h^{r+1})$ in the $L^2$ norm, where $\tau$ and $h$ are the step sizes in time and space, respectively, and $r$ is the degree of the piecewise polynomial space. The average convergence rates of the two methods in time are also investigated and shown to be $O(\tau^{1.5}+h^{r+1})$. Furthermore, two improved algorithms are constructed; they are also unconditionally stable and convergent of order $O(\tau^2+h^{r+1})$. Numerical examples are provided to verify the theoretical analysis. Comparisons between the present algorithms and existing ones are included, which show that our numerical algorithms exhibit better performance than the known ones.
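For context, the time-fractional subdiffusion equation referred to above typically takes the form (the notation here is generic rather than the article's exact statement)

$$
{}^{C}\!D_t^{\alpha} u(x,t) = \kappa\, \Delta u(x,t) + f(x,t), \qquad 0 < \alpha < 1,
$$

where ${}^{C}\!D_t^{\alpha}$ is the Caputo fractional derivative of order $\alpha$ in time, $\kappa > 0$ is a diffusion coefficient and $f$ is a source term. The fractional linear multistep methods discretise the fractional time derivative with step size $\tau$, while finite elements of degree $r$ on a mesh of size $h$ handle the spatial operator, which is the setting in which the $O(\tau^2+h^{r+1})$ bound is stated.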

Relevance:

20.00%

Publisher:

Abstract:

Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, and to use the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of the classic and parallel implementations on two distinct problem types.
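As a minimal illustration of the method under comparison (a generic sketch, not the implementation evaluated in this paper), a basic genetic algorithm repeatedly selects, recombines and mutates a population of candidate solutions:

    import random

    def genetic_algorithm(fitness, n_bits=32, pop_size=50, generations=200,
                          crossover_rate=0.9, mutation_rate=0.02):
        """Maximise `fitness` over fixed-length bit strings with a basic GA."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            # Tournament selection of parents.
            parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
            # One-point crossover followed by bit-flip mutation.
            children = []
            for p1, p2 in zip(parents[::2], parents[1::2]):
                if random.random() < crossover_rate:
                    cut = random.randint(1, n_bits - 1)
                    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                else:
                    c1, c2 = p1[:], p2[:]
                for c in (c1, c2):
                    for i in range(n_bits):
                        if random.random() < mutation_rate:
                            c[i] ^= 1
                    children.append(c)
            pop = children
            best = max(pop + [best], key=fitness)
        return best

    # Illustrative use: maximise the number of ones in the bit string ("OneMax").
    print(sum(genetic_algorithm(fitness=sum)))

A parallel variant typically evaluates the fitness calls concurrently or evolves several sub-populations ("islands") that periodically exchange individuals, which is where the trade-off between solution quality and computing time discussed above arises.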

Relevance:

20.00%

Publisher:

Abstract:

In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been widely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight into biological processes. A common approach is to consider genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g., based on the expression levels. In this paper, using a previously evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, “Propagate”, which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified.
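As an illustration of the neighbourhood move that “Propagate” is built around, a generic greedy sketch is given below; the `score` function is a placeholder for a bicluster quality measure and is not the paper's weighting scheme, and the recursive variant described in the paper may differ from this simple hill-climb.

    def neighbours(active, all_conditions):
        """Neighbour biclusters: solutions with one more or one fewer active condition."""
        for c in all_conditions - active:
            yield active | {c}       # activate one extra condition
        for c in active:
            yield active - {c}       # deactivate one condition

    def propagate(score, start, all_conditions):
        """Greedy search over the add/drop-one-condition neighbourhood."""
        current = frozenset(start)
        while True:
            best = max(neighbours(current, all_conditions), key=score, default=current)
            if score(best) <= score(current):
                return current
            current = frozenset(best)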

Relevance:

20.00%

Publisher:

Abstract:

This thesis in software engineering presents a novel automated framework to identify similar operations utilized by multiple algorithms for solving related computing problems. It provides a new, effective solution for multi-application algorithm analysis, employing fundamentally lightweight static analysis techniques compared with state-of-the-art approaches. Significant performance improvements are achieved across the analysed algorithms by enhancing the efficiency of the identified similar operations, targeting discrete application domains.

Relevance:

20.00%

Publisher:

Abstract:

The efficient computation of matrix function vector products has become an important area of research in recent times, driven in particular by two important applications: the numerical solution of fractional partial differential equations and the integration of large systems of ordinary differential equations. In this work we consider a problem that combines these two applications, in the form of a numerical solution algorithm for fractional reaction-diffusion equations that, after spatial discretisation, is advanced in time using the exponential Euler method. We focus on the efficient implementation of the algorithm on Graphics Processing Units (GPUs), as we wish to make use of the increased computational power available with this hardware. We compute the matrix function vector products using the contour integration method in [N. Hale, N. Higham, and L. Trefethen. Computing A^α, log(A), and related matrix functions by contour integrals. SIAM J. Numer. Anal., 46(5):2505–2523, 2008]. Multiple levels of preconditioning are applied to reduce the GPU memory footprint and to further accelerate convergence. We also derive an error bound for the convergence of the contour integral method that allows us to pre-determine the appropriate number of quadrature points. Results are presented that demonstrate the effectiveness of the method for large two-dimensional problems, showing a speedup of more than an order of magnitude compared to a CPU-only implementation.
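For reference, after spatial discretisation one obtains a semilinear system $u'(t) = A u(t) + g(u(t))$, and a single exponential Euler step with step size $\tau$ reads (generic form; the paper's precise variant may differ):

$$
u_{n+1} = e^{\tau A}\, u_n + \tau\, \varphi_1(\tau A)\, g(u_n), \qquad \varphi_1(z) = \frac{e^{z}-1}{z}.
$$

Each step therefore requires the action of matrix functions of $A$ on vectors, which is what the contour integration method evaluates and what the preconditioning and GPU implementation accelerate.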

Relevance:

20.00%

Publisher:

Abstract:

A description of a computer program to analyse cine angiograms of the heart and pressure waveforms to calculate valve gradients.

Relevance:

20.00%

Publisher:

Abstract:

Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method based on the social behaviours of birds flocking or fish schooling. Although PSO has been applied to many well-known numerical test problems, it suffers from premature convergence. A number of variants have been developed to address the premature convergence problem and to improve the quality of the solutions found by PSO. This study presents a comprehensive survey of the various PSO-based algorithms. As part of this survey, the authors include a classification of the approaches and identify the main features of each proposal. In the last part of the study, some of the topics within this field that are considered promising areas of future research are listed.
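For orientation, the canonical global-best PSO update that the surveyed variants modify is sketched below; the inertia weight and acceleration coefficients are typical textbook values rather than recommendations from this study.

    import random

    def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
        """Minimise f over [-bound, bound]^dim with a basic global-best PSO."""
        x = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
        v = [[0.0] * dim for _ in range(n_particles)]
        pbest = [xi[:] for xi in x]              # personal best positions
        gbest = min(pbest, key=f)                # global best position
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    v[i][d] = (w * v[i][d]
                               + c1 * r1 * (pbest[i][d] - x[i][d])
                               + c2 * r2 * (gbest[d] - x[i][d]))
                    x[i][d] += v[i][d]
                if f(x[i]) < f(pbest[i]):
                    pbest[i] = x[i][:]
            gbest = min(pbest, key=f)
        return gbest

    # Illustrative use: minimise the 3-dimensional sphere function.
    print(pso(lambda p: sum(c * c for c in p), dim=3))

Premature convergence corresponds to the swarm collapsing around the global best too early; most of the surveyed variants alter the inertia weight, the neighbourhood topology that defines the best position, or the velocity update itself to counteract this.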

Relevance:

20.00%

Publisher:

Abstract:

In the mining optimisation literature, most researchers have focused on two open-pit mine optimisation problems at the strategic and tactical levels, respectively termed the ultimate pit limit (UPIT) and the constrained pit limit (CPIT) problems. However, many researchers indicate that the substantial numbers of variables and constraints in real-world instances (e.g., with 50–1,000 thousand blocks) make the CPIT mixed integer programming (MIP) model intractable in practice. Thus, it is a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or on complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms, based on network flow graphs and conjunctive graph theory, are developed by taking advantage of problem properties. The performance of the proposed algorithms is validated on the recent large-scale benchmark UPIT and CPIT instances from the 2013 MineLib datasets. In comparison with the best known results from MineLib, the proposed algorithms outperform other CPIT solution approaches in the literature. The proposed graph-based algorithms lead to a more competent mine scheduling optimisation expert system, because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
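For background, the UPIT problem is the classical maximum-closure problem on the block-precedence graph and can be solved with a single minimum-cut computation. The sketch below illustrates only that well-known reduction; it is not the paper's algorithm, and the block values and precedences are invented for the example.

    import networkx as nx

    def ultimate_pit(block_value, precedences):
        """Maximum-closure formulation of UPIT solved via a minimum s-t cut.

        block_value: dict mapping block id -> economic value (positive or negative).
        precedences: iterable of (b, p) pairs meaning block b can be mined only if p is mined.
        Returns the set of blocks in the optimal pit.
        """
        G = nx.DiGraph()
        for b, v in block_value.items():
            if v > 0:
                G.add_edge("s", b, capacity=v)   # profit forgone if b is left out
            elif v < 0:
                G.add_edge(b, "t", capacity=-v)  # cost incurred if b is mined
        for b, p in precedences:
            G.add_edge(b, p)                     # no capacity attribute = infinite capacity
        _, (source_side, _) = nx.minimum_cut(G, "s", "t")
        return source_side - {"s"}

    # Tiny illustrative instance: ore block 'o' requires removing waste blocks 'w1' and 'w2'.
    values = {"o": 10.0, "w1": -3.0, "w2": -4.0}
    print(ultimate_pit(values, [("o", "w1"), ("o", "w2")]))  # {'o', 'w1', 'w2'}

CPIT adds per-period resource constraints on top of this structure, which is what makes the large MineLib instances hard and motivates the graph-based algorithms developed in the paper.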

Relevance:

20.00%

Publisher:

Abstract:

Genome-wide association studies (GWAS) have identified around 60 common variants associated with multiple sclerosis (MS), but these loci only explain a fraction of the heritability of MS. Some missing heritability may be caused by rare variants, which have been suggested to play an important role in the aetiology of complex diseases such as MS. However, current genetic and statistical methods for detecting rare variants are expensive and time consuming. 'Population-based linkage analysis' (PBLA), or so-called identity-by-descent (IBD) mapping, is a novel way to detect rare variants in extant GWAS datasets. We employed BEAGLE fastIBD to search for rare MS variants using IBD mapping in a large GWAS dataset of 3,543 cases and 5,898 controls. We identified a genome-wide significant linkage signal on chromosome 19 (LOD = 4.65; p = 1.9×10⁻⁶). Network analysis of cases and controls sharing haplotypes on chromosome 19 further strengthened the association, as there were more large networks of cases sharing haplotypes than of controls. This linkage region includes a cluster of zinc finger genes of unknown function. Analysis of genome-wide transcriptome data suggests that genes in this zinc finger cluster may be involved in very early developmental regulation of the CNS. Our study also indicates that BEAGLE fastIBD allows identification of rare variants in large unrelated populations with moderate computational intensity. Even with the development of whole-genome sequencing, IBD mapping may still be a promising way to narrow down regions of interest for sequencing priority.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study was to evaluate the mechanical triggers that may cause plaque rupture. Wall shear stress (WSS) and the pressure gradient are the direct mechanical forces acting on the plaque in a stenotic artery, and their influence on plaque stability remains controversial. This study used a physiologically realistic, pulsatile flow, two-dimensional, cine phase-contrast MRI sequence in a patient with a 70% carotid stenosis. Instead of considering the full patient-specific carotid bifurcation derived from MRI, only the plaque region was modelled, by means of an idealised flow model. WSS reached a local maximum just distal to the stenosis, followed by a negative local minimum. A pressure drop across the stenosis was found, which varied significantly during systole and diastole. The ratio of the relative importance of WSS and pressure was assessed and found to be less than 0.07% for all time phases, even at the throat of the stenosis. In conclusion, although the local high WSS at the stenosis may damage the endothelium and fissure the plaque, the magnitude of WSS is small compared with the overall loading on the plaque. Therefore, pressure may be the main mechanical trigger for plaque rupture, and risk stratification using stress analysis of plaque stability may only need to consider the pressure effect.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and to make a decision on which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and we propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predictive probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to the assessment of the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
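A minimal sketch of the kind of quantity involved, assuming for illustration a conjugate Beta-Binomial model for per-detection correctness (the paper's actual likelihood and priors may differ):

    def posterior_predictive_correct(successes, trials, alpha=1.0, beta=1.0):
        """Posterior predictive probability that the next detection is correct,
        assuming a Beta(alpha, beta) prior on the per-detection success rate and
        a Binomial likelihood for the observed test outcomes."""
        return (alpha + successes) / (alpha + beta + trials)

    # Illustrative (made-up) counts: 45/50 correct on target regions, 90/100 on background.
    p_target = posterior_predictive_correct(45, 50)
    p_background = posterior_predictive_correct(90, 100)
    print(f"P(correct | target) = {p_target:.3f}, P(correct | background) = {p_background:.3f}")

Unlike a raw accuracy figure, these predictive probabilities shrink towards the prior when the number of test images is small, so they reflect the uncertainty of a limited test series when comparing algorithms.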