992 results for Trust-region


Relevance:

60.00%

Publisher:

Abstract:

Tensor clustering is an important tool that exploits the intrinsically rich structure of real-world multiarray (tensor) datasets. Standard practice in dealing with such datasets is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structural information. In this paper, we propose a subspace clustering algorithm that avoids any vectorization step. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates; updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate the second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that the proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
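
The update of the last mode rests on the geometry of the multinomial manifold (rows constrained to the probability simplex). As a rough, hedged illustration of the first-order ingredients such a solver uses (the Fisher-metric Riemannian gradient and a simplex retraction), here is a minimal NumPy sketch with a toy quadratic cost standing in for the paper's Tucker-decomposition objective; a Riemannian trust-region method would consume the same ingredients plus a Hessian model. The target matrix T, step size and dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(4), size=10)              # hypothetical target memberships

def cost(W):
    return 0.5 * np.linalg.norm(W - T) ** 2          # toy stand-in objective

def egrad(W):
    return W - T                                     # Euclidean gradient of the toy cost

def rgrad(W, G):
    # Riemannian gradient under the Fisher metric: scale by W, then project
    # so that each row of the tangent vector sums to zero.
    S = W * G
    return S - W * S.sum(axis=1, keepdims=True)

def retract(W, V):
    # Retraction back onto the strictly positive probability simplex.
    Y = W * np.exp(V / np.maximum(W, 1e-12))
    return Y / Y.sum(axis=1, keepdims=True)

W = np.full((10, 4), 0.25)                           # uniform initial cluster memberships
for _ in range(200):                                 # plain Riemannian gradient descent
    W = retract(W, -0.5 * rgrad(W, egrad(W)))
print(round(cost(W), 4))
```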

Relevance:

60.00%

Publisher:

Abstract:

A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion, combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is first applied to find this covariance matrix using the expectation-maximization algorithm. Since the mixing coefficients of a finite mixture model are constrained to lie on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find a sparse set of mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
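
For context, here is a minimal sketch (not the authors' code) of the weighted Parzen-window estimator whose mixing coefficients the paper then sparsifies with the Riemannian trust-region method on the multinomial manifold. The training sample, the common covariance Sigma and the uniform initial weights are placeholder assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Weighted Parzen-window density: Gaussian kernels centred at the training
# points, sharing one covariance matrix, mixed with weights beta on the simplex.
# Sparsity in the paper comes from driving many beta_k to (near) zero; here we
# only evaluate the estimator for given weights.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(50, 2))                   # hypothetical training sample
Sigma = 0.3 * np.eye(2)                              # common kernel covariance

def density(x, beta):
    return sum(b * multivariate_normal.pdf(x, mean=xi, cov=Sigma)
               for b, xi in zip(beta, X_train))

beta = np.full(len(X_train), 1.0 / len(X_train))     # uniform weights = plain Parzen
print(density(np.zeros(2), beta))
```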

Relevance:

60.00%

Publisher:

Abstract:

A nonlinear programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples are exhibited showing that the new method converges to second-order stationary points in situations where first-order methods fail.
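
A hedged sketch of the outer PHR augmented Lagrangian loop on a toy inequality-constrained problem. The paper's inner solver is the second-order negative-curvature box-constrained method; here the box subproblems are handed to L-BFGS-B purely for illustration, and the problem data, penalty update and iteration counts are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal PHR augmented Lagrangian sketch for  min f(x)  s.t.  g(x) <= 0,  l <= x <= u.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: x[0] + x[1] - 2.0                      # single inequality constraint
bounds = [(0.0, 10.0), (0.0, 10.0)]

def phr(x, mu, rho):
    # PHR augmented Lagrangian term for the inequality constraint.
    return f(x) + 0.5 * rho * max(0.0, g(x) + mu / rho) ** 2

x, mu, rho = np.zeros(2), 0.0, 10.0
for _ in range(10):                                  # outer AL iterations
    x = minimize(lambda z: phr(z, mu, rho), x, bounds=bounds, method="L-BFGS-B").x
    mu = max(0.0, mu + rho * g(x))                   # first-order multiplier update
    rho *= 2.0                                       # simple penalty increase
print(x)                                             # approaches the constrained minimiser (1, 1)
```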

Relevance:

60.00%

Publisher:

Abstract:

Optimization methods that employ the classical Powell-Hestenes-Rockafellar augmented Lagrangian are useful tools for solving nonlinear programming problems. Their reputation declined over the last ten years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In this research, a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its 'pure' counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
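
As a hedged illustration of the acceleration idea (Newtonian resolution of the KKT system identified by the augmented Lagrangian phase), here is a minimal Newton iteration on the KKT system of a toy equality-constrained quadratic program. The problem, the warm start and the iteration count are illustrative assumptions; the actual Tango software is far more elaborate.

```python
import numpy as np

# Toy problem:  min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0.
# Newton's method is applied to the KKT system F(x, lam) = 0 from a warm start
# that, in the hybrid scheme, would come from the augmented Lagrangian phase.
def F(z):
    x, lam = z[:2], z[2]
    return np.array([2 * x[0] + lam, 2 * x[1] + lam, x[0] + x[1] - 1.0])

def J(z):
    # Jacobian of F: [Hessian of the Lagrangian, constraint gradient; gradient^T, 0].
    return np.array([[2.0, 0.0, 1.0],
                     [0.0, 2.0, 1.0],
                     [1.0, 1.0, 0.0]])

z = np.zeros(3)                                      # warm start (x, multiplier)
for _ in range(5):                                   # Newton iterations on the KKT system
    z = z - np.linalg.solve(J(z), F(z))
print(z)                                             # x = (0.5, 0.5), lambda = -1
```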

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEB

Relevance:

60.00%

Publisher:

Abstract:

Digital computed tomosynthesis is a technique that makes it possible to reconstruct a 3D representation of an object from a finite number of projections over a limited angular range, using conventional digital X-ray equipment. This thesis describes a mathematical model for breast image reconstruction in polyenergetic digital tomosynthesis that takes into account the variety of materials composing the object and the polyenergetic nature of the X-ray beam. Using this polyenergetic, multi-material model, the reconstruction of the tomosynthesis image was reformulated as a large-scale nonlinear least-squares problem, whose solution yields the percentages of the materials in the given volume. In the experiments, the gradient method, the Gauss-Newton method and the Gauss-Newton CGLS method were implemented. The trust-region-reflective algorithm implemented in MATLAB's lsqnonlin function was also used. The tomosynthesis image reconstruction problem was solved using these four methods and the results were compared with one another.
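
A Python analogue of the fourth solver mentioned above: scipy.optimize.least_squares with method='trf' plays the role of the trust-region-reflective algorithm in MATLAB's lsqnonlin, applied here to a toy bounded nonlinear least-squares problem. The exponential measurement model, the matrix A and the material fractions are placeholders, not the thesis' polyenergetic physics.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy analogue of the thesis' formulation: recover material fractions w in [0, 1]
# from simulated measurements y = exp(-A @ w) + noise.
rng = np.random.default_rng(2)
A = rng.uniform(0.5, 2.0, size=(30, 3))              # hypothetical attenuation matrix
w_true = np.array([0.2, 0.5, 0.3])
y = np.exp(-A @ w_true) + 1e-3 * rng.normal(size=30)

def residual(w):
    return np.exp(-A @ w) - y

# 'trf' = trust-region reflective, the scipy counterpart of lsqnonlin's default.
sol = least_squares(residual, x0=np.full(3, 1 / 3), bounds=(0.0, 1.0), method="trf")
print(sol.x)                                         # close to w_true
```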

Relevance:

60.00%

Publisher:

Abstract:

An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder. The extruder melts, mixes and pressurizes the material by the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet. The extruded section is then cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties such as temperature uniformity and residence time are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometry and of polymer material properties, the design of complex dies by analytical methods is difficult, and iterative methods must be used to optimize them. An automated iterative method is therefore desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in a commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust-region method were employed for automated optimization of die geometries. For the trust-region method, a discrete derivative and a BFGS Hessian approximation were used. To deal with the noise in the function, the trust-region method was modified to automatically adjust the discrete-derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of the velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs that exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
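
A hedged sketch of the penalised objective described above: a velocity-uniformity cost plus a penalty that grows exponentially beyond the extruder's pressure limit, in the two variants compared in the work (applied only above the limit, or on both sides of it). The stand-in functions velocity_cv and pressure, and the limit P_LIMIT, are illustrative assumptions; in the actual study these quantities come from the flow simulation.

```python
import numpy as np

P_LIMIT = 30.0                                       # hypothetical pressure limit

def velocity_cv(d):                                  # placeholder: outlet-velocity non-uniformity
    return (d[0] - 1.0) ** 2 + 0.1 * d[1] ** 2

def pressure(d):                                     # placeholder: die pressure drop
    return 20.0 + 5.0 * d[0] + 2.0 * d[1]

def objective(d, one_sided=True):
    p = pressure(d)
    if one_sided:
        # Variant 1: penalise only designs that exceed the pressure limit.
        penalty = np.exp(p - P_LIMIT) - 1.0 if p > P_LIMIT else 0.0
    else:
        # Variant 2: penalty active both above and below the limit, growing
        # exponentially with distance above it and staying mild below it.
        penalty = np.exp(p - P_LIMIT)
    return velocity_cv(d) + penalty

print(objective(np.array([1.0, 0.5])), objective(np.array([3.0, 0.5])))
```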

Relevance:

60.00%

Publisher:

Abstract:

The economic design of a distillation column or of distillation sequences is a challenging problem that has been addressed by superstructure approaches. However, these methods have not been widely used because they lead to mixed-integer nonlinear programs that are hard to solve and require complex initialization procedures. In this article, we propose to address this challenging problem by substituting the distillation columns with Kriging-based surrogate models generated from state-of-the-art distillation models. We study different columns of increasing difficulty and show that it is possible to obtain accurate Kriging-based surrogate models. The optimization strategy ensures that convergence to a local optimum is guaranteed for numerically noise-free models. For distillation columns (slightly noisy systems), the Karush-Kuhn-Tucker optimality conditions cannot be tested directly on the actual model, but we can still guarantee a local minimum in a trust region of the surrogate model that contains the actual local minimum.
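
A bare-bones sketch, under many simplifying assumptions, of a trust-region loop on a Kriging (Gaussian-process) surrogate in the spirit of the strategy above: fit the surrogate to samples of an expensive function, minimise it inside a trust region around the incumbent, and expand or shrink the region depending on whether the true evaluation improves. The function expensive(), the RBF kernel and the radius updates are placeholders for a rigorous distillation-column model and the article's actual algorithm.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive(x):                                    # stand-in for a rigorous column model
    return float((x[0] - 1.5) ** 2 + 0.3 * np.sin(5 * x[0]))

X = np.linspace(0.0, 3.0, 6).reshape(-1, 1)          # initial design of experiments
y = np.array([expensive(x) for x in X])
x_best, f_best, radius = X[np.argmin(y)].copy(), float(y.min()), 0.5

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    surrogate = lambda x: float(gp.predict(np.atleast_2d(x))[0])
    # Minimise the Kriging surrogate inside the current trust region around x_best.
    res = minimize(surrogate, x_best,
                   bounds=[(x_best[0] - radius, x_best[0] + radius)])
    f_new = expensive(res.x)                         # verify on the "actual" model
    X, y = np.vstack([X, res.x]), np.append(y, f_new)
    if f_new < f_best:                               # accept and expand, else shrink
        x_best, f_best, radius = res.x.copy(), f_new, min(2.0 * radius, 1.0)
    else:
        radius *= 0.5
print(x_best, f_best)
```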

Relevance:

60.00%

Publisher:

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
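
As a toy illustration of the third theme, the following sketch estimates a small synthetic multinomial logit model by maximum likelihood with both solver families mentioned above: a line-search quasi-Newton method (BFGS) and a trust-region method (trust-constr), via scipy. The data-generating process and the two-parameter utility are invented for illustration; the thesis' structured quasi-Newton switching methods and its recursive-logit/MEV models are much richer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(3)
N, J, K = 500, 3, 2                                  # observations, alternatives, parameters
X = rng.normal(size=(N, J, K))                       # attributes of each alternative
beta_true = np.array([1.0, -0.5])
U = X @ beta_true + rng.gumbel(size=(N, J))          # random utilities -> logit choices
choice = U.argmax(axis=1)                            # observed choices

def neg_loglik(beta):
    V = X @ beta                                     # systematic utilities, shape (N, J)
    logP = V - logsumexp(V, axis=1, keepdims=True)   # log choice probabilities
    return -logP[np.arange(N), choice].sum()

for method in ("BFGS", "trust-constr"):              # line search vs trust region
    res = minimize(neg_loglik, np.zeros(K), method=method)
    print(method, res.x)                             # both recover roughly beta_true
```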

Relevance:

60.00%

Publisher:

Abstract:

Inverse analysis for the reactive transport of chlorides through concrete in the presence of an electric field is presented. The model is solved using MATLAB's built-in solvers "pdepe.m" and "ode15s.m". The results from the model are compared with experimental measurements from an accelerated migration test, and a function representing the lack of fit is formed. This function is optimised with respect to a varying number of key parameters defining the model. A Levenberg-Marquardt trust-region optimisation approach is employed. The paper presents a method by which the degree of inter-dependency between parameters and the sensitivity (significance) of each parameter towards model predictions can be studied on models with or without clearly defined governing equations. Eigenvalue analysis of the Hessian matrix was employed to investigate and avoid over-parametrisation in the inverse analysis. We investigated the simultaneous fitting of parameters for diffusivity, chloride binding as defined by the Freundlich isotherm (thermodynamic), and the binding rate (kinetic parameter). Fitting more than two parameters simultaneously demonstrates a high degree of parameter inter-dependency. This finding is significant because mathematical models for representing chloride transport rely on several parameters for each mode of transport (i.e., diffusivity, binding, etc.), which combined may lead to unreliable simultaneous estimation of parameters.
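
A hedged sketch of the same workflow on a toy exponential-decay model: fit the parameters by Levenberg-Marquardt least squares, then inspect the eigenvalues of the Gauss-Newton Hessian J^T J at the solution, where a near-zero eigenvalue (or a very large eigenvalue ratio) flags an inter-dependent, poorly identified parameter combination. The model, data and noise level are placeholders, not the chloride-transport equations.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 40)
p_true = np.array([2.0, 0.8, 0.3])
rng = np.random.default_rng(4)
y = p_true[0] * np.exp(-p_true[1] * t) + p_true[2] + 0.01 * rng.normal(size=t.size)

def residual(p):
    # Lack-of-fit residuals for the toy model a * exp(-b * t) + c.
    return p[0] * np.exp(-p[1] * t) + p[2] - y

fit = least_squares(residual, x0=np.ones(3), method="lm")   # Levenberg-Marquardt fit
JTJ = fit.jac.T @ fit.jac                                    # Gauss-Newton Hessian approximation
eigvals = np.linalg.eigvalsh(JTJ)
print(fit.x)
print(eigvals, eigvals.max() / eigvals.min())                # large ratio -> inter-dependency
```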

Relevance:

40.00%

Publisher:

Abstract:

This report describes the Get Into Vocational Education (GIVE) pilot project run at two schools in the Rockhampton Region in 2011. The report includes a description of the project, including its aims, budget and timeline, and the findings in relation to each of the three major objectives of the project, namely to (a) build awareness of, interest in, and familiarity with trades as a future vocation and opportunity for advancement; (b) enhance literacy, numeracy and science knowledge and performance; and (c) provide motivation and engagement to stay on at school and build towards a productive future. The clear findings of the GIVE Rockhampton Region pilot project are that, for students at risk in terms of school attendance, engagement and learning: (1) awareness of trade practices in horticulture, hospitality, retail, and design and engineering, together with literacy, mathematics and science knowledge, and motivation and engagement, all improve, and in most cases improve dramatically, in the GIVE structure; and (2) the crucial factor in the GIVE structure that produces the improvement is the integration of classroom work with trades experiences, not the classroom and trades experiences themselves (although it is better if these are good).

Relevance:

40.00%

Publisher:

Abstract:

The purpose of the present study was to explore the associations between good self-rated health and economic and social factors in different regions among ageing people in the Päijät-Häme region in southern Finland. The data for this study were collected in 2002 as part of the research and development project Ikihyvä 2002–2012 (Good Ageing in Lahti region, the GOAL project). The baseline data set consisted of 2,815 participants born in 1926–30, 1936–40, and 1946–50. The response rate was 66%. According to previous studies, trust in other people and social participation, as the main aspects of social capital, are associated with self-rated health. In addition, socioeconomic position (SEP) and self-rated health are associated, but not all SEP indicators have identical associations with health. However, there is a lack of knowledge about the health associations of these factors and their regional differences, especially among ageing people. Regarding these questions, the present study provides new information. According to the results of this study, self-perceived adequacy of income was significantly associated with good self-rated health, especially in the urban areas. Similar associations were found in the rural areas, though education was also an important factor there. Adequacy of income was an even stronger predictor of good health than actual income. Women had better self-rated health than men only in the urban areas. The youngest respondents had fairly consistently better self-rated health than the others. Social participation and access to help when needed were associated with good self-rated health, especially in the urban area and the sparsely populated rural areas. The result was comparable in the rural population centres. The correlation of trust with self-rated health was significant in the urban area. High social capital was associated with good self-rated health in the urban area; the association was quite similar in the other areas, though statistically insignificant. High social capital consisted of co-existing high social participation and high trust. The association of traditionalism (low participation and high trust) with self-rated health was also substantial in the urban area. The associations of self-rated health with low social capital (low participation and low trust) and with the miniaturisation of community (high participation and low trust) were less significant. Among the forms of individual participation, going to art exhibitions, theatre, movies, and concerts among women, and studying and self-development among men, were positively related to self-rated health. Unexpectedly, among women, active participation in religious events and voluntary work was negatively associated with self-rated health; this may indicate a way of coping with ill-health. As a whole, only minor variations in self-rated health were found between the areas. However, the significance of the factors associated with self-rated health varied between the areas. Economic factors, especially self-perceived adequacy of income, were strongly associated with good self-rated health. Also, when adjusting for economic and several other background factors, social factors (particularly high social capital, social participation, and access to help when needed) were associated with self-rated health. Thus, economic and social factors have a significant relation with the health of the ageing, and improving these factors may have favourable effects on health among ageing people.

Relevance:

40.00%

Publisher:

Abstract:

This article is part of the report Cybersecurity: Are We Ready in Latin America and the Caribbean?

Relevance:

30.00%

Publisher:

Abstract:

Ulcerative colitis is a common form of inflammatory bowel disease with a complex etiology. As part of the Wellcome Trust Case Control Consortium 2, we performed a genome-wide association scan for ulcerative colitis in 2,361 cases and 5,417 controls. Loci showing evidence of association at P < 1 × 10^-5 were followed up by genotyping in an independent set of 2,321 cases and 4,818 controls. We find genome-wide significant evidence of association at three new loci, each containing at least one biologically relevant candidate gene, on chromosomes 20q13 (HNF4A; P = 3.2 × 10^-17), 16q22 (CDH1 and CDH3; P = 2.8 × 10^-8) and 7q31 (LAMB1; P = 3.0 × 10^-8). Of note, CDH1 has recently been associated with susceptibility to colorectal cancer, an established complication of longstanding ulcerative colitis. The new associations suggest that changes in the integrity of the intestinal epithelial barrier may contribute to the pathogenesis of ulcerative colitis. © 2009 Nature America, Inc. All rights reserved.