889 results for Technique de regularization


Relevance:

70.00%

Publisher:

Abstract:

In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
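
As a concrete illustration of one of these four approaches, the following is a minimal NumPy sketch (not from the paper; data, architecture, and hyperparameters are invented) of L2 regularization, i.e. weight decay, in a single-hidden-layer feed-forward network trained by batch gradient descent.

```python
# A minimal sketch (not from the paper) of L2 regularization ("weight decay")
# in a single-hidden-layer feed-forward network, trained by batch gradient
# descent on invented toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # toy inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy scalar target

W1 = rng.normal(scale=0.5, size=(5, 20)); b1 = np.zeros(20)
W2 = rng.normal(scale=0.5, size=(20, 1)); b2 = np.zeros(1)
lam, lr = 1e-3, 5e-2                               # penalty weight, step size

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                       # hidden activations
    err = (H @ W2 + b2).ravel() - y                # residuals
    # gradients of 0.5*mean(err^2) + 0.5*lam*(||W1||^2 + ||W2||^2)
    gW2 = H.T @ err[:, None] / len(y) + lam * W2
    gb2 = np.array([err.mean()])
    dH = err[:, None] @ W2.T * (1 - H ** 2)        # backprop through tanh
    gW1 = X.T @ dH / len(y) + lam * W1
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2)
print("training MSE:", float(mse))
```

The penalty term shrinks the weights toward zero, trading a small increase in training error for smoother, less complex functions; early stopping and training with noise have a comparable smoothing effect.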

Relevance:

60.00%

Publisher:

Abstract:

This doctoral thesis consists of three chapters dealing with large portfolio selection and risk measurement. The first chapter addresses the estimation-error problem in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large-portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium as a measure of pessimism and proposes a methodology for estimating the derived measures.

The first chapter improves optimal portfolio choice within the mean-variance framework of Markowitz (1952). This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem that is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. In this chapter, we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e., so as to minimize the expected loss of utility. Specifically, a cross-validation criterion taking the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the sample-based rule and the naive 1/N strategy in terms of expected utility loss and Sharpe ratio. These performances are measured in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and the empirical illustration show that regularizing the covariance matrix significantly improves the sample-based Markowitz rule and outperforms the naive portfolio, especially when the estimation-error problem is severe.

In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk, using monthly returns on 48 US industries over the period 1976-2008. To address the instability inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM) in which currency risk is decomposed into two factors representing the currencies of industrialized countries on the one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios. Moreover, these strategies lead to a significant reduction in exposure to exchange-rate risk, while the contribution of the currency risk premium remains on average unchanged. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature showing that the currency risk premium matters at the industry and country level in most countries.

In the last chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk premium derived under expected utility theory, which is frequently violated both in experimental and in real-world settings. Within the broad family of preferences considered, particular attention is paid to the CVaR (conditional value at risk). This risk measure is increasingly used for portfolio construction and is recommended as a complement to the VaR (value at risk) used since 1996 by the Basel Committee. We also provide the statistical framework needed for inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
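
To make the first chapter's idea concrete, here is a minimal NumPy sketch (my construction, not the thesis's exact procedure) of ridge regularization of the sample covariance matrix for global-minimum-variance weights; the tuning parameter is chosen here by a crude split-sample validation, standing in for the thesis's cross-validation criterion, and the return data are simulated.

```python
# Ridge-regularized covariance for global-minimum-variance (GMV) weights;
# a hedged toy sketch, not the thesis's estimator or data.
import numpy as np

def gmv_weights(returns, tau):
    """GMV weights from a ridge-regularized sample covariance."""
    S = np.cov(returns, rowvar=False)
    S_reg = S + tau * np.eye(S.shape[0])   # ridge term stabilizes the inverse
    w = np.linalg.solve(S_reg, np.ones(S.shape[0]))
    return w / w.sum()

rng = np.random.default_rng(1)
R = rng.normal(0.005, 0.05, size=(120, 48))  # 120 months of 48 simulated assets
train, test = R[:60], R[60:]

# pick tau to minimize the realized out-of-sample portfolio variance
taus = [1e-4, 1e-3, 1e-2, 1e-1]
best_tau = min(taus, key=lambda t: (test @ gmv_weights(train, t)).var())
print("selected tau:", best_tau)
print("out-of-sample variance:", float((test @ gmv_weights(train, best_tau)).var()))
```

With 48 assets and only 60 observations, the unregularized sample covariance is nearly singular, which is exactly the regime where the ridge term matters.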

Relevance:

30.00%

Publisher:

Abstract:

The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with the isotherm data fitted by regularized least squares. A rapid and simple technique for ensuring non-negativity of the solution is also developed, which corrects an initial solution containing negative values. The technique yields stable and converged solutions and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results that are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice between a relative and an absolute error norm in the least-squares analysis is best based on the kind of error in the data.
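
The overall structure of such a scheme can be sketched as follows (a hedged toy example with an invented local-isotherm kernel, not the RIDFEC implementation): the adsorption integral is discretized into a linear system A f ≈ b, and a Tikhonov-regularized least-squares fit with non-negativity is obtained by running NNLS on the augmented system [A; sqrt(lam) I] f ≈ [b; 0].

```python
# Regularized, non-negative inversion of a discretized adsorption integral;
# the kernel and data below are invented for illustration.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_p, n_r = 60, 40                       # pressure points, pore-size bins
r = np.linspace(1.0, 10.0, n_r)         # pore radii (arbitrary units)
p = np.linspace(0.01, 0.99, n_p)        # relative pressures
# toy local-isotherm kernel theta(p, r); any smooth monotone form works here
A = 1.0 / (1.0 + np.exp(-(p[:, None] - 5.0 / r[None, :]) * 10))
f_true = np.exp(-0.5 * ((r - 4.0) / 0.8) ** 2)   # "true" distribution
b = A @ f_true + 0.01 * rng.normal(size=n_p)     # noisy isotherm data

lam = 1e-2                                       # Tikhonov weight
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n_r)])
b_aug = np.concatenate([b, np.zeros(n_r)])
f_est, residual = nnls(A_aug, b_aug)             # non-negative solution
print("fit residual:", float(residual))
```

The augmentation trick enforces the Tikhonov penalty inside a standard NNLS solver, which is one simple way to combine regularization with non-negativity.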

Relevance:

30.00%

Publisher:

Abstract:

Solid-state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as ¹³C-¹³C correlation spectroscopy under magic-angle-spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
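
The following NumPy sketch illustrates the core numerical idea under invented assumptions (a random forward map and a Gaussian test distribution, not the paper's NMR kernels): Tikhonov regularization of an underdetermined 2D inverse problem in which the discrete Laplacian, rather than the identity, defines the regularization functional.

```python
# Laplacian-regularized Tikhonov inversion on a 2-D grid; forward map and
# target distribution are invented stand-ins for the NMR problem.
import numpy as np

n = 20                                   # 20x20 grid over (phi, psi)
N = n * n
I = np.eye(n)
D = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # 1-D second difference
L = np.kron(D, I) + np.kron(I, D)                      # 2-D discrete Laplacian

rng = np.random.default_rng(3)
K = rng.normal(size=(50, N))             # forward map: 50 data << 400 unknowns
xg, yg = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
g_true = np.exp(-((xg + 0.3) ** 2 + (yg - 0.2) ** 2) / 0.05).ravel()
d = K @ g_true + 0.01 * rng.normal(size=50)

lam = 1e-2                               # regularization parameter
# minimize ||K g - d||^2 + lam ||L g||^2 via the normal equations
g_est = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ d)
rel_err = np.linalg.norm(g_est - g_true) / np.linalg.norm(g_true)
print("relative error:", float(rel_err))
```

Penalizing ||L g||² favors smooth two-dimensional maps rather than small ones, which is the sense in which the Laplacian functional exploits the 2D structure of g(φ,ψ).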

Relevance:

30.00%

Publisher:

Abstract:

We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles, and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines (Vapnik, 1982) technique, and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines, and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which concern regression, and the different problem of pattern classification.
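
For orientation, here is a minimal sketch of the regularization-network side of this story (toy data, with an RBF kernel standing in for the generalized correlation function): kernel ridge regression yields dense dual coefficients; the SVM sparsification described above would replace the squared loss with an epsilon-insensitive one, zeroing most of them.

```python
# A regularization network (kernel ridge regression) in dual form;
# toy data and kernel, not the paper's correlation kernels.
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-3, 3, size=80))
y = np.sinc(x) + 0.05 * rng.normal(size=80)

def rbf_kernel(a, b, width=0.5):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

lam = 1e-2
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)  # dual coefficients
f = K @ alpha                                         # regression function
print("training RMSE:", float(np.sqrt(np.mean((f - y) ** 2))))
# Note: every alpha_i is nonzero here; an SVM-style epsilon-insensitive fit
# would retain only a sparse subset (the support vectors).
```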

Relevance:

30.00%

Publisher:

Abstract:

Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
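
A toy contrast of the two penalties (my construction, not the paper's data assimilation system) is sketched below: a signal with a sharp front is recovered from blurred, noisy observations, comparing the closed-form Tikhonov (L2) estimate with an L1 penalty on the increments of the signal (total variation), solved by ISTA.

```python
# L2 (Tikhonov) versus L1/TV regularization on a sharp-front signal;
# forward model and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
n = 100
x_true = np.zeros(n); x_true[40:] = 1.0          # step-like "front"
i = np.arange(n)
H = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)  # Gaussian blur
H /= H.sum(axis=1, keepdims=True)
y = H @ x_true + 0.01 * rng.normal(size=n)

# L2 (Tikhonov): closed form from the normal equations
lam2 = 1e-2
x_l2 = np.linalg.solve(H.T @ H + lam2 * np.eye(n), H.T @ y)

# L1/TV: reparametrize x = C u (cumulative sum), so ||u||_1 is the total
# variation of x; run ISTA on the composite operator A = H C
C = np.tril(np.ones((n, n)))
A = H @ C
lam1 = 1e-3
step = 1.0 / np.linalg.norm(A.T @ A, 2)          # 1 / Lipschitz constant
u = np.zeros(n)
for _ in range(2000):
    z = u - step * A.T @ (A @ u - y)
    u = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft threshold
x_l1 = C @ u

for name, x in [("L2", x_l2), ("L1", x_l1)]:
    print(name, "reconstruction error:", float(np.linalg.norm(x - x_true)))
```

The L2 penalty smears the jump, while the L1 penalty on increments tolerates a few large jumps, which is why it suits flows with fronts and shocks.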

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are deemed realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration.
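
A schematic sketch of the flavor of such a scheme follows (a toy linear problem, not the paper's subwatershed models or its exact weighting algorithm): Tikhonov-regularized estimation in which the regularization weight is itself adjusted, here by bisection, until the data misfit matches a target set by the assumed noise level.

```python
# Tikhonov-regularized calibration with an adaptively tuned regularization
# weight; problem, Jacobian, and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_par = 30, 60                   # more parameters than observations
J = rng.normal(size=(n_obs, n_par))     # model sensitivity (Jacobian)
p_true = rng.normal(size=n_par)
p_pref = np.zeros(n_par)                # modeler's preferred ("realistic") values
sigma = 0.05
d = J @ p_true + sigma * rng.normal(size=n_obs)

def solve(lam):
    # minimize ||J p - d||^2 + lam ||p - p_pref||^2
    p = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T @ d + lam * p_pref)
    return p, float(np.linalg.norm(J @ p - d) ** 2)

target = n_obs * sigma ** 2             # misfit expected at the noise level
lo, hi = 1e-6, 1e6
for _ in range(60):                     # bisection on log(lambda)
    lam = np.sqrt(lo * hi)
    _, misfit = solve(lam)
    lo, hi = (lam, hi) if misfit < target else (lo, lam)
p_est, misfit = solve(lam)
print("lambda:", lam, "misfit:", misfit, "target:", target)
```

This is the discrepancy-principle idea: rather than fixing the regularization strength in advance, it is adjusted so that the constraints exactly complement the information content of the data.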

Relevance:

30.00%

Publisher:

Abstract:

Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge about the solution, but fall short in incorporating knowledge directly from data. On the other hand, more recent learned approaches can easily learn the intricate statistics of images from a large set of data, but lack a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two different reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for image IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating a regularizer learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the solution of the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
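
As background for the unrolled and plug-and-play variants discussed above, here is a generic sketch (a toy nonlinear least-squares problem, not an EIT solver) of the regularized Gauss-Newton iteration they build on; in the learned variants, the hand-chosen quadratic regularizer or the proximal step is replaced by a trained network.

```python
# Regularized Gauss-Newton for a small nonlinear inverse problem;
# the forward model below is invented and mildly nonlinear.
import numpy as np

def forward(p):
    return np.array([p[0] + 0.1 * p[1] ** 2,
                     p[1] + 0.1 * p[0] ** 2,
                     0.2 * p[0] * p[1]])

def jacobian(p):
    return np.array([[1.0, 0.2 * p[1]],
                     [0.2 * p[0], 1.0],
                     [0.2 * p[1], 0.2 * p[0]]])

rng = np.random.default_rng(7)
p_true = np.array([0.8, -0.4])
d = forward(p_true) + 0.01 * rng.normal(size=3)

p = np.zeros(2); lam = 1e-2
for _ in range(20):
    r = forward(p) - d
    J = jacobian(p)
    # step for ||F(p) - d||^2 + lam ||p||^2: (J'J + lam I) dp = -(J'r + lam p)
    dp = np.linalg.solve(J.T @ J + lam * np.eye(2), -(J.T @ r) - lam * p)
    p = p + dp
print("estimate:", p, "truth:", p_true)
```

Unrolling fixes a small number of such iterations and learns the regularizer end-to-end; the PnP variant swaps the quadratic prior for a learned denoiser applied as a proximal operator.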

Relevance:

20.00%

Publisher:

Abstract:

To evaluate the outcomes of patients treated for distal-third humerus fractures with the MIPO technique and visualization of the radial nerve through an accessory approach, in patients without radial palsy before surgery. The patients were treated with the MIPO technique. Visualization and isolation of the radial nerve were achieved through an approach between the brachialis and the brachioradialis, with an oblique incision on the lateral side of the arm. The Mayo Elbow Performance Score (MEPS) was used to evaluate elbow function. Seven patients were evaluated, with a mean age of 29.8 years. The average follow-up was 29.85 months. Postoperative radial neurapraxia occurred in three patients. Sensory recovery occurred after 3.16 months on average, and motor recovery after 5.33 months on average, in all patients. Fracture consolidation was achieved in all patients (mean = 4.22 months). The averages for flexion-extension and pronation-supination were 112.85° and 145°, respectively. The average MEPS score was 86.42. There were no cases of infection. This approach made it possible to rule out interposition of the radial nerve at the fracture site and/or under the plate, showing a high rate of fracture consolidation and good evolution of elbow range of motion. Level of Evidence IV, Case Series.

Relevance:

20.00%

Publisher:

Abstract:

Abstract Objective. The aim of this study was to evaluate the alteration of human enamel bleached with high concentrations of hydrogen peroxide associated with different activators. Materials and methods. Fifty enamel/dentin blocks (4 × 4 mm) were obtained from human third molars and randomly divided according to the bleaching procedure (n = 10): G1 = 35% hydrogen peroxide (HP - Whiteness HP Maxx); G2 = HP + halogen lamp (HL); G3 = HP + 7% sodium bicarbonate (SB); G4 = HP + 20% sodium hydroxide (SH); and G5 = 38% hydrogen peroxide (OXB - Opalescence Xtra Boost). The bleaching treatments were performed in three sessions with a 7-day interval between them. The enamel content, before (baseline) and after bleaching, was determined using an FT-Raman spectrometer and was based on the concentrations of phosphate, carbonate, and organic matrix. Statistical analysis was performed using two-way repeated-measures ANOVA and Tukey's test. Results. The results showed no significant differences between times of analysis (p = 0.5175) for most treatments and peak areas analyzed, or among bleaching treatments (p = 0.4184). The comparisons during and after bleaching revealed a significant difference in the HP group for the peak areas of carbonate and organic matrix, and for the organic matrix in the OXB and HP+SH groups. Tukey's analysis determined that the differences among peak areas and the interaction among treatment, time, and peak were statistically significant (p < 0.05). Conclusion. The association of activators with hydrogen peroxide was effective in altering the enamel, mainly with regard to the organic matrix.

Relevance:

20.00%

Publisher:

Abstract:

Context. The possibility of cephalic venous hypertension with resultant facial edema and elevated cerebrospinal fluid pressure continues to challenge head and neck surgeons who perform bilateral radical neck dissections in simultaneous or staged procedures. Case Report. The staged procedure in patients who require bilateral neck dissections allows collateral venous drainage to develop, mainly through the internal and external vertebral plexuses, thereby minimizing the risk of deleterious consequences. Nevertheless, this procedure has disadvantages, such as a delay in definitive therapy, the need for a second hospitalization and anesthesia, and the risk of cutting lymphatic vessels and spreading viable cancer cells. In this paper, we discuss the rationale and feasibility of preserving the external jugular vein (EJV). Considering the limited number of similar reports in the literature, two cases in which this procedure was accomplished are described. The relevant anatomy and technique are reviewed, and the patients' outcomes are discussed. Conclusion. Preservation of the EJV during bilateral neck dissections is technically feasible, fast, and safe, with clinically and radiologically demonstrated patency.

Relevance:

20.00%

Publisher:

Abstract:

Objective. To assess the prevalence of insulin resistance (IR) and associated factors in contraceptive users. Methods. A total of 47 women aged 18 to 40 years with a body mass index (kg/m²) < 30, fasting glucose levels < 100 mg/dl, and a 2-hour glucose level < 140 mg/dl after a 75-g oral glucose load underwent a hyperinsulinemic-euglycemic clamp. The women were distributed in tertiles according to M-values. The variables analysed were use of combined hormonal/non-hormonal contraception, duration of use, body composition, lipid profile, glucose levels, and blood pressure. Results. IR was detected in 19% of the participants. The women with low M-values presented significantly higher body fat mass, waist-to-hip ratio, fasting insulin, HOMA-IR, and triglyceride levels, were nulligravidae, and had used their contraceptive method for more than 1 year. IR was more frequent among combined oral contraceptive users; however, no association remained after regression analysis. Conclusions. The prevalence of IR was high among healthy women attending a family planning clinic, independent of the contraceptive method used, with possible long-term negative consequences for their metabolic and cardiovascular health. Although an association between hormonal contraception and IR could not be established, this needs further research. Family planning professionals should be proactive in counselling healthy women about the importance of healthy habits.
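
For reference, the HOMA-IR index mentioned above has a standard textbook definition, quoted here for context (the study's exact assay conventions are not stated in the abstract):

```latex
% HOMA-IR from fasting insulin I_0 (in \mu U/mL) and fasting glucose G_0
% (in mmol/L); 22.5 is the standard HOMA normalization constant.
\mathrm{HOMA\text{-}IR} = \frac{I_0\,[\mu\mathrm{U/mL}] \times G_0\,[\mathrm{mmol/L}]}{22.5}
```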

Relevance:

20.00%

Publisher:

Abstract:

The prosthetic rehabilitation of an atrophic mandible is usually unsatisfactory due to the lack of supporting tissues, mainly bone and keratinized mucosa, for treatment with osseointegrated implants or even a conventional prosthesis. Prosthetic instability leads to social and functional limitations and chronic physical trauma, decreasing the patient's quality of life. A 53-year-old female patient sought care at our surgical service complaining of impaired masticatory function associated with an unstable lower complete denture. The clinical and complementary exams revealed edentulism in both arches, with the mandibular arch presenting severe resorption, resulting in denture instability and chronic trauma to the oral mucosa. The proposed treatment plan consisted of mandibular rehabilitation with osseointegrated implants and a fixed Brånemark-protocol prosthesis after mandibular reconstruction using the modified visor osteotomy technique. The proposed technique offered predictable results for reconstruction of the severely resorbed edentulous mandible and subsequent rehabilitation with osseointegrated implants.

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To develop a method and a device for quantifying vision in candela (cd). Studies measuring vision are important for all the visual sciences. METHODS: This is a theoretical and experimental study. The details of the psychophysical method and of the device calibration are described. Preliminary tests were performed on volunteers. RESULTS: The test is a simple psychophysical test whose result is expressed in units of the International System of Units. With the technical description provided, it will be possible to reproduce the experiment in other research centers. CONCLUSION: Results measured in luminous intensity (cd) are an option for visual assessment. These results will make it possible to extrapolate measurements to mathematical models and to simulate individual effects with aberrometric data.