950 results for zeta regularization
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The main part of this thesis describes a method of calculating the massless two-loop two-point function that allows the integral to be expanded to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues transforms this integral into a form that lets us use S. Weinzierl's computer library nestedsums. We show that multiple zeta values and rational numbers suffice for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We show the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also find that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
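The multiple zeta values mentioned above obey many identities; the simplest non-trivial one, Euler's ζ(2,1) = ζ(3), can be checked numerically. This is only an illustrative sketch of what a multiple zeta value is, not the nestedsums machinery the thesis uses.

```python
# Numerical check of the multiple-zeta-value identity zeta(2,1) = zeta(3).
# zeta(2,1) = sum over m > n >= 1 of 1/(m^2 * n); the inner sum over n is
# the harmonic number H_{m-1}, which we accumulate as we go.

ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)

def mzv_2_1(terms: int) -> float:
    """Partial sum of zeta(2,1) up to m = terms."""
    harmonic = 0.0  # H_{m-1}
    total = 0.0
    for m in range(1, terms + 1):
        total += harmonic / (m * m)
        harmonic += 1.0 / m
    return total

approx = mzv_2_1(5000)
# all terms are positive, so the truncated sum approaches zeta(3) from below
```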
Abstract:
This paper first presents an extended ambiguity resolution model that handles an ill-posed problem with constraints among the estimated parameters. In the extended model, a regularization criterion is used instead of traditional least squares in order to obtain better estimates of the float ambiguities; the existing models can be derived as special cases of the general model. Second, the paper examines existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints, integer rounding, integer bootstrapping, and integer least squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical standpoints.
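Two of the integer estimators listed above can be contrasted on a toy example. The float values and covariance below are hypothetical, chosen only so that rounding and bootstrapping disagree; this is a sketch of the estimators themselves, not of the paper's extended model.

```python
# Integer rounding vs. integer bootstrapping for two correlated float
# ambiguities (hypothetical values, in cycles).

a = [1.30, 3.60]                    # float ambiguities
Q = [[0.09, 0.06],
     [0.06, 0.08]]                  # their variance-covariance matrix

# Integer rounding: round each component independently.
z_round = [round(ai) for ai in a]   # -> [1, 4]

# Integer bootstrapping: fix the first ambiguity, then correct the second
# for the first's rounding error (using their covariance) before rounding it.
z1 = round(a[0])
a2_cond = a[1] - (Q[1][0] / Q[0][0]) * (a[0] - z1)
z_boot = [z1, round(a2_cond)]       # -> [1, 3]: the correlation changes the result
```

Exploiting the correlation is what makes bootstrapping (and, further, integer least squares) succeed more often than componentwise rounding.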
Abstract:
In this paper we examine the problem of prediction with expert advice in a setting where the learner is presented with a sequence of examples coming from different tasks. For the learner to benefit from performing multiple tasks simultaneously, we make a task-relatedness assumption by constraining the comparator to use fewer best experts than there are tasks. We show how this corresponds naturally to learning under spectral or structural matrix constraints, and we propose regularization techniques to enforce those constraints. The regularization techniques proposed here are interesting in their own right, and multitask learning is just one application of the ideas. We give a theoretical analysis of one such regularizer and report a regret bound that shows the benefits of this setup.
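The paper's regularizers are matrix-valued; as a minimal single-task illustration of how regularization enters prediction with expert advice, follow-the-regularized-leader with a negative-entropy regularizer yields the classical exponential-weights (Hedge) distribution. This sketch is background, not the paper's method.

```python
import math

def hedge_weights(cumulative_losses, eta):
    """Exponential-weights distribution: w_i proportional to exp(-eta * L_i).
    Equivalent to follow-the-regularized-leader with a negative-entropy
    regularizer over the probability simplex."""
    w = [math.exp(-eta * L) for L in cumulative_losses]
    z = sum(w)
    return [wi / z for wi in w]

# Three experts; expert 0 has accumulated the least loss so far.
L = [0.0, 5.0, 10.0]
w = hedge_weights(L, eta=1.0)
# nearly all weight concentrates on the best expert
```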
Abstract:
We provide an algorithm that achieves the optimal regret rate in an unknown weakly communicating Markov Decision Process (MDP). The algorithm proceeds in episodes where, in each episode, it picks a policy using regularization based on the span of the optimal bias vector. For an MDP with S states and A actions whose optimal bias vector has span bounded by H, we show a regret bound of Õ(HS√(AT)). We also relate the span to various diameter-like quantities associated with the MDP, demonstrating how our results improve on previous regret bounds.
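The quantity H above is the span seminorm sp(h) = max(h) − min(h) of the optimal bias vector. On a small hypothetical MDP it can be computed by relative value iteration; this sketch only illustrates the quantity the bound depends on, not the episodic algorithm itself.

```python
# Span of the optimal bias vector, sp(h) = max(h) - min(h), for a toy
# hypothetical 2-state MDP, via relative value iteration.
# State 0 pays reward 0, state 1 pays reward 1. Action 0 stays put;
# action 1 moves to the other state with probability 0.9.

R = [0.0, 1.0]

def next_dist(s, a):
    """Transition distribution as {next_state: probability}."""
    if a == 0:
        return {s: 1.0}
    return {1 - s: 0.9, s: 0.1}

h = [0.0, 0.0]
for _ in range(500):
    w = [max(R[s] + sum(p * h[t] for t, p in next_dist(s, a).items())
             for a in (0, 1))
         for s in (0, 1)]
    h = [w[s] - w[1] for s in (0, 1)]   # normalize at reference state 1

span = max(h) - min(h)   # converges to 10/9 for this MDP
```

Here the optimal policy heads to state 1 and stays, and the bias of state 0 is −10/9 (one unit of reward is lost over the expected 1/0.9 steps of travel), so sp(h) = 10/9.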
Abstract:
We consider a stochastic regularization method for solving the backward Cauchy problem in Banach spaces. An order of convergence is obtained for sourcewise representable elements.
Abstract:
We address the problem of constructing randomized online algorithms for the Metrical Task Systems (MTS) problem on a metric δ against an oblivious adversary. Restricting our attention to the class of "work-based" algorithms, we provide a framework for designing algorithms that uses the technique of regularization. For the case when δ is a uniform metric, we exhibit two algorithms that arise from this framework, and we prove a bound on the competitive ratio of each. We show that the second of these algorithms is ln n + O(log log n) competitive, which is the current state of the art for the uniform MTS problem.
Abstract:
Background: Certain genes from the glutathione S-transferase superfamily have been associated with several cancer types. The objective of this study was to determine whether alleles of the glutathione S-transferase zeta 1 (GSTZ1) gene are associated with the development of sporadic breast cancer. Methods: DNA samples obtained from a Caucasian population affected by breast cancer and a control population, matched for age and ethnicity, were genotyped for a polymorphism of the GSTZ1 gene. After PCR, alleles were identified by restriction enzyme digestion and the results analysed by chi-square and CLUMP analysis. Results: Chi-square analysis gave a χ2 value of 4.77 (three degrees of freedom) with P = 0.19; CLUMP analysis gave a T1 value of 9.02 with P = 0.45 for genotype frequencies and a T1 value of 4.77 with P = 0.19 for allele frequencies. Conclusion: Statistical analysis indicates no association with the GSTZ1 variant; hence the gene does not appear to play a significant role in the development of sporadic breast cancer.
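The reported P value can be reproduced directly: for three degrees of freedom the chi-square survival function has a closed form, and evaluating it at χ2 = 4.77 gives P ≈ 0.19, matching the abstract.

```python
import math

def chi2_sf_3df(x):
    """Upper-tail probability of a chi-square distribution with 3 degrees
    of freedom, using the closed form available for odd df."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

p = chi2_sf_3df(4.77)   # ~0.19, as reported for the allele-frequency test
```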
Abstract:
Electrostatic spinning, or electrospinning, is a fiber spinning technique driven by a high-voltage electric field that produces fibers with diameters in the submicrometer to nanometer range [1]. Nanofibers are typical one-dimensional colloidal objects with increased tensile strength, whose length can reach a few kilometers and whose specific surface area can be 100 m² g⁻¹ or higher [2]. Nano- and microfibers from biocompatible polymers and biopolymers have received much attention in medical applications [3], including biomedical structural elements (scaffolding used in tissue engineering [2,4-6], wound dressing [7], artificial organs, and vascular grafts [8]), drug and vaccine delivery [9-11], protective shields in speciality fabrics, multifunctional membranes, etc. Other applications include superhydrophobic coatings [12], encapsulation of solid materials [13], filter media for submicron particles in the separation industry, composite reinforcement, and structures for nano-electronic machines.
Abstract:
Controlling the morphological structure of titanium dioxide (TiO2) is crucial for obtaining superior power conversion efficiency in dye-sensitized solar cells. Although sol-gel-based processes have been developed for this purpose, there has been limited success in resisting the aggregation of nanostructured TiO2, which can be an obstacle to mass production. Herein, we report a simple approach to improving the efficiency of dye-sensitized solar cells (DSSCs) by controlling the degree of aggregation and the particle surface charge through zeta potential analysis. We found that different aqueous colloidal conditions, i.e., pH, water/titanium alkoxide (titanium isopropoxide) ratio, and surface charge, led to markedly different particle sizes in the range of 10-500 nm. We have also shown that particles prepared under acidic conditions are more effective for DSSC application, as the modified surface charges improve dye loading and the electron injection rate. A power conversion efficiency of 6.54%, an open-circuit voltage of 0.73 V, a short-circuit current density of 15.32 mA/cm², and a fill factor of 0.73 were obtained using anatase TiO2 optimized to 10-20 nm in size, together with a compact TiO2 blocking layer.
Abstract:
The authors perform zeta potential studies on hematite, corundum, and quartz samples using starches in order to understand the adsorption behavior of polymeric starch flocculants at the oxide mineral-solution interface, to correlate this information with their flocculation characteristics, and to investigate the effects of pH and CaCl2 on the zeta potential of iron ore minerals.
Abstract:
An adaptive regularization algorithm that combines elementwise photon absorption and data misfit is proposed to stabilize the nonlinear ill-posed inverse problem. The diffuse photon distribution is low near the target compared to the normal region. A Hessian based on light-tissue interaction is proposed and is estimated using the adjoint method by distributing the sources inside the discretized domain. As the iteration progresses, the photon absorption near the inhomogeneity becomes high and carries more weight in the regularization matrix. This adaptive regularization method, based on the domain's interior photon absorption and the data misfit, improves the quality of the reconstructed diffuse optical tomographic images.
Abstract:
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). DOI: 10.1117/1.JBO.17.10.106015
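The trade-off that any such selection rule (GCV, L-curve, or MRM) must balance can be seen in a generic Tikhonov sketch. The 2×2 system below is hypothetical and deliberately ill-conditioned; this is not the authors' MRM procedure, only the underlying regularized problem.

```python
import math

# Tikhonov regularization: x_lam solves (J^T J + lam*I) x = J^T y.
# Increasing lam shrinks the solution norm while the data-fit residual
# grows; a selection rule picks the lam that balances the two.

J = [[1.0, 1.00],
     [1.0, 1.01]]          # nearly rank-deficient forward operator
y = [2.0, 2.02]

def tikhonov_2x2(J, y, lam):
    """Solve the 2x2 regularized normal equations by Cramer's rule."""
    a11 = J[0][0]**2 + J[1][0]**2 + lam
    a12 = J[0][0]*J[0][1] + J[1][0]*J[1][1]
    a22 = J[0][1]**2 + J[1][1]**2 + lam
    b1 = J[0][0]*y[0] + J[1][0]*y[1]
    b2 = J[0][1]*y[0] + J[1][1]*y[1]
    det = a11*a22 - a12*a12
    return [(b1*a22 - b2*a12) / det, (a11*b2 - a12*b1) / det]

def residual(J, y, x):
    r = [J[i][0]*x[0] + J[i][1]*x[1] - y[i] for i in (0, 1)]
    return math.hypot(r[0], r[1])

x_small = tikhonov_2x2(J, y, 1e-3)   # light regularization: good fit, large norm
x_large = tikhonov_2x2(J, y, 1.0)    # heavy regularization: worse fit, small norm
```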
Abstract:
Purpose: To develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method. Conclusions: The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4792459
Abstract:
Periodic-finite-type shifts (PFTs) are sofic shifts which forbid the appearance of finitely many pre-specified words in a periodic manner. The class of PFTs strictly includes the class of shifts of finite type (SFTs). The zeta function of a PFT is a generating function for the number of periodic sequences in the shift. For a general sofic shift, there exists a formula, attributed to Manning and Bowen, which computes the zeta function of the shift from certain auxiliary graphs constructed from a presentation of the shift. In this paper, we derive an interesting alternative formula computable from certain "word-based graphs" constructed from the periodically-forbidden word description of the PFT. The advantages of our formula over the Manning-Bowen formula are discussed.
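For background, in the plain SFT case the periodic-point counts that the zeta function packages are easy to compute: for a shift with 0/1 transition matrix A, the number of period-n points is tr(Aⁿ) and ζ(t) = 1/det(I − tA). The sketch below does this for the golden-mean shift; the PFT case is what requires the word-based graphs of the paper.

```python
# Periodic-point counts for the golden-mean shift (forbidden word "11"),
# an SFT with transition matrix A. The number of period-n points is tr(A^n),
# and zeta(t) = 1/det(I - t*A) = 1/(1 - t - t^2) for this shift.

A = [[1, 1],
     [1, 0]]

def matmul(X, Y):
    """2x2 integer matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def periodic_points(n):
    """tr(A^n): number of period-n sequences in the golden-mean shift."""
    P = [[1, 0], [0, 1]]
    for _ in range(n):
        P = matmul(P, A)
    return P[0][0] + P[1][1]

counts = [periodic_points(n) for n in range(1, 6)]
# -> [1, 3, 4, 7, 11], the Lucas numbers
```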