179 results for Factorization
Abstract:
In the recently proposed framework of hard pion chiral perturbation theory, the leading chiral logarithms are predicted to factorize with respect to the energy dependence in the chiral limit. We have scrutinized this assumption for the vector and scalar pion form factors F_{V,S}(s) by means of standard chiral perturbation theory and dispersion relations. We show that this factorization property holds for the elastic contribution to the dispersion integrals for F_{V,S}(s) but is violated starting at three loops, when the inelastic four-pion contributions arise.
Abstract:
We analyze transverse thrust in the framework of Soft Collinear Effective Theory and obtain a factorized expression for the cross section that permits resummation of terms enhanced in the dijet limit to arbitrary accuracy. The factorization theorem for this hadron-collider event-shape variable involves collinear emissions at different virtualities and suffers from a collinear anomaly. We compute all its ingredients at one-loop order and show that the two-loop input for next-to-next-to-leading logarithmic accuracy can be extracted numerically from existing fixed-order codes.
Abstract:
Medical imaging has become an essential diagnostic tool in clinical practice; pathologies can now be detected earlier than ever before. Its use is no longer confined to radiology but extends, increasingly, to computer-based image processing prior to surgery. Motion analysis, in particular, plays an important role in analyzing the activities or behaviors of live subjects in medicine. This short paper presents several low-cost hardware implementation approaches for the new generation of tablets and smartphones for estimating motion compensation and segmentation in medical images. These systems have been optimized for breast cancer diagnosis using magnetic resonance imaging, which offers several advantages over traditional X-ray mammography, such as acquiring patient information within a short period of time. The paper also addresses the challenge of offering a medical tool that runs on widespread portable devices, both tablets and smartphones, to aid in patient diagnostics.
Abstract:
"This work was supported in part by the National Science Foundation under Grant No. GJ812."
Abstract:
Vita.
Abstract:
Thesis (M.S.)--University of Illinois.
Abstract:
The Cunningham project seeks to factor numbers of the form b^n ± 1 with b = 2, 3, . . . small. One of the most useful techniques is Aurifeuillian factorization, whereby such a number is partially factored by replacing b^n by a polynomial in such a way that polynomial factorization is possible. For example, by substituting y = 2^k into the polynomial factorization (2y^2)^2 + 1 = (2y^2 − 2y + 1)(2y^2 + 2y + 1) we can partially factor 2^(4k+2) + 1. In 1962 Schinzel gave a list of such identities that have proved useful in the Cunningham project; we believe that Schinzel identified all numbers that can be factored by such identities, and we prove this if one accepts our definition of what “such an identity” is. We then develop our theme to similarly factor f(b^n) for any given polynomial f, using deep results of Faltings from algebraic geometry and of Fried from the classification of finite simple groups.
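The quoted identity is easy to check numerically. A minimal sketch (function name hypothetical, chosen for illustration) verifying that substituting y = 2^k yields two cofactors of 2^(4k+2) + 1:

```python
# Sketch verifying the Aurifeuillian identity from the abstract:
# (2y^2)^2 + 1 = (2y^2 - 2y + 1)(2y^2 + 2y + 1).
# With y = 2^k the left side becomes 2^(4k+2) + 1, so the right side
# gives a partial factorization of that number.

def aurifeuillian_factors(k: int) -> tuple[int, int]:
    """Return the two Aurifeuillian cofactors of 2^(4k+2) + 1."""
    y = 2 ** k
    return (2 * y * y - 2 * y + 1, 2 * y * y + 2 * y + 1)

for k in range(1, 8):
    a, b = aurifeuillian_factors(k)
    assert a * b == 2 ** (4 * k + 2) + 1  # identity holds exactly
```

For k = 1 this factors 2^6 + 1 = 65 as 5 × 13; the cofactors are not necessarily prime, but they are much smaller than the original number, which is what makes the technique useful in practice.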
Abstract:
This paper presents a fast part-based subspace selection algorithm, termed binary sparse nonnegative matrix factorization (B-SNMF). Both the training and testing processes of B-SNMF are much faster than those of binary principal component analysis (B-PCA). In addition, B-SNMF is more robust to occlusions in images. Experimental results on face images demonstrate the effectiveness and efficiency of the proposed B-SNMF.
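The abstract does not spell out the B-SNMF updates. For orientation only, a minimal sketch of plain NMF via Lee–Seung multiplicative updates, the baseline that B-SNMF constrains further with binary and sparsity structure (all names and sizes here are hypothetical, not from the paper):

```python
import numpy as np

# Plain nonnegative matrix factorization via multiplicative updates.
# This is the generic baseline only; the paper's B-SNMF adds binary
# and sparsity constraints whose details are not in the abstract.

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Factor a nonnegative m x n matrix V as W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # Multiplicative updates keep W and H elementwise nonnegative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 30))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H))  # reconstruction error after the updates
```

The nonnegativity of the factors is what yields the "part-based" representation the abstract refers to: each column of W acts as an additive part, and H holds the nonnegative mixing coefficients.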
Abstract:
2000 Mathematics Subject Classification: 13P05, 14M15, 14M17, 14L30.
Abstract:
Spectral unmixing (SU) is a technique for characterizing the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem is sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, this is the $\ell_0$-norm problem, which is NP-hard. Our main goal in the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on such approximations. The resulting methods show considerable improvements over state-of-the-art methods in reconstructing the fractional abundances of endmembers, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages, such as the nonnegativity constraints on the two factor matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentrated energy of SSoM in the first few subbands.
Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, reflected in smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
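As an illustration of the first, semiblind scenario described above (not the thesis's actual algorithms), a minimal sparse-unmixing sketch under the linear mixing model y = A x + noise, using projected iterative shrinkage with a nonnegativity constraint; all sizes and parameter values are hypothetical:

```python
import numpy as np

# Illustrative sketch of dictionary-aided sparse unmixing:
# y = A @ x + noise, where A is a known spectral library and x the
# sparse nonnegative abundance vector (few active endmembers).

rng = np.random.default_rng(0)
bands, lib = 50, 120                     # spectral bands, library size
A = rng.random((bands, lib))             # hypothetical spectral library
x_true = np.zeros(lib)
x_true[[3, 40, 77]] = [0.5, 0.3, 0.2]    # only 3 active endmembers
y = A @ x_true + 0.001 * rng.standard_normal(bands)

# Projected ISTA: gradient step on ||A x - y||^2, then a nonnegative
# soft threshold that promotes sparsity (an l1 surrogate for the
# NP-hard l0 problem mentioned in the abstract).
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / spectral-norm^2 of A
lam = 0.01                                # sparsity weight
x = np.zeros(lib)
for _ in range(500):
    g = A.T @ (A @ x - y)
    x = np.maximum(x - step * (g + lam), 0.0)

print(np.nonzero(x > 0.05)[0])  # recovered support, ideally {3, 40, 77}
```

The l1 penalty here is only one of many convex or nonconvex surrogates for the l0 norm; the thesis's contribution is precisely to study more accurate approximations of the l0 term.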
Abstract:
We obtain invertibility and Fredholm criteria for Wiener-Hopf plus Hankel operators acting between variable exponent Lebesgue spaces on the real line. Such characterizations are obtained via the so-called even asymmetric factorization, which is applied to the Fourier symbols of the operators under study.
Abstract:
The quantification of sources of carbonaceous aerosol is important for understanding their atmospheric concentrations and regulating processes, for studying possible effects on climate and air quality, and for developing mitigation strategies. In the framework of the European Integrated Project on Aerosol Cloud Climate Interactions (EUCAARI), fine (D_p < 2.5 µm) and coarse (2.5 µm < D_p < 10 µm) aerosol particles were sampled from February to June (wet season) and from August to September (dry season) 2008 in the central Amazon basin. The mass of fine particles averaged 2.4 µg m^-3 during the wet season and 4.2 µg m^-3 during the dry season. The average coarse aerosol mass concentration during the wet and dry periods was 7.9 and 7.6 µg m^-3, respectively. The overall chemical composition of the fine and coarse mass did not show any seasonality, with the largest fraction of fine and coarse aerosol mass explained by organic carbon (OC); the average OC-to-mass ratio was 0.4 and 0.6 in the fine and coarse aerosol modes, respectively. The mass absorption cross section of soot, determined by comparing elemental carbon and light absorption coefficient measurements, was 4.7 m^2 g^-1 at 637 nm. Carbonaceous aerosol sources were identified by Positive Matrix Factorization (PMF) analysis of thermograms: 44% of the fine total carbon mass was assigned to biomass burning, 43% to secondary organic aerosol (SOA), and 13% to volatile species that are difficult to apportion. In the coarse mode, primary biogenic aerosol particles (PBAP) dominated the carbonaceous aerosol mass. These results confirm the importance of PBAP in forested areas. The source apportionment results were employed to evaluate the ability of global chemistry transport models to simulate carbonaceous aerosol sources at a regional tropical background site. The comparison showed that the TM5 model overestimates elemental carbon (EC) during the dry season and OC during both the dry and wet periods.
The overestimation was likely due to the overestimation of biomass burning emission inventories and SOA production over tropical areas.
Abstract:
Heavy quark production has been studied extensively in recent years, both theoretically and experimentally: in ep collisions at HERA, in pp collisions at the Tevatron and RHIC, in pA and dA collisions at RHIC, and in AA collisions at CERN-SPS and RHIC. However, to the best of our knowledge, heavy quark production in eA collisions has received almost no attention. With the possible construction of a high-energy electron-ion collider, updated estimates of heavy quark production are needed. We address the subject from the perspective of saturation physics and compute the heavy quark production cross section with the dipole model. We isolate shadowing and nonlinear effects, showing their impact on the charm structure function and on the transverse momentum spectrum.
Abstract:
We investigate the quantum integrability of the Landau-Lifshitz (LL) model and solve the long-standing problem of finding the local quantum Hamiltonian for the arbitrary n-particle sector. The particular difficulty of the LL model quantization, which arises from the ill-defined operator product, is dealt with by simultaneously regularizing the operator product and constructing self-adjoint extensions of a very particular structure. The diagonalizability difficulties, due to the highly singular nature of the quantum-mechanical Hamiltonian, are also resolved in our method for the arbitrary n-particle sector. We explicitly demonstrate the consistency of our construction with the quantum inverse scattering method due to Sklyanin [Lett. Math. Phys. 15, 357 (1988)] and give a prescription to systematically construct the general solution, which explains and generalizes the puzzling results of Sklyanin for the particular two-particle sector. Moreover, we demonstrate the S-matrix factorization and show that it is a consequence of the discontinuity conditions on the functions involved in the construction of the self-adjoint extensions.