965 results for Chebyshev Polynomial Approximation


Relevance: 20.00%

Abstract:

This thesis proves certain results concerning an important question in non-equilibrium quantum statistical mechanics, namely the derivation of effective evolution equations approximating the dynamics of a system of a large number of bosons initially at equilibrium (ground state at very low temperatures). The dynamics of such systems are governed by the time-dependent linear many-body Schroedinger equation, from which it is typically difficult to extract useful information because the number of particles is large. We study quantitatively (i.e. with explicit bounds on the error) how a suitable one-particle non-linear Schroedinger equation arises in the mean field limit as the number of particles N → ∞, and how appropriate corrections to the mean field provide better approximations of the exact dynamics. In the first part of this thesis we consider the evolution of N bosons, where N is large, with two-body interactions of the form N³ᵝv(Nᵝ⋅), 0 ≤ β ≤ 1. The parameter β measures the strength and the range of the interactions. We compare the exact evolution with an approximation which considers the evolution of a mean field coupled with an appropriate description of pair excitations, see [18,19] by Grillakis-Machedon-Margetis. We extend the results for 0 ≤ β < 1/3 in [19, 20] to the case β < 1/2 and obtain an error bound of the form p(t)/Nᵅ, where α > 0 and p(t) is a polynomial, which implies a specific rate of convergence as N → ∞. In the second part, utilizing estimates of the type discussed in the first part, we compare the exact evolution with the mean field approximation in the sense of marginals. We prove that the exact evolution is close to the approximation in trace norm for times of the order o(1)√N, compared to the log(o(1)N) timescale obtained in Chen-Lee-Schlein [6] for the Hartree evolution. Estimates of a similar type are obtained for stronger interactions as well.
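
For orientation, the one-particle mean-field equation referred to above takes, in a standard form (a sketch under the usual conventions, not quoted from the thesis), the shape

    i \partial_t \phi_t = -\Delta \phi_t + \left( N^{3\beta} v(N^{\beta}\cdot) * |\phi_t|^2 \right) \phi_t , \qquad \phi_0 \in H^1(\mathbb{R}^3),

where * denotes convolution; heuristically, for 0 < β < 1 the rescaled potential concentrates as N → ∞ and the limiting equation becomes a cubic non-linear Schroedinger equation with coupling constant ∫v.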

Relevance: 20.00%

Abstract:

We address the question of the rates of convergence of the p-version interior penalty discontinuous Galerkin method (p-IPDG) for second-order elliptic problems with non-homogeneous Dirichlet boundary conditions. It is known that the p-IPDG method admits slightly suboptimal a-priori bounds with respect to the polynomial degree (in the Hilbertian Sobolev space setting). We present an example for which the suboptimal rate of convergence with respect to the polynomial degree is both proven theoretically and validated in practice through numerical experiments. Moreover, the performance of p-IPDG on the related problem of p-approximation of corner singularities is assessed both theoretically and numerically, witnessing an almost doubling of the convergence rate of the p-IPDG method.
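
For context, one standard form of the symmetric interior penalty bilinear form underlying such bounds is (textbook notation with the usual hp-version penalty weighting, not taken from the paper)

    a(u,v) = \sum_{K} \int_{K} \nabla u \cdot \nabla v \, dx
             - \sum_{F} \int_{F} \left( \{\!\{\nabla u\}\!\} \cdot [\![v]\!] + \{\!\{\nabla v\}\!\} \cdot [\![u]\!] \right) ds
             + \sum_{F} \int_{F} \frac{\sigma p^{2}}{h} \, [\![u]\!] \cdot [\![v]\!] \, ds,

where {{·}} and [[·]] denote face averages and jumps, h the local mesh size, p the polynomial degree, and σ a sufficiently large penalty constant; non-homogeneous Dirichlet data enter weakly through the boundary-face terms.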

Relevance: 20.00%

Abstract:

In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product. These initial adopters then trigger a cascade in which other people adopt the product due to the influence they receive from earlier adopters. The goal is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights obtained from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
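
As a rough illustration of the cascade dynamics described above (a minimal threshold-diffusion simulation, not the dissertation's formulations; the graph, thresholds and seed set are hypothetical):

    def simulate_cascade(neighbors, threshold, seeds):
        """Threshold diffusion: a node activates once the number of its
        active neighbors reaches its threshold. Returns the active set."""
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in neighbors:
                if v not in active:
                    if sum(1 for u in neighbors[v] if u in active) >= threshold[v]:
                        active.add(v)
                        changed = True
        return active

    # Hypothetical 5-node example: seeding node 0 eventually activates everyone.
    neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
    threshold = {0: 1, 1: 1, 2: 1, 3: 2, 4: 1}
    print(simulate_cascade(neighbors, threshold, seeds={0}))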

Relevance: 20.00%

Abstract:

Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (the threshold depends on the computer used or available, but there is always a data set too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. Therefore, we propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only O(n²) memory, where n is the (predetermined) size of a data subset (or data chunk) at each time step, which makes the algorithm truly scalable (as n can be chosen based on the available memory). Furthermore, only 2n² elements of the full N × N (where N ≫ n) kernel matrix need to be calculated at each time step, reducing both the time spent producing kernel elements and the complexity of the FCM algorithm. Empirical results show that stKFCM, even with relatively small n, can provide clustering performance as accurate as kernel fuzzy c-means run on the entire data set while achieving a significant speedup.
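
To illustrate why working per chunk keeps memory at O(n²), here is a minimal sketch of a standard kernel fuzzy c-means pass on a single n-point chunk (this is generic kernel FCM with an RBF kernel, not the authors' stKFCM; parameter values are illustrative):

    import numpy as np

    def rbf_kernel(X, gamma=1.0):
        sq = np.sum(X**2, axis=1)
        return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    def kernel_fcm_chunk(K, c=3, m=2.0, iters=50, seed=0):
        """One kernel fuzzy c-means pass on an n-point chunk.
        K is the n x n kernel matrix, so memory stays O(n^2)."""
        n = K.shape[0]
        rng = np.random.default_rng(seed)
        U = rng.random((c, n))
        U /= U.sum(axis=0)                      # memberships sum to 1 per point
        for _ in range(iters):
            W = U**m
            s = W.sum(axis=1, keepdims=True)    # c x 1
            # squared feature-space distances d2[i, k] between point k and prototype i
            d2 = (np.diag(K)[None, :]
                  - 2 * (W @ K) / s
                  + ((W @ K @ W.T).diagonal() / s.ravel()**2)[:, None])
            d2 = np.maximum(d2, 1e-12)
            U = 1.0 / (d2 ** (1.0 / (m - 1.0)))
            U /= U.sum(axis=0)
        return U

    X = np.random.default_rng(1).normal(size=(200, 2))   # one hypothetical data chunk
    U = kernel_fcm_chunk(rbf_kernel(X), c=2)
    print(U.argmax(axis=0)[:10])                          # hard labels for the first 10 points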

Relevance: 20.00%

Abstract:

Combinatorial optimization is a complex engineering subject. Although the formulation often depends on the nature of the problem, which differs in setup, design, constraints, and implications, establishing a unifying framework is essential. This dissertation investigates the unique features of three important optimization problems that span from small-scale design automation to large-scale power system planning: (1) feeder remote terminal unit (FRTU) planning strategy considering the cybersecurity of the secondary distribution network in the electrical distribution grid, (2) physical-level synthesis for microfluidic lab-on-a-chip, and (3) discrete gate sizing in very-large-scale integration (VLSI) circuits. First, a cross-entropy optimization technique is proposed to handle FRTU deployment in the primary network while considering the cybersecurity of the secondary distribution network. Constrained by a monetary budget on the number of deployed FRTUs, the proposed algorithm identifies pivotal locations of a distribution feeder at which to install the FRTUs in different time horizons. Then, multi-scale optimization techniques are proposed for digital microfluidic lab-on-a-chip physical-level synthesis. The proposed techniques handle variation-aware lab-on-a-chip placement and routing co-design while satisfying all constraints and considering contamination and defects. Last, the first fully polynomial time approximation scheme (FPTAS) is proposed for the delay-driven discrete gate sizing problem, which fills a theoretical gap, since existing works are heuristics with no performance guarantee. The intellectual contribution of the proposed methods establishes a novel paradigm bridging the gaps between professional communities.
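
For readers unfamiliar with cross-entropy optimization, the following is a minimal, generic sketch of the method for a budget-constrained binary placement problem (not the dissertation's FRTU algorithm; the score function, budget and parameters are hypothetical):

    import numpy as np

    def cross_entropy_binary(score, n_vars, budget, samples=200, elite_frac=0.1,
                             iters=50, alpha=0.7, seed=0):
        """Generic cross-entropy search over binary placement vectors with at
        most `budget` ones (e.g. which candidate sites receive a device)."""
        rng = np.random.default_rng(seed)
        p = np.full(n_vars, budget / n_vars)          # Bernoulli sampling probabilities
        n_elite = max(1, int(elite_frac * samples))
        best_x, best_val = None, -np.inf
        for _ in range(iters):
            X = rng.random((samples, n_vars)) < p     # sample candidate placements
            X = np.array([enforce_budget(x, budget, rng) for x in X])
            vals = np.array([score(x) for x in X])
            elite = X[np.argsort(vals)[-n_elite:]]    # keep the best candidates
            if vals.max() > best_val:
                best_val, best_x = vals.max(), X[vals.argmax()].copy()
            p = alpha * elite.mean(axis=0) + (1 - alpha) * p   # smoothed update
        return best_x, best_val

    def enforce_budget(x, budget, rng):
        """Randomly drop ones until the budget constraint is met."""
        x = x.copy()
        ones = np.flatnonzero(x)
        if len(ones) > budget:
            x[rng.choice(ones, size=len(ones) - budget, replace=False)] = 0
        return x

    # Hypothetical coverage-style objective over 30 candidate sites, budget of 5.
    weights = np.random.default_rng(1).random(30)
    x, val = cross_entropy_binary(lambda x: float(weights @ x), 30, budget=5)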

Relevance: 20.00%

Abstract:

In this paper, equivalence constants between various polynomial norms are calculated. As an application, we also obtain sharp values of the Hardy-Littlewood constants for 2-homogeneous polynomials on ℓ_p² spaces, 2 < p ≤ ∞. We also provide lower estimates for the Hardy-Littlewood constants for polynomials of higher degrees.

Relevance: 20.00%

Abstract:

We compute the E-polynomials of the moduli spaces of representations of the fundamental group of a complex curve of genus g = 3 into SL(2, C), and also of the moduli space of twisted representations. The cases of genus g = 1, 2 have already been treated in [12]. We follow the geometric technique introduced in [12], based on stratifying the space of representations and on analysing the behaviour of the E-polynomial under fibrations.
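
For reference, the E-polynomial (Hodge-Deligne polynomial) computed in such works is standardly defined through the mixed Hodge numbers of compactly supported cohomology (a textbook definition, not quoted from the paper):

    E(X)(u,v) = \sum_{p,q} \left( \sum_{k} (-1)^{k} h^{k;p,q}_{c}(X) \right) u^{p} v^{q}.

The stratification technique exploits its additivity over locally closed strata and its multiplicativity, E(X) = E(F) E(B), for suitable fibrations F → X → B.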

Relevance: 20.00%

Abstract:

Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that, even after the onset of formal education, approximate processing is carried out in this analogue magnitude code regardless of whether the original problem was presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research uses only nonsymbolic tasks to assess ANS acuity. Because of this tacit assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications in research domains of high practical relevance have remained unclear, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to fostering performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provided evidence that symbolic approximation might be equally well, and in some cases even better, suited to generate predictions and foster more formal math skills independently of SES. In two longitudinal studies, we administered exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure consistently not varying according to children's SES, and that among the two approximate tasks it was the better predictor of math achievement at the end of first grade. In part, the strong connection seems to come about through mediation by ordinal skills. In two further experiments, we tested the suitability of both approximation formats for inducing an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in making children exploit the additive law of commutativity in a subsequent formal task, whereas nonsymbolic approximation had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age; however, even third graders still profited from the induction. The results show that symbolic problems, too, can be processed as genuine approximation, but that beyond this they have their own specific value with regard to didactic-educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot easily be disentangled in first graders' numerical understanding, and that children's SES also influences existing interrelations between the different abilities tested here.

Relevance: 20.00%

Abstract:

In this work the fundamental ideas for studying properties of QFTs with the functional Renormalization Group are presented and illustrated with some examples. First, the Wetterich equation for the effective average action is derived, together with its flow in the local potential approximation (LPA) for a single scalar field. This case is used to illustrate some techniques for solving the RG fixed-point equation and studying the properties of the critical theories in D dimensions: in particular, the shooting method for the ODE satisfied by the fixed-point potential, as well as the approach based on a polynomial truncation with a finite number of couplings, which is convenient for studying the critical exponents. We then study novel cases related to multi-field scalar theories, deriving the flow equations for the LPA truncation both without assuming any global symmetry and also specialising to cases with a given symmetry, using truncations based on polynomials of the symmetry invariants. This is used to study possible non-perturbative solutions of critical theories which extend known perturbative results obtained in the epsilon expansion below the upper critical dimension.
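
For orientation, the Wetterich flow equation and the LPA ansatz mentioned above read, in a standard notation (a sketch, not quoted from the thesis),

    k \partial_k \Gamma_k[\phi] = \frac{1}{2} \mathrm{Tr}\left[ \left( \Gamma_k^{(2)}[\phi] + R_k \right)^{-1} k \partial_k R_k \right],
    \qquad
    \Gamma_k^{\mathrm{LPA}}[\phi] = \int d^{D}x \left[ \frac{1}{2} (\partial_\mu \phi)^2 + U_k(\phi) \right],

so that in the LPA the flow closes on the running potential U_k(φ), whose fixed points are the objects studied either by shooting or by polynomial truncation in the couplings.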

Relevance: 10.00%

Abstract:

The first theoretical results of core-valence correlation effects are presented for the infrared wavenumbers and intensities of the BF3 and BCl3 molecules, using (double- and triple-zeta) Dunning core-valence basis sets at the CCSD(T) level. The results are compared with those calculated in the frozen core approximation with standard Dunning basis sets at the same correlation level, and with the experimental values. The general conclusion is that, for infrared wavenumbers and intensities, the effect of core-valence correlation is smaller than the effect of adding augmented diffuse functions to the basis set, e.g., going from cc-pVTZ to aug-cc-pVTZ. Moreover, the trends observed in the data are mainly related to the augmented functions rather than to the core-valence functions added to the basis set. The results obtained here confirm previous studies pointing out the large discrepancy between the theoretical and experimental intensities of the stretching mode for BCl3.

Relevance: 10.00%

Abstract:

The purpose of this study was to propose a specific lactate minimum test for elite basketball players considering: the Running Anaerobic Sprint Test (RAST) as a hyperlactatemia inductor, short distances (a specific distance of 20 m) during the progressive intensity phase, and mathematical analysis to interpret the aerobic and anaerobic variables. The basketball players were assigned to four groups: All positions (n=26), Guard (n=7), Forward (n=11) and Center (n=8). The hyperlactatemia induction (RAST) phase consisted of 6 maximal sprints over 35 m separated by 10 s of recovery. The progressive phase of the lactate minimum test consisted of 5 stages controlled by an electronic metronome (8.0, 9.0, 10.0, 11.0 and 12.0 km/h) over a 20 m distance. The RAST variables and the lactate values were analyzed using visual and mathematical models. The intensity of the lactate minimum test determined by the visual method was reduced in relation to the 2nd-degree polynomial fits for the Small Forward and General groups. The Power and Fatigue Index values determined by both methods, visual and 3rd-degree polynomial, were not significantly different between the groups. In conclusion, the RAST is an excellent hyperlactatemia inductor, and the progressive intensity of the lactate minimum test using short distances (20 m) can be used to specifically evaluate the aerobic capacity of basketball players. In addition, no differences were observed between the visual and polynomial methods for the RAST variables, but the lactate minimum intensity was influenced by the method of analysis.
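
As a rough illustration of the mathematical analysis mentioned above, the lactate minimum intensity from a 2nd-degree fit can be located at the vertex of the fitted parabola (the lactate values below are hypothetical, not the study's data):

    import numpy as np

    # Hypothetical blood lactate (mmol/L) from the five 20-m shuttle stages.
    speed   = np.array([8.0, 9.0, 10.0, 11.0, 12.0])     # km/h
    lactate = np.array([6.1, 5.2, 4.9, 5.4, 6.8])        # mmol/L

    # 2nd-degree fit: lactate = a*speed^2 + b*speed + c; the minimum sits at -b/(2a).
    a, b, c = np.polyfit(speed, lactate, deg=2)
    lm_intensity = -b / (2 * a)
    print(f"Lactate minimum intensity ≈ {lm_intensity:.2f} km/h")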

Relevance: 10.00%

Abstract:

This study sought to analyse the behaviour of the average spinal posture, using a novel investigative procedure, in a maximal incremental effort test performed on a treadmill. Spine motion was collected via stereo-photogrammetric analysis in thirteen amateur athletes. At each time percentage of the gait cycle, the reconstructed spine points were projected onto the sagittal and frontal planes of the trunk. On each plane, a polynomial was fitted to the data, and the two-dimensional geometric curvature along the longitudinal axis of the trunk was calculated to quantify the geometric shape of the spine. The average posture over the gait cycle defined the spine Neutral Curve. This method enabled the lateral deviations, lordosis, and kyphosis of the spine to be quantified noninvasively and in detail. The similarity between any two volunteers was at most 19% on the sagittal plane and 13% on the frontal plane (p<0.01). The data collected in this study can be considered preliminary evidence that there are subject-specific characteristics in spinal curvatures during running. Changes induced by increases in speed were not sufficient for the Neutral Curve to lose its individual characteristics; instead, it behaved like a postural signature. The data show the descriptive capability of a new method to analyse spinal posture during locomotion; however, additional studies with larger sample sizes are necessary to extract more general information from this novel methodology.
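
A minimal sketch of the curvature computation described above, assuming a single-plane projection and hypothetical marker data (the polynomial degree and point counts are illustrative, not the study's):

    import numpy as np

    def spine_curvature(z, y, degree=5):
        """Fit a polynomial y(z) to projected spine points along the trunk axis z
        and return the 2-D geometric curvature |y''| / (1 + y'^2)^(3/2)."""
        p = np.poly1d(np.polyfit(z, y, degree))
        d1, d2 = p.deriv(1), p.deriv(2)
        zz = np.linspace(z.min(), z.max(), 200)
        return zz, np.abs(d2(zz)) / (1.0 + d1(zz)**2) ** 1.5

    # Hypothetical sagittal-plane projection of reconstructed spine points.
    z = np.linspace(0.0, 1.0, 15)                       # normalized trunk axis
    y = 0.04*np.sin(2*np.pi*z) + 0.01*np.random.default_rng(0).normal(size=z.size)
    zz, kappa = spine_curvature(z, y)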

Relevance: 10.00%

Abstract:

Values of the chemical amount vary in a discrete or a continuous manner, depending on the approach used to describe the system. In the classical sciences, the chemical amount is a property of the macroscopic system and, like any other property of the system, it varies continuously. This is neither inconsistent with the concept of indivisible particles forming the system nor a mere approximation; rather, it is a sound concept which enables the use of differential calculus in, for instance, chemical thermodynamics. It is shown that the fundamental laws of chemistry are fully compatible with the continuous concept of the chemical amount.

Relevance: 10.00%

Abstract:

This paper describes a method for leaf vein shape characterization using a cubic Hermite polynomial representation. The elements associated with this representation are used as secondary vein descriptors, and their discriminatory potential is analyzed based on the identification of two legume species (Lonchocarpus muehlbergianus Hassl. and L. subglaucescens Mart. ex Benth.). The elements of Hermite geometry influence the curve along its whole extent, allowing a global description of the secondary vein course with a descriptor of low dimensionality. The results show that the analyzed species can be discriminated by this method and that it can be used in addition to the elements commonly considered in the taxonomic process.
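
For illustration, the standard cubic Hermite form underlying such a representation can be evaluated as follows (the endpoints and tangents below are hypothetical, not derived from the leaf data):

    import numpy as np

    def hermite_cubic(p0, p1, m0, m1, t):
        """Standard cubic Hermite curve from endpoints p0, p1 and tangents m0, m1;
        t ranges over [0, 1]."""
        t = np.asarray(t)[:, None]
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*p0 + h10*m0 + h01*p1 + h11*m1

    # Hypothetical secondary vein segment: endpoints and tangent vectors in 2-D.
    p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.3])
    m0, m1 = np.array([0.8, 0.6]), np.array([0.9, -0.2])
    curve = hermite_cubic(p0, p1, m0, m1, np.linspace(0, 1, 50))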

Relevance: 10.00%

Abstract:

OBJECTIVE: To adapt the critical velocity (CV), RAST and lactate minimum (LM) tests to the evaluation of female basketball players. METHODS: Twelve well-trained female basketball players (19 ± 1 yrs) performed shuttle running to exhaustion at four intensities (10 - 14 km/h), applied on alternate days. The linear model 'velocity vs. 1/tlim' was adopted to determine the aerobic (CV) and anaerobic (CCA) parameters. The lactate minimum test consisted of two phases: 1) hyperlactatemia induction using the RAST and 2) an incremental test composed of five 20-m shuttle-run stages at 7, 8, 9, 10, and 12 km/h. Blood samples were collected at the end of each stage. RESULTS: The velocity (vLM) and blood lactate concentration at LM were obtained by two polynomial adjustments: lactate vs. intensity (LM1) and lactate vs. time (LM2). One-way ANOVA, Student's t-test and Pearson correlation were used for the statistical analysis. The CV was obtained at 10.3 ± 0.2 km/h and the CCA was estimated at 73.0 ± 3.4 m. The RAST was able to induce hyperlactatemia and to determine Pmax (3.6 ± 0.2 W/kg), Pmed (2.8 ± 0.1 W/kg), Pmin (2.3 ± 0.1 W/kg) and FI (30 ± 3%). The vLM1 and vLM2 were obtained at 9.47 ± 0.13 km/h and 9.8 ± 0.13 km/h, respectively, and CV was higher than vLM1. CONCLUSION: The results suggest that the non-invasive model can be used to determine the aerobic and anaerobic parameters. Furthermore, the LM test adapted to basketball, using the RAST and a progressive phase, was effective for evaluating female athletes while respecting the specificity of the sport, with high success rates observed for the 'lactate vs. time' polynomial adjustment (LM2).
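
A minimal sketch of the 'velocity vs. 1/tlim' linear model mentioned in the Methods, with hypothetical times to exhaustion (not the study's data):

    import numpy as np

    # Hypothetical times to exhaustion (s) at the four shuttle-run intensities.
    velocity = np.array([10.0, 11.3, 12.7, 14.0]) / 3.6   # km/h -> m/s
    tlim     = np.array([720.0, 360.0, 180.0, 90.0])      # s

    # Linear model v = CV + CCA * (1/tlim): the intercept is the critical
    # velocity (m/s) and the slope is the anaerobic capacity (m).
    slope, intercept = np.polyfit(1.0 / tlim, velocity, deg=1)
    print(f"CV ≈ {intercept*3.6:.1f} km/h, CCA ≈ {slope:.0f} m")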