859 results for Boundary value problems on manifolds
Abstract:
2002 Mathematics Subject Classification: 65C05.
Abstract:
The Stokes perturbative solution of the nonlinear (boundary-value-dependent) surface gravity wave problem is known to provide results of reasonable accuracy to engineers in estimating the phase speed and amplitudes of such nonlinear waves. The weak link in this framework, however, is the presence of an aperiodic “secular variation” in the solution, which does not agree with the known periodic propagation of surface waves. This has historically necessitated increasingly higher-order (perturbative) approximations in the representation of the velocity profile. The present article remedies this long-standing theoretical shortcoming by invoking a compact exact nth-order solution in the asymptotic infinite-depth limit, based primarily on a representation structured around the third-order perturbative solution, which extends seamlessly to the higher-order (e.g., fifth-order) forms existing in the literature. The result of this study is expected to improve phenomenological engineering estimates, now that any desired higher-order expansion may be compacted within the same representation, but without any aperiodicity in the spectral pattern of the wave guides.
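For orientation, the classical third-order Stokes expansion in the infinite-depth limit (the representation around which the compact solution above is structured) has the familiar textbook form below; this is the standard result, not the paper's own nth-order expression. With wave amplitude a, wavenumber k and phase \theta = kx - \omega t, the free surface elevation and the amplitude-corrected dispersion relation read

    \[
    \eta(x,t) = a\cos\theta + \tfrac{1}{2}k a^{2}\cos 2\theta + \tfrac{3}{8}k^{2}a^{3}\cos 3\theta,
    \qquad
    \omega^{2} = g k\,(1 + k^{2}a^{2}).
    \]

The secular (aperiodic) terms discussed in the abstract are the ones that appear when such an expansion is pushed to higher order without correcting the dispersion relation along with the amplitude.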
Abstract:
In the years 2002, 2003 and 2004 we collected samples of macroinvertebrates on a total of 36 occasions in Badacsony bay, in areas of open water (in 2003 and 2004 covered by reed-grass) as well as in stands of reed (Phragmites australis) and cattail (Typha angustifolia). Samples were taken using a stiff hand net. The sampling site comprises three microhabitats, differentiated only by the aquatic plants inhabiting them. Our data were gathered from processing 208 individual samples. The quantity of macroinvertebrates is represented by biovolume, based on volume estimates. We can identify taxa found in abundance in all water types and in the ooze, as well as groups associated with the individual microhabitats formed by the various aquatic plants. There is a notable difference between the years in the volume of the invertebrate macrofauna, caused by the drop in water level and the proliferation of submerged macrophytes. The differences between the samples taken in the reed and cattail stands are smaller. In the second half of 2003 – a year of drought – Najas marina appeared in the open water and supported larger quantities of macroinvertebrates. In 2004, with higher water levels, Potamogeton perfoliatus occurring in the same area had an even more significant effect. This type of reed-grass may support the most macroinvertebrates during the summer. The diversity relations show a different pattern: the reed sampling site proved to be the richest, the cattail microhabitat was close behind, and the open water (with submerged macrophytes) was the least diverse microhabitat.
Abstract:
This study was inspired by Piketty’s excellent and important book. Not only its title but also many of its formulations may raise the expectation that the reader is being given a comprehensive analysis of the capitalist system. By comparison, the author of this paper was left with a sense that something was missing. The capitalist system has numerous immanent traits and innate tendencies, of which the study deals with three. 1. One basic feature of capitalism is dynamism, innovation and creative destruction; no one who ignores this fundamentally important phenomenon can give a full picture of capitalism. 2. Capitalism inevitably brings about a high degree of inequality; this must be eased by reforms, but it cannot be eliminated entirely. 3. The basic characteristics of capitalism, private ownership and the dominance of market coordination, give rise to powerful incentive mechanisms that encourage both owners and enterprise executives to innovate and to operate efficiently. One of the most important incentives is competition, especially oligopolistic competition. There is a strong interaction among these three important tendencies of the capitalist system. Piketty’s main subject, the distribution of income and wealth, cannot be properly understood if it is examined in isolation from the other two tendencies. The study concludes with the author’s own value judgements on the favourable and the harmful, unjust attributes of the capitalist system.
Abstract:
We examine assignment games, where matched pairs of firms and workers create some monetary value to distribute among themselves and the agents aim to maximize their payoff. In the majority of this literature, externalities – in the sense that a pair’s value depends on the pairing of the others – have been neglected. However, in most applications a firm’s success depends on, say, the success of its rivals and suppliers. Thus, it is natural to ask how the classical results on assignment games are affected by the introduction of externalities. The answer is: dramatically. We find that (i) a problem may have no stable outcome, (ii) stable outcomes can be inefficient (not maximize total value), (iii) efficient outcomes can be unstable, and (iv) the set of stable outcomes may not form a lattice. We show that stable outcomes always exist if agents are "pessimistic." This is a knife-edge result: there are problems in which the slightest optimism by a single pair erases all stable outcomes.
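As a toy illustration of why externalities change the picture (hypothetical values, not taken from the paper): once a pair's value depends on the whole matching, even finding the efficient, total-value-maximizing outcome requires evaluating entire matchings rather than summing independent pair values. A minimal brute-force sketch in Python:

    from itertools import permutations

    firms = ["f1", "f2"]
    workers = ["w1", "w2"]

    # Hypothetical base values for each firm-worker pair.
    BASE = {("f1", "w1"): 4, ("f1", "w2"): 3,
            ("f2", "w1"): 2, ("f2", "w2"): 5}

    def pair_value(firm, worker, matching):
        # Illustrative externality: f1's pair loses value whenever its rival
        # f2 is matched with w2, so a pair's value depends on the others' pairing.
        externality = -2 if firm == "f1" and matching.get("f2") == "w2" else 0
        return BASE[(firm, worker)] + externality

    def total_value(matching):
        return sum(pair_value(f, w, matching) for f, w in matching.items())

    matchings = [dict(zip(firms, perm)) for perm in permutations(workers)]
    best = max(matchings, key=total_value)
    print("efficient matching:", best, "with total value", total_value(best))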
Abstract:
This study was inspired by Piketty’s excellent and important book. Its title and numerous statements in it arouse in readers expectations of a comprehensive analysis of capitalism. By comparison, the author of this paper felt important aspects were lacking. The capitalist system has numerous immanent traits and innate tendencies, of which the paper takes a closer look at three properties. 1. One basic feature is dynamism, innovation, and creative destruction. No picture of capitalism can be full if this basic aspect is ignored. 2. Capitalism inevitably brings about a high degree of inequality; this must be eased by reforms, but cannot be entirely overcome. 3. The basic characteristics of capitalism – private ownership and the dominance of market coordination – give rise to strong incentive mechanisms that encourage both owners and enterprise executives to innovate and to cooperate effectively. One of the main incentives is competition, especially oligopolistic competition. There are strong mutual effects among these three important tendencies. It is impossible to understand well Piketty’s main subject, the distribution of income and wealth, if it is divorced from the other two tendencies. The study ends with its author’s own value judgements on the favourable and harmful, unjust attributes of the capitalist system.
Abstract:
In this thesis, a numerical program has been developed to simulate wave-induced ship motions in the time domain. Wave-body interactions have been studied for various ships and floating bodies through forced motion and free motion simulations over a wide range of wave frequencies. A three-dimensional Rankine panel method is applied to solve the boundary value problem for the wave-body interactions. The velocity potentials and normal velocities on the boundaries are obtained in the time domain by solving the mixed boundary integral equations in terms of the source and dipole distributions. The hydrodynamic forces are calculated by integrating the instantaneous hydrodynamic pressures over the body surface. The equations of ship motion are solved simultaneously with the boundary value problem at each time step. The wave elevation is computed by applying the linear free surface conditions. A numerical damping zone is adopted to absorb the outgoing waves in order to satisfy the radiation condition on the truncated free surface. A numerical filter is applied on the free surface to smooth the wave elevation. Good convergence has been reached for both forced and free motion simulations. The computed added-mass and damping coefficients, wave exciting forces, and motion responses for ships and floating bodies are in good agreement with numerical results from other programs and with experimental data.
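For context, the mixed boundary integral equation that Rankine panel methods of this kind discretize can be written, for a field point p on a smooth part of the wetted boundary S and the Rankine source G(p,q) = 1/|p - q|, in the generic textbook form below (up to the sign convention for the normal; this is not necessarily the exact formulation used in the thesis):

    \[
    2\pi\,\phi(p) + \iint_{S} \phi(q)\,\frac{\partial G(p,q)}{\partial n_q}\,\mathrm{d}S(q)
    = \iint_{S} \frac{\partial \phi(q)}{\partial n_q}\,G(p,q)\,\mathrm{d}S(q),
    \qquad G(p,q) = \frac{1}{|p-q|}.
    \]

Discretizing S into panels with piecewise-constant source and dipole strengths turns this into the linear system solved at each time step, with the potential prescribed on the free surface and the normal velocity prescribed on the body, which is the "mixed" character referred to above.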
Abstract:
Multi-frequency eddy current measurements are employed in estimating pressure tube (PT) to calandria tube (CT) gap in CANDU fuel channels, a critical inspection activity required to ensure fitness for service of fuel channels. In this thesis, a comprehensive characterization of eddy current gap data is laid out, in order to extract further information on fuel channel condition and to identify generalized applications for multi-frequency eddy current data. A surface profiling technique, generalizable to multiple probe and conductive material configurations, has been developed. This technique has allowed for the identification of various pressure tube artefacts, has been independently validated (using ultrasonic measurements), and has been deployed and commissioned at Ontario Power Generation. Dodd and Deeds solutions to the electromagnetic boundary value problem associated with the PT to CT gap probe configuration were experimentally validated for amplitude response to changes in gap. Using the validated Dodd and Deeds solutions, principal components analysis (PCA) has been employed to identify independence and redundancies in multi-frequency eddy current data. This has allowed for an enhanced visualization of factors affecting gap measurement. Results of the PCA of simulation data are consistent with the skin depth equation, and are validated against PCA of physical experiments. Finally, compressed data acquisition has been realized, allowing faster data acquisition for multi-frequency eddy current systems with hardware limitations, and is generalizable to other applications where real-time acquisition of large data sets is prohibitive.
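As a small illustration of the PCA step described above (synthetic, hypothetical data standing in for multi-frequency impedance channels, not the actual CANDU inspection data):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Hypothetical synthetic data: 500 measurements of real/imaginary impedance
    # components at four excitation frequencies (8 correlated channels), all
    # driven mainly by a single underlying "gap" variable plus noise.
    gap = rng.uniform(0.0, 10.0, size=500)
    sensitivity = np.array([1.0, 0.7, 0.4, 0.2])       # weaker response at higher frequency
    channels = np.r_[sensitivity, 0.5 * sensitivity]   # real and imaginary parts
    X = np.column_stack([s * gap + 0.05 * rng.standard_normal(500) for s in channels])

    pca = PCA()
    pca.fit(X)
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    # One dominant component indicates that the multi-frequency channels are
    # largely redundant with respect to the gap-driven variation, which is the
    # kind of redundancy the PCA in the thesis is used to expose.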
Abstract:
In order to determine the adequacy with which safety problems on low-volume rural roadways were addressed by the four states of Federal Region VII (Iowa, Kansas, Missouri, and Nebraska), a review was made of the states' safety policies. After reviewing literature dealing with the identification of hazardous locations, evaluation methodologies, and system-wide safety improvements, a survey of the states' safety policies was conducted. An official from each state was questioned about the various aspects and procedures dealing with safety improvements. After analyzing and comparing the remarkably diverse policies, recommendations were made in the form of a model safety program. This program included special modifications that would help remediate hazards on low-volume rural roadways. Especially encouraged is a system-wide approach to improvement which would cover all parts of the highway system, not just urban and high-volume roadways.
Abstract:
Children with Attention-Deficit/Hyperactivity Disorder (ADHD) are at increased risk for the development of depression and delinquent behavior. Children and adolescents with ADHD also experience difficulty creating/maintaining high quality friendships and parent-child relationships, and these difficulties may contribute to the development of co-morbid internalizing and externalizing symptoms in adolescence. However, there is limited research examining whether high quality friendships and parent-child relationships mediate the relation between ADHD and the emergence of these co-morbid symptoms at the transition to high school. This study examines the mediating role of relationship quality in the association between ADHD and depressive symptoms/delinquent behaviors at this developmentally significant transition point. Results revealed significant indirect effects of grade 6 attention problems on grade 9 depressive symptoms through friendship quality and quality of the mother-child relationship in grade 8. Interventions targeting parent and peer relationships may be valuable for youth with ADHD to promote successful transitions to high school.
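The indirect (mediated) effects referred to above are typically estimated as a product of path coefficients. A minimal sketch on simulated data (hypothetical variable names and effect sizes, purely illustrative of the method, not of the study's dataset):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    # Simulated stand-ins: X = grade 6 attention problems,
    # M = grade 8 friendship quality, Y = grade 9 depressive symptoms.
    X = rng.standard_normal(n)
    M = -0.5 * X + rng.standard_normal(n)
    Y = -0.4 * M + 0.1 * X + rng.standard_normal(n)

    a = sm.OLS(M, sm.add_constant(X)).fit().params[1]                         # X -> M path
    b = sm.OLS(Y, sm.add_constant(np.column_stack([M, X]))).fit().params[1]   # M -> Y path, controlling for X
    print("indirect effect a*b:", round(a * b, 3))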
Abstract:
Finding rare events in multidimensional data is an important detection problem that has applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may have never been observed, so the only information that is available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes, the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibits more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how this method can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. We then scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory within a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
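The iterative shrinkage algorithm mentioned above belongs to the ISTA family for sparsity-constrained problems of the form minimize 0.5*||A x - y||^2 + lambda*||x||_1. The sketch below shows a generic ISTA on a synthetic sparse-recovery problem; it is not the dissertation's shearlet-domain separation algorithm, and A, lambda and the data are hypothetical:

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam, n_iter=500):
        # Iterative shrinkage-thresholding for 0.5*||Ax - y||^2 + lam*||x||_1.
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Hypothetical demo: recover a sparse vector from noisy compressed measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [2.0, -1.5, 1.0]
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = ista(A, y, lam=0.05)
    print("indices of largest recovered coefficients:", np.argsort(-np.abs(x_hat))[:3])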
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
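As a concrete illustration of the connectivity notions used above (not of the thesis's algorithms), the local edge- and vertex-connectivity of a pair of nodes can be checked directly with networkx on a small hypothetical graph:

    import networkx as nx

    # Small hypothetical network: a 6-cycle with one chord.
    G = nx.cycle_graph(6)
    G.add_edge(0, 3)

    u, v = 1, 4
    print("edge connectivity of u,v  :", nx.edge_connectivity(G, u, v))   # 2
    print("vertex connectivity of u,v:", nx.node_connectivity(G, u, v))   # 2
    # u and v are k-edge-connected exactly when the first value is at least k,
    # i.e. no deletion of k-1 edges separates them; similarly for vertices.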
A class of domain decomposition preconditioners for hp-discontinuous Galerkin finite element methods
Abstract:
In this article we address the question of efficiently solving the algebraic linear system of equations arising from the discretization of a symmetric, elliptic boundary value problem using hp-version discontinuous Galerkin finite element methods. In particular, we introduce a class of domain decomposition preconditioners based on the Schwarz framework, and prove bounds on the condition number of the resulting iteration operators. Numerical results confirming the theoretical estimates are also presented.
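To make the Schwarz idea concrete, the sketch below builds a one-level additive Schwarz preconditioner from two overlapping subdomain solves for a 1D Poisson finite-difference matrix (a deliberately simplified stand-in, not the hp-discontinuous Galerkin setting of the article) and compares condition numbers:

    import numpy as np

    n = 60
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson, Dirichlet BCs

    def restriction(indices, n):
        R = np.zeros((len(indices), n))
        R[np.arange(len(indices)), indices] = 1.0
        return R

    overlap = 6
    sub1 = np.arange(0, n // 2 + overlap)
    sub2 = np.arange(n // 2 - overlap, n)
    M_inv = np.zeros((n, n))
    for idx in (sub1, sub2):
        R = restriction(idx, n)
        M_inv += R.T @ np.linalg.inv(R @ A @ R.T) @ R        # sum of local subdomain solves

    def cond(B):
        ev = np.linalg.eigvals(B).real
        return ev.max() / ev.min()

    print("cond(A)      :", round(cond(A), 1))
    print("cond(M^-1 A) :", round(cond(M_inv @ A), 1))
    # The preconditioned operator is far better conditioned; bounds of this type,
    # in terms of the overlap and the discretization parameters, are what the
    # article proves for its Schwarz preconditioners in the hp-DG setting.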
Abstract:
We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature, as a dynamic boundary condition, to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the 1990 pioneering model by Watts and Morantine. We take into consideration the latent heat for the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition, and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models, which generalizes previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and a total variation diminishing Runge–Kutta scheme for time integration.
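The time integrator named above, total variation diminishing (strong-stability-preserving) Runge-Kutta, has a standard third-order form due to Shu and Osher. The sketch below applies it to 1D linear advection with a first-order upwind finite-volume flux on a periodic grid; it illustrates the integrator only, not the article's WENO-based climate model (grid, speed and initial data are hypothetical):

    import numpy as np

    n, c = 200, 1.0
    dx = 1.0 / n
    dt = 0.4 * dx / c                        # CFL-limited time step
    x = (np.arange(n) + 0.5) * dx
    u = np.exp(-200.0 * (x - 0.3) ** 2)      # hypothetical initial profile
    mass0 = u.sum() * dx

    def rhs(u):
        # First-order upwind finite-volume flux divergence for c > 0, periodic domain.
        flux = c * u
        return -(flux - np.roll(flux, 1)) / dx

    for _ in range(int(0.3 / dt)):
        # Third-order TVD (SSP) Runge-Kutta stages of Shu and Osher.
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        u = u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

    print("relative mass change after advection:", abs(u.sum() * dx - mass0) / mass0)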
Abstract:
Background: Indices predictive of central obesity include waist circumference (WC) and waist-to-height ratio (WHtR). The aims of this study were 1) to establish smoothed centile charts and LMS tables for WC and WHtR in Colombian youth and 2) to evaluate the utility of these parameters as predictors of overweight and obesity. Method: A cross-sectional study whose sample population comprised 7954 healthy Colombian schoolchildren [boys n=3460 and girls n=4494, mean (standard deviation) age 12.8 (2.3) years]. Weight, height, body mass index (BMI), WC and WHtR and their percentiles were calculated. Appropriate cut-off points of WC and WHtR for overweight and obesity, as defined by the International Obesity Task Force (IOTF), were selected using receiver operating characteristic (ROC) analysis. The discriminating power of WC and WHtR was expressed as the area under the curve (AUC). Results: Reference values for WC and WHtR are presented. Mean WC increased and WHtR decreased with age for both genders. We found a moderate positive correlation between WC and BMI (r=0.756, P < 0.01) and between WHtR and BMI (r=0.604, P < 0.01). The ROC analysis showed high discriminating power in the identification of overweight and obesity for both measures in our sample population. Overall, WHtR was a slightly better predictor of overweight/obesity (AUC 95% CI 0.868-0.916) than WC (AUC 95% CI 0.862-0.904). Conclusion: This paper presents the first sex- and age-specific WC and WHtR percentiles for both measures among Colombian children and adolescents aged 9–17.9 years. By providing LMS tables for Latin-American people based on Colombian reference data, we hope to provide quantitative tools for the study of obesity and its comorbidities.
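A small sketch of the ROC-based cut-off selection described above, on synthetic data (hypothetical WHtR distributions and prevalence, not the Colombian sample):

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical data: IOTF-defined overweight/obesity status and WHtR values.
    status = rng.binomial(1, 0.25, size=n)
    whtr = np.where(status == 1,
                    rng.normal(0.52, 0.04, size=n),
                    rng.normal(0.44, 0.04, size=n))

    auc = roc_auc_score(status, whtr)
    fpr, tpr, thresholds = roc_curve(status, whtr)
    best = np.argmax(tpr - fpr)              # Youden index picks the cut-off
    print(f"AUC = {auc:.3f}, suggested WHtR cut-off = {thresholds[best]:.3f}")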