Abstract:
European labour markets are increasingly divided between insiders in full-time permanent employment and outsiders in precarious work or unemployment. Using quantitative as well as qualitative methods, this thesis investigates the determinants and consequences of labour market policies that target these outsiders in three separate papers. The first paper looks at Active Labour Market Policies (ALMPs) that target the unemployed. It shows that left- and right-wing parties choose different types of ALMPs depending on the policy and the welfare regime in which the party is located. These findings reconcile the conflicting theoretical expectations of the Power Resource approach and the insider-outsider theory. The second paper considers the regulation and protection of the temporary work sector. It solves the puzzle of temporary re-regulation in France, which contrasts with most other European countries that have deregulated temporary work. Permanent workers are adversely affected by the expansion of temporary work in France because of general skills and low wage coordination. The interests of temporary and permanent workers in re-regulation therefore overlap in France, and left governments have an incentive to re-regulate the sector. The third paper then investigates what determines inequality between median- and bottom-income workers. It shows that non-inclusive economic coordination increases inequality in the absence of compensating institutions such as minimum wage regulation. The deregulation of temporary work, as well as spending on employment incentives and rehabilitation, also has adverse effects on inequality. Thus, policies that target outsiders have important economic effects on the rest of the workforce. Three broader contributions can be identified. First, welfare state policies may not always be in the interests of labour, so left parties may not always promote them. Second, the interests of insiders and outsiders are not necessarily at odds. Third, economic coordination may not be conducive to egalitarianism where it is not inclusive.
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgments and memory performance), researchers often rely on by-participant analysis, in which metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis compared to the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend the mixed-effects model analysis instead.
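The by-participant pipeline under discussion can be sketched as follows (toy data and a hand-rolled gamma coefficient; purely illustrative, not the authors' code or simulation design):

```python
import numpy as np
from itertools import combinations

def gamma_coefficient(judgments, outcomes):
    """Goodman-Kruskal gamma between metacognitive judgments and
    memory outcomes, computed over all item pairs (ties dropped)."""
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        s = np.sign(j1 - j2) * np.sign(o1 - o2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")
    return (concordant - discordant) / (concordant + discordant)

def by_participant_t(judgment_matrix, outcome_matrix):
    """Conventional pipeline: one gamma per participant (rows),
    then a one-sample t statistic against zero."""
    gammas = np.array([gamma_coefficient(j, o)
                       for j, o in zip(judgment_matrix, outcome_matrix)])
    n = len(gammas)
    return gammas.mean() / (gammas.std(ddof=1) / np.sqrt(n)), gammas

# toy data: 3 participants x 4 items (judgments and recall outcomes)
judgments = np.array([[1, 2, 3, 4], [4, 3, 2, 1], [1, 3, 2, 4]])
outcomes  = np.array([[0, 0, 1, 1], [1, 1, 0, 0], [1, 0, 0, 1]])
t_stat, gammas = by_participant_t(judgments, outcomes)
```

The paper's point is that this t-test treats participant-level gammas as independent while ignoring item-level variability; a mixed-effects model with crossed participant and item random effects avoids that inflation.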
Abstract:
The Ultra Weak Variational Formulation (UWVF) is a powerful numerical method for the approximation of acoustic, elastic and electromagnetic waves in the time-harmonic regime. The use of Trefftz-type basis functions incorporates the known wave-like behaviour of the solution into the discrete space, allowing large reductions in the number of degrees of freedom required for a given accuracy compared to standard finite element methods. However, the UWVF is not well suited to the accurate approximation of singular sources in the interior of the computational domain. We propose an adjustment to the UWVF for seismic imaging applications, which we call the Source Extraction UWVF. Different fields are solved for in subdomains around the source and matched on the inter-domain boundaries. Numerical results are presented for a domain of constant wavenumber and for a domain of varying sound speed in a model used for seismic imaging.
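The Trefftz property means each basis function solves the underlying PDE exactly; for the Helmholtz (acoustic) case with wavenumber $\kappa$, one standard choice (assumed here for illustration, not necessarily the basis used in the paper) is the plane-wave basis:

```latex
\varphi_\ell(\mathbf{x}) = \exp\!\big(i\kappa\,\mathbf{d}_\ell\cdot\mathbf{x}\big),
\qquad |\mathbf{d}_\ell| = 1,
\qquad \Delta\varphi_\ell + \kappa^2\varphi_\ell = 0,
```

with the unit directions $\mathbf{d}_\ell$ typically equidistributed on the unit circle or sphere.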
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply in the form of a nonlinear least-squares optimization problem. However, the practical solution of the problem in large systems requires many careful implementation choices. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
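In the linear, Gaussian special case the least-squares problem has a closed-form minimiser via the normal equations; a minimal sketch (toy matrices assumed, not any operational configuration):

```python
import numpy as np

def var_analysis(xb, B, y, H, R):
    """Minimise J(x) = 0.5 (x-xb)^T B^{-1} (x-xb)
                     + 0.5 (y-Hx)^T R^{-1} (y-Hx).
    For linear H the minimiser solves the normal equations
    (B^{-1} + H^T R^{-1} H) x = B^{-1} xb + H^T R^{-1} y."""
    Binv = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)
    A = Binv + H.T @ Rinv @ H          # Hessian of J
    b = Binv @ xb + H.T @ Rinv @ y
    return np.linalg.solve(A, b)

xb = np.array([1.0, 2.0])              # background (forecast) state
y  = np.array([3.0, 4.0])              # direct observations of both components
xa = var_analysis(xb, np.eye(2), y, np.eye(2), np.eye(2))
```

With equal background and observation error covariances, the analysis falls halfway between forecast and observations; in large systems this solve is replaced by iterative minimisation.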
Abstract:
We consider a generic basic semi-algebraic subset S of the space of generalized functions, that is a set given by (not necessarily countably many) polynomial constraints. We derive necessary and sufficient conditions for an infinite sequence of generalized functions to be realizable on S, namely to be the moment sequence of a finite measure concentrated on S. Our approach combines the classical results about the moment problem on nuclear spaces with the techniques recently developed to treat the moment problem on basic semi-algebraic sets of R^d. In this way, we determine realizability conditions that can be more easily verified than the well-known Haviland type conditions. Our result completely characterizes the support of the realizing measure in terms of its moments. As concrete examples of semi-algebraic sets of generalized functions, we consider the set of all Radon measures and the set of all the measures having bounded Radon–Nikodym density w.r.t. the Lebesgue measure.
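Schematically (with notation assumed here rather than taken from the paper), a sequence $(m^{(n)})_{n\in\mathbb{N}}$ is realizable on $S$ when there exists a finite measure $\mu$ concentrated on $S$ whose moments it reproduces:

```latex
m^{(n)}(f_1,\dots,f_n) \;=\; \int_{S} \langle\varphi, f_1\rangle \cdots \langle\varphi, f_n\rangle \,\mu(\mathrm{d}\varphi),
\qquad n\in\mathbb{N},
```

for all test functions $f_1,\dots,f_n$ in the underlying nuclear space.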
Abstract:
This letter presents an accurate delay analysis in prioritised wireless sensor networks (WSN). The analysis is an enhancement of the existing analysis proposed by Choobkar and Dilmaghani, which is only applicable to the case where the lower priority nodes always have packets to send in the empty slots of the higher priority node. The proposed analysis is applicable for any pattern of packet arrival, which includes the general case where the lower priority nodes may or may not have packets to send in the empty slots of the higher priority nodes. Evaluation of both analyses showed that the proposed delay analysis has better accuracy over the full range of loads and provides an excellent match to simulation results.
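The setting can be illustrated with a toy slotted-access simulation (a hypothetical Bernoulli-arrival model for illustration only, not the analysis proposed in the letter): the low-priority node transmits only in slots left empty by the high-priority node.

```python
import random

def simulate(p_hi, p_lo, n_slots, seed=0):
    """Toy slotted-access model: the high-priority node transmits
    whenever it has a packet queued; the low-priority node may use
    only the remaining (empty) slots.  Returns the mean queueing
    delay in slots for each priority class."""
    rng = random.Random(seed)
    q_hi, q_lo = [], []          # arrival times of queued packets
    d_hi, d_lo = [], []          # per-packet delays
    for t in range(n_slots):
        if rng.random() < p_hi:  # Bernoulli arrival, high priority
            q_hi.append(t)
        if rng.random() < p_lo:  # Bernoulli arrival, low priority
            q_lo.append(t)
        if q_hi:                 # high priority owns the slot
            d_hi.append(t - q_hi.pop(0))
        elif q_lo:               # low priority fills empty slots
            d_lo.append(t - q_lo.pop(0))
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(d_hi), mean(d_lo)

delay_hi, delay_lo = simulate(p_hi=0.3, p_lo=0.3, n_slots=20000)
```

The letter's contribution is an analytical delay expression that matches such simulations for arbitrary arrival patterns, not only the saturated low-priority case assumed by Choobkar and Dilmaghani.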
Abstract:
Latin America is known as the most unequal region in the world, where extreme displays of wealth and exposure to scarcity are laid bare in the urban landscape. Inequality is not just a social issue; it has considerable impact on economic development. This is because social inequality generates instability and conflict, which can create unsettling conditions for investment. At the macro level, social inequality can also present barriers to economic development, as most government policies and resources tend to be directed at resolving social conflict rather than at promoting and generating growth. This is one of the reasons usually cited to explain the development gap between Latin America and other emerging economies: East Asia, for example, has applied policies similar to those recently adopted in Latin America, yet achieves better growth. The other reason cited is institutional; this includes governance as well as property rights and the enforcement of contracts. The latter is the focus of this chapter.
Abstract:
Accurate and reliable rain rate estimates are important for various hydrometeorological applications. Consequently, rain sensors of different types have been deployed in many regions. In this work, measurements from different instruments, namely, rain gauge, weather radar, and microwave link, are combined for the first time to estimate with greater accuracy the spatial distribution and intensity of rainfall. The objective is to retrieve the rain rate that is consistent with all these measurements while incorporating the uncertainty associated with the different sources of information. Assuming the problem is not strongly nonlinear, a variational approach is implemented and the Gauss–Newton method is used to minimize the cost function containing proper error estimates from all sensors. Furthermore, the method can be flexibly adapted to additional data sources. The proposed approach is tested using data from 14 rain gauges and 14 operational microwave links located in the Zürich area (Switzerland) to correct the prior rain rate provided by the operational radar rain product from the Swiss meteorological service (MeteoSwiss). A cross-validation approach demonstrates the improvement of rain rate estimates when assimilating rain gauge and microwave link information.
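A minimal Gauss–Newton sketch of such a multi-sensor retrieval (all numbers, grid sizes and the power-law link model k = aR^b are assumptions for illustration, not the operational MeteoSwiss setup):

```python
import numpy as np

# Hypothetical 1-D rain field on 5 pixels (mm/h); toy values throughout.
truth = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
xb = np.full(5, 1.5)                   # prior field, e.g. from radar
B = 0.5**2 * np.eye(5)                 # prior error covariance

# Rain gauges: direct (linear) point observations of pixels 0, 2, 4.
H_gauge = np.eye(5)[[0, 2, 4]]
y_gauge = H_gauge @ truth
R_gauge = 0.1**2 * np.eye(3)

# Microwave link: path-averaged specific attenuation, assumed k = a R^b.
a, b = 0.1, 1.5
def h_link(x):
    return np.array([a * np.mean(x**b)])
def jac_link(x):
    return (a * b * x**(b - 1) / x.size)[None, :]
y_link = h_link(truth)
R_link = 0.01**2 * np.eye(1)

def cost(x):
    d, rg, rl = x - xb, y_gauge - H_gauge @ x, y_link - h_link(x)
    return (d @ np.linalg.solve(B, d)
            + rg @ np.linalg.solve(R_gauge, rg)
            + rl @ np.linalg.solve(R_link, rl))

# Gauss-Newton: linearise the link operator, solve the normal equations.
x = xb.copy()
for _ in range(15):
    Jl = jac_link(x)
    hess = (np.linalg.inv(B)
            + H_gauge.T @ np.linalg.inv(R_gauge) @ H_gauge
            + Jl.T @ np.linalg.inv(R_link) @ Jl)
    grad = (np.linalg.solve(B, xb - x)
            + H_gauge.T @ np.linalg.solve(R_gauge, y_gauge - H_gauge @ x)
            + Jl.T @ np.linalg.solve(R_link, y_link - h_link(x)))
    x = np.maximum(x + np.linalg.solve(hess, grad), 0.01)  # keep rain >= 0
```

Each sensor enters the cost function with its own error covariance, which is how the approach weighs the uncertainty of the different information sources; adding another data source means appending one more residual term.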
Discontinuous Galerkin methods for the p-biharmonic equation from a discrete variational perspective
Abstract:
We study discontinuous Galerkin approximations of the p-biharmonic equation for p∈(1,∞) from a variational perspective. We propose a discrete variational formulation of the problem based on an appropriate definition of a finite element Hessian and study convergence of the method (without rates) using a semicontinuity argument. We also present numerical experiments aimed at testing the robustness of the method.
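For reference, the underlying continuous problem can be written in energy form (a standard statement; the paper's discrete formulation replaces the Laplacian by a finite element Hessian rather than discretising this strong form directly):

```latex
\min_{u}\; \frac{1}{p}\int_\Omega |\Delta u|^p \,\mathrm{d}x \;-\; \int_\Omega f\,u \,\mathrm{d}x,
\qquad
\text{with Euler--Lagrange equation}\quad
\Delta\!\big(|\Delta u|^{p-2}\,\Delta u\big) = f ,
```

posed with suitable boundary conditions for $p \in (1,\infty)$.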
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally minimised using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data; it can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using these bounds, we show that the sensitivities of both formulations are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
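The role of the Hessian's condition number and the effect of preconditioning can be illustrated numerically: for a linear-Gaussian problem the Hessian is B^{-1} + H^T R^{-1} H, and the control-variable transform x = B^{1/2} v is one standard preconditioner (the toy covariances below are assumptions for illustration, not taken from the thesis).

```python
import numpy as np

n = 40
i = np.arange(n)
Lc = 6.0                                   # assumed correlation length-scale
# toy background covariance with Gaussian correlations (+ jitter for inversion)
B = np.exp(-(i[:, None] - i[None, :])**2 / (2 * Lc**2)) + 1e-6 * np.eye(n)
H = np.eye(n)[::2]                         # observe every other grid point
R = 0.5 * np.eye(n // 2)                   # observation error covariance

# raw Hessian of the variational cost function
hess = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H

# control-variable transform: the preconditioned Hessian is
# I + B^{1/2} H^T R^{-1} H B^{1/2}, with eigenvalues bounded below by 1
w, V = np.linalg.eigh(B)
B_half = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
hess_pre = np.eye(n) + B_half @ H.T @ np.linalg.inv(R) @ H @ B_half

cond_raw = np.linalg.cond(hess)
cond_pre = np.linalg.cond(hess_pre)
```

Longer correlation length-scales make B, and hence the raw Hessian, increasingly ill-conditioned, while the preconditioned Hessian stays well-conditioned; this is the kind of sensitivity the bounds in the thesis quantify.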
Abstract:
A generalization of Arakawa and Schubert's convective quasi-equilibrium principle is presented for a closure formulation of mass-flux convection parameterization. The original principle is based on the budget of the cloud work function. This principle is generalized by considering the budget for a vertical integral of an arbitrary convection-related quantity. The closure formulation includes Arakawa and Schubert's quasi-equilibrium, as well as both CAPE and moisture closures as special cases. The formulation also includes new possibilities for considering vertical integrals that are dependent on convective-scale variables, such as the moisture within convection. The generalized convective quasi-equilibrium is defined by a balance between large-scale forcing and convective response for a given vertically-integrated quantity. The latter takes the form of a convolution of a kernel matrix and a mass-flux spectrum, as in the original convective quasi-equilibrium. The kernel reduces to a scalar when either a bulk formulation is adopted, or only large-scale variables are considered within the vertical integral. Various physical implications of the generalized closure are discussed. These include the possibility that precipitation might be considered as a potentially-significant contribution to the large-scale forcing. Two dicta are proposed as guiding physical principles for specifying a suitable vertically-integrated quantity.
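Schematically (with notation assumed here rather than taken from the paper), the generalized quasi-equilibrium balance for the vertically-integrated quantity $A_i$ of convective type $i$ reads:

```latex
\frac{\partial A_i}{\partial t} \;=\; F_i \;-\; \sum_j K_{ij}\, M_j \;\approx\; 0
\quad\Longrightarrow\quad
\sum_j K_{ij}\, M_j \;=\; F_i ,
```

where $F_i$ is the large-scale forcing, $M_j$ the mass-flux spectrum, and $K_{ij}$ the kernel matrix; in a bulk formulation the sum collapses and $K$ reduces to a scalar.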
Abstract:
This article reviews the shortcomings of the current UK planning system to address urban inequalities and segregation of impoverished communities.
Abstract:
An important challenge for conservation today is to understand the endangerment process and identify any generalized patterns in how threats occur and aggregate across taxa. Here we use a global database describing the main current external threats in mammals to evaluate the prevalence of distinct threatening processes, primarily of anthropogenic origin, and to identify generalized drivers of extinction and their association with vulnerability status and intrinsic species traits. We detect several primary threat combinations that are generally associated with distinct species. In particular, large and widely distributed mammals are affected by combinations of direct exploitation and threats associated with increasing landscape modification that range from logging to intense human land-use. Meanwhile, small, narrowly distributed species are affected by intensifying levels of landscape modification but are not directly exploited. In general, more vulnerable species are affected by a greater number of threats, suggesting that increased extinction risk is associated with the accumulation of external threats. Overall, our findings show that endangerment in mammals is strongly associated with increasing habitat loss and degradation caused by human land-use intensification. For large and widely distributed mammals there is the additional risk of being hunted.
Abstract:
In this work, we prove a weak Noether-type theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether's first theorem (1918). We summarise extensive numerical tests illustrating the conservation of the discrete Noether law, using the p-Laplacian as an example, and derive a geometry-based adaptive algorithm in which an appropriate Noether quantity is the goal functional.
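For orientation, the translation-invariance case of the continuous result the discrete laws mimic can be stated schematically (notation assumed here): if the Lagrangian $L(x, u, \nabla u)$ does not depend explicitly on $x_j$, then along any smooth extremal the energy-momentum current is divergence-free,

```latex
\partial_j \Big( L\,\delta_{ij} \;-\; u_{,i}\,\frac{\partial L}{\partial u_{,j}} \Big) \;=\; 0,
\qquad i = 1,\dots,d .
```

The paper's weak version extends such laws to broken extremals and to their finite element counterparts.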