917 results for Quasi-Sure Convergence
Abstract:
The purpose of this paper is to analyze the quasi-elastic deformational behavior induced by groundwater withdrawal in the Tertiary detrital aquifer of Madrid (Spain). The spatial and temporal evolution of ground surface displacement was estimated by processing two datasets of radar satellite images (SAR) using Persistent Scatterer Interferometry (PSI). The first SAR dataset was acquired between April 1992 and November 2000 by the ERS-1 and ERS-2 satellites, and the second by the ENVISAT satellite between August 2002 and September 2010. The spatial distribution of PSI measurements reveals that the magnitude of the displacement increases gradually towards the center of the well field area, where a maximum cumulative displacement of approximately 80 mm is registered. Correlation analysis between the displacement and piezometric time series yields a correlation coefficient greater than 85% for all the wells. The elastic and inelastic components of the measured displacements were separated, showing that the elastic component is, on average, more than 4 times the inelastic component over the studied period. Moreover, the hysteresis loops in the stress–strain plots indicate that the response is in the elastic range. These results demonstrate the quasi-elastic behavior of the aquifer: during the aquifer recovery phase, ground surface uplift almost fully recovers the subsidence experienced during the preceding extraction phase. Given this behavior, a one-dimensional elastic model was calibrated for the period 1997–2000. Subsequently, the model was used to predict ground surface movements over the period 1992–2010. Modeled displacements were validated against PSI displacement measurements, exhibiting an error of 13% on average, related to the inelastic component of deformation occurring as a long-term trend in low-permeability fine-grained units. This result further demonstrates the quasi-elastic deformational behavior of this unique aquifer system.
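The calibrate-then-predict workflow described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' model or data): it assumes a purely elastic one-dimensional response in which displacement is proportional to piezometric head change through an elastic skeletal storage coefficient, fits that coefficient by least squares on a synthetic series, and reports the misfit over the full record.

```python
# Minimal 1-D elastic aquifer-response sketch; synthetic data and a
# hypothetical coefficient value -- only the structure (calibrate on a
# sub-window, then compare predictions against observed displacements)
# mirrors the workflow described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
head = 5.0 * np.sin(2 * np.pi * t / 50)            # synthetic head change (m)
true_ske = 1.6e-3                                  # assumed elastic skeletal storage (-)
disp_obs = true_ske * head * 1e3 + rng.normal(0, 0.5, t.size)  # displacement (mm)

# Least-squares calibration of S_ke on an early sub-window (analogous to
# calibrating on 1997-2000 and predicting over 1992-2010).
cal = slice(0, 80)
ske_fit = head[cal] @ disp_obs[cal] / (head[cal] @ head[cal]) / 1e3
disp_mod = ske_fit * head * 1e3                    # predicted displacement (mm)

err = np.mean(np.abs(disp_mod - disp_obs)) / np.ptp(disp_obs)
print(f"fitted S_ke = {ske_fit:.2e}, mean relative misfit = {err:.1%}")
```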
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas: in medicine for volumetric reconstruction of tomography data, in robotics to reconstruct surfaces or scenes from range-sensor information, in industrial systems for quality control of manufactured objects, and even in biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance by reducing the number of points, the number of iterations, or the complexity of the most expensive phase: the nearest-neighbor search. Although they reduce the complexity, some of these variants tend to degrade the final registration precision or the convergence domain, limiting the possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. To that end, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, the de facto standard in implementations of the algorithm. In this analysis, the behavior of the algorithm in diverse topological spaces, each characterized by a different metric, has been studied to assess the convergence, efficacy and cost of the method, and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in its cost can be expected to improve the overall performance of the method significantly. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance over a heterogeneous set of objects, scenarios and initial configurations.
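Since this abstract describes swapping cheaper point-to-point metrics into the correspondence search, here is a minimal point-to-point ICP sketch in which the nearest-neighbour metric is a parameter (p = 1 for Manhattan, 2 for Euclidean, numpy.inf for Chebyshev). This is an illustrative implementation under standard assumptions, not the variant studied in the work: only the matching step uses the alternative metric, while the rigid update remains the usual closed-form least-squares (Kabsch/SVD) solution.

```python
# Point-to-point ICP with a pluggable Minkowski metric for the
# nearest-neighbour search; illustrative sketch, not the paper's code.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, p=2, iters=50, tol=1e-9):
    """Rigidly align src (N, 3) to dst (M, 3); returns (R, t)."""
    tree = cKDTree(dst)                    # built once; reused each iteration
    R, t = np.eye(3), np.zeros(3)
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        d, idx = tree.query(cur, p=p)      # correspondences under metric p
        matched = dst[idx]
        # Closed-form rigid update (Kabsch/SVD): the L2-optimal alignment of
        # the current correspondences, whatever metric selected them.
        cs, cm = cur.mean(0), matched.mean(0)
        H = (cur - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Ri = Vt.T @ D @ U.T                # proper rotation (reflection-corrected)
        ti = cm - Ri @ cs
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti         # accumulate the total transform
        err = d.mean()
        if abs(prev_err - err) < tol:      # stop when the mean residual stalls
            break
        prev_err = err
    return R, t
```

With p = 1 or p = numpy.inf, each per-pair distance requires only absolute differences and comparisons rather than squares and a root, which is where the cost reduction discussed in the abstract would come from.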
Abstract:
In recent times the Douglas–Rachford algorithm has been observed empirically to solve a variety of nonconvex feasibility problems, including those of a combinatorial nature. For many of these problems, current theory is not sufficient to explain this observed success and is mainly concerned with questions of local convergence. In this paper we analyze the global behavior of the method for finding a point in the intersection of a half-space and a potentially nonconvex set which is assumed to satisfy a well-quasi-ordering property or a property weaker than compactness. In particular, the special case in which the second set is finite is covered by our framework and provides a prototypical setting for combinatorial optimization problems.
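For concreteness, the following is a small numerical sketch of the Douglas–Rachford iteration in the prototypical setting this abstract mentions: a half-space intersected with a finite (hence nonconvex) set. The projections are standard and the test data are illustrative, not taken from the paper.

```python
# Douglas-Rachford iteration x <- x + P_B(2 P_A(x) - x) - P_A(x) for the
# feasibility problem A ∩ B, with A a half-space and B a finite set.
# Illustrative data; not from the paper.
import numpy as np

a, b = np.array([1.0, 1.0]), 1.0                        # A = {x : <a, x> <= b}
B = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # finite set

def proj_A(x):                                          # projection onto the half-space
    g = a @ x - b
    return x - max(g, 0.0) / (a @ a) * a

def proj_B(x):                                          # projection onto the finite set
    return B[np.argmin(np.linalg.norm(B - x, axis=1))]

x = np.array([2.0, 2.0])                                # arbitrary starting point
for _ in range(100):
    pa = proj_A(x)
    x = x + proj_B(2.0 * pa - x) - pa
    shadow = proj_A(x)                                  # monitored 'shadow' point
    if np.allclose(proj_B(shadow), shadow):             # shadow lies in both sets
        break
print("feasible point:", shadow)
```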
Abstract:
Population balances of polymer species, written in terms of discrete transforms with respect to counts of groups, lead to tractable first-order partial differential equations when all rate constants are independent of chain length and loop formation is negligible [1]. Average molecular weights in the absence of gelation have long been known to be readily obtainable through integration of an initial value problem. The extension to size distribution prediction is also feasible, but its performance is often inferior to that of methods based on the real chain-length domain [2]. Moreover, the absence of a good starting procedure and a higher numerical sensitivity have decisively impaired its application to non-linear reversibly deactivated polymerizations, namely NMRP [3].
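The initial-value-problem route to average molecular weights mentioned above can be illustrated with the simplest possible case. The sketch below is a generic living-polymerization moment model with chain-length-independent propagation, deliberately far simpler than the NMRP kinetics the abstract targets; the moments λ0, λ1, λ2 of the chain-length distribution are integrated as an IVP, and the number- and weight-average molecular weights follow as moment ratios. All rate and concentration values are hypothetical.

```python
# Moment equations for an idealized living polymerization with
# chain-length-independent propagation P(n) + M -> P(n+1);
# illustrative only, not the reversibly deactivated (NMRP) kinetics.
from scipy.integrate import solve_ivp

kp, m0 = 1.0e2, 100.0    # hypothetical: propagation constant (L/mol/s), monomer mass (g/mol)

def moments(t, y):
    M, l0, l1, l2 = y    # monomer and moments of the chain-length distribution
    r = kp * M * l0
    # dM/dt, dl0/dt, dl1/dt, dl2/dt (l0 is conserved: no initiation/termination)
    return [-r, 0.0, r, kp * M * (2.0 * l1 + l0)]

y0 = [1.0, 1e-3, 1e-3, 1e-3]                 # [M] = 1 mol/L; 1e-3 mol/L chains of length 1
sol = solve_ivp(moments, (0.0, 60.0), y0, rtol=1e-8)
M, l0, l1, l2 = sol.y[:, -1]
Mn, Mw = m0 * l1 / l0, m0 * l2 / l1          # averages as moment ratios
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {Mw / Mn:.3f}")
```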
Abstract:
Doctoral thesis, Linguistics (Educational Linguistics), Universidade de Lisboa, Faculdade de Letras, 2016
Abstract:
An ambitious, comprehensive and high-standard trade and investment agreement between the European Union and the United States is feasible, but a key concern is whether the transatlantic trade partners will succeed in creating a meaningful agreement within the tight timeline of the Transatlantic Trade and Investment Partnership (TTIP) negotiations. The target of a ratified pact before a new European Commission takes office in November 2014 is likely to conflict with the level of ambition on substance. Regulatory congruence would require the unilateral and unconditional recognition by the TTIP partners of each other's standards, procedures and conformity assessment tests. The way forward is to create a 'living' (or progressive-commitment) agreement on regulatory cooperation, with a horizontal template for coherence and conformity assessment and a detailed monitoring mechanism, and with implementation starting immediately for a few selected sectors. Regulatory harmonisation under TTIP may not lead emerging markets to upgrade automatically to the higher TTIP standards. Domestic priorities and high demand from a rising group of price-sensitive consumers will likely result in a dual regulatory regime in emerging markets in the medium term.
Abstract:
In this Working Paper, based on nearly 20 papers produced by the Centre for European Policy Studies, the Slovak Governance Institute and the Conference Board Europe, we examine whether current trends in education and skills are pushing the European Union towards convergence or polarisation, covering a wide range of questions related to this main issue. No easy answers emerged from the research, but several cross-cutting messages did. We demonstrate that there is increasing complexity in what a 'low-skilled' person is and in how well (or poorly) s/he fares in the labour market. There are undoubtedly powerful forces pushing for more polarisation, particularly in the labour market. Our research confirms that early childhood education plays an important role, and it also appears to be increasingly uncontested as a policy prescription. However, the other frequently emphasised remedy to inequality – less selection in secondary education, particularly later division of children into separate tracks – is more problematic: its effectiveness depends on the country in question and the target group, while education systems are extremely difficult to shift even over the long term. A different, more nuanced type of warning to policy-makers is delivered by our research on returns to higher education by field of study, which revealed hidden rationality in how students choose their major.