113 results for Spline Subdivision Schemes
Abstract:
In this paper we describe an open learning object repository on Statistics based on DSpace, which contains true learning objects, that is, exercises, equations, data sets, etc. This repository is part of a larger project intended to promote the use of learning object repositories as part of the learning process in virtual learning environments. This involves the creation of a new user interface that provides users with additional services such as resource rating, commenting, and so on. Both aspects render traditional metadata schemes such as Dublin Core inadequate: there are resources with no title or author, for instance, because those fields are not used by learners to browse and search for learning resources in the repository. Therefore, exporting OAI-PMH compliant records using OAI-DC is not possible, limiting the visibility of the repository's learning objects outside the institution. We propose an architecture based on ontologies and the use of extended metadata records for both storing and refactoring such descriptions.
Stabilized Petrov-Galerkin methods for the convection-diffusion-reaction and the Helmholtz equations
Abstract:
We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations, respectively. The work begins with an a priori analysis of some consistency-recovery procedures for stabilization methods belonging to the Petrov–Galerkin framework. It was found that the use of some standard practices (e.g. M-matrices theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency-recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications on the formulation of the maximum principle, M-matrices theory, monotonicity, and total variation diminishing (TVD) finite volume schemes. The method follows earlier approaches that may be viewed as an upwinding operator plus a discontinuity-capturing operator. Finally, some remarks are made on the extension of the HRPG method to multiple dimensions. Next, we present a new numerical scheme for the Helmholtz equation that yields quasi-exact solutions. The focus is on approximating the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analyses in 1D and 2D illustrate the quasi-exact properties of this scheme.
Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
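The stencil-averaging idea can be checked through the 1D dispersion relation of the Helmholtz equation u'' + k²u = 0 on a uniform mesh. The sketch below is our own reconstruction for illustration, not the authors' code: each scheme is a three-point stencil with mass weights (a, b, a), and a plane wave u_j = exp(i·k̃·j·h) yields cos(k̃h) = (2 − b(kh)²)/(2 + 2a(kh)²).

```python
import math

def numerical_wavenumber(k, h, a, b):
    # Stencil: (u[j-1] - 2u[j] + u[j+1])/h**2 + k**2*(a*u[j-1] + b*u[j] + a*u[j+1]) = 0.
    # Inserting the plane wave u[j] = exp(i*kt*j*h) and solving for kt gives:
    kh2 = (k * h) ** 2
    return math.acos((2.0 - b * kh2) / (2.0 + 2.0 * a * kh2)) / h

k = 1.0
h = 2.0 * math.pi / (8.0 * k)  # eight elements per wavelength, as the abstract requires

kt_fd  = numerical_wavenumber(k, h, 0.0, 1.0)            # classical finite differences
kt_fem = numerical_wavenumber(k, h, 1/6, 4/6)            # Galerkin FEM, consistent mass
kt_avg = numerical_wavenumber(k, h, 1/12, 10/12)         # average of the two stencils
```

With these weights the averaged stencil's numerical wavenumber tracks the exact k far more closely than either parent scheme, which is the "quasi-exact" behavior the abstract describes.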
Abstract:
We introduce and analyze two new semi-discrete numerical methods for the multi-dimensional Vlasov-Poisson system. The schemes are constructed by combining a discontinuous Galerkin approximation to the Vlasov equation with a mixed finite element method for the Poisson problem. We show optimal error estimates in the case of smooth compactly supported initial data. We propose a scheme that preserves the total energy of the system.
Abstract:
We consider linear optimization over a nonempty convex semi-algebraic feasible region F. Semidefinite programming is an example. If F is compact, then for almost every linear objective there is a unique optimal solution, lying on a unique "active" manifold, around which F is "partly smooth", and the second-order sufficient conditions hold. Perturbing the objective results in smooth variation of the optimal solution. The active manifold consists, locally, of these perturbed optimal solutions; it is independent of the representation of F, and is eventually identified by a variety of iterative algorithms such as proximal and projected gradient schemes. These results extend to unbounded sets F.
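The identification property can be seen in a minimal example of our own (not taken from the paper): minimizing a linear objective c·x over the unit ball, whose boundary sphere is the active manifold. The step size 0.1 is an arbitrary illustrative choice.

```python
import numpy as np

def project_unit_ball(x):
    # Euclidean projection onto the convex feasible region F = {x : ||x|| <= 1}.
    n = np.linalg.norm(x)
    return x / n if n > 1.0 else x

c = np.array([3.0, 4.0])            # linear objective f(x) = c @ x
x = np.zeros(2)
for _ in range(50):
    x = project_unit_ball(x - 0.1 * c)   # projected gradient step

x_star = -c / np.linalg.norm(c)     # the unique minimizer, lying on the sphere
```

After only a few steps the iterates land on (and stay on) the unit sphere, illustrating how a projected gradient scheme identifies the active manifold rather than merely approaching it.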
Abstract:
Remote laboratories solve the problem of limited time-slot availability for students who need to carry out practical sessions: they allow students to interact with the equipment installed in the laboratory without being physically present. This project aims to build a remote laboratory for the course "Robótica y Automatización Industrial" taught at the ETSE, UAB, in which students can execute cubic spline trajectories on a robot arm and watch the robot's movements through real-time video from any location with an Internet connection.
Abstract:
An increasing number of studies have sprung up in recent years seeking to identify individual inventors from patent data. Different heuristics have been suggested to use their names and other information disclosed in patent documents in order to find out "who is who" in patents. This paper contributes to this literature by setting forth a methodology to identify inventors using patent applications filed at the European Patent Office (EPO hereafter). As in most of this literature, we basically follow a three-step procedure: (1) the parsing stage, aimed at reducing the noise in the inventor's name and other fields of the patent; (2) the matching stage, where name-matching algorithms are used to group possibly similar names; (3) the filtering stage, where additional information and different scoring schemes are used to decide which candidate matches correspond to the same inventor. The paper includes figures resulting from applying the algorithms to the set of European inventors filing at the EPO over a long period of time.
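The three-step procedure can be sketched in miniature. This is a toy of our own construction, not the paper's algorithms: the normalization rules, the 0.85 similarity threshold, and the record fields (`applicant`, `city`) are all hypothetical stand-ins.

```python
import difflib
import re

def parse(name):
    # (1) Parsing stage: lowercase, drop titles and punctuation, and sort the
    # tokens so "Smith, John" and "John Smith" normalize alike (toy rules).
    name = re.sub(r"\b(dr|prof|ing)\.?", "", name.lower())
    name = re.sub(r"[^a-z ]", "", name)
    return " ".join(sorted(name.split()))

def similar(a, b, threshold=0.85):
    # (2) Matching stage: group names whose similarity ratio clears a threshold.
    return difflib.SequenceMatcher(None, parse(a), parse(b)).ratio() >= threshold

def same_inventor(rec_a, rec_b):
    # (3) Filtering stage: corroborate a name match with a shared secondary
    # field such as applicant or city (hypothetical scoring rule).
    return similar(rec_a["name"], rec_b["name"]) and (
        rec_a["applicant"] == rec_b["applicant"] or rec_a["city"] == rec_b["city"])

a = {"name": "Dr. John A. Smith", "applicant": "ACME", "city": "Munich"}
b = {"name": "Smith, John", "applicant": "ACME", "city": "Berlin"}
```

Here `same_inventor(a, b)` holds because the normalized names are close and the applicant field corroborates the match; a dissimilar pair of names is rejected at the matching stage.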
Abstract:
The effectiveness of R&D subsidies can vary substantially depending on their characteristics. Specifically, the amount and intensity of such subsidies are crucial issues in the design of public schemes supporting private R&D. Public agencies determine the intensities of R&D subsidies for firms in line with their eligibility criteria, although assessing the effects of R&D projects accurately is far from straightforward. The main aim of this paper is to examine whether there is an optimal intensity for R&D subsidies through an analysis of their impact on private R&D effort. We examine the decisions of a public agency to grant subsidies taking into account not only the characteristics of the firms but also, as few previous studies have done to date, those of the R&D projects. In determining the optimal subsidy we use both parametric and nonparametric techniques. The results show a non-linear relationship between the percentage of subsidy received and the firms’ R&D effort. These results have implications for technology policy, particularly for the design of R&D subsidies that ensure enhanced effectiveness.
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton (TN) methods, which have been an effective approach for large-scale unconstrained optimization, we develop efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
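The key FMG/OPT idea, using a coarse-grid correction as a search direction and line-searching along it, can be sketched on a generic quadratic objective. This is our own illustrative reconstruction on a model problem, not the optical-flow code: the grid transfer operators and the 1D Laplacian objective are assumptions.

```python
import numpy as np

def quad(A, b, u):
    # Model objective f(u) = 0.5 u'Au - b'u on the fine grid.
    return 0.5 * u @ A @ u - b @ u

n = 9                                                   # fine grid size
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # SPD model Hessian
b = np.ones(n)
u = np.zeros(n)                                         # current fine-grid iterate

# Full-weighting restriction to a nested coarse grid, and its interpolation
# transpose as prolongation.
R = np.zeros(((n - 1) // 2, n))
for i in range(R.shape[0]):
    R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
P = 2 * R.T

Ac = R @ A @ P                       # Galerkin coarse-grid operator
rc = R @ (b - A @ u)                 # restricted residual
d = P @ np.linalg.solve(Ac, rc)      # coarse-grid correction = search direction

alpha = d @ (b - A @ u) / (d @ A @ d)   # exact line search for a quadratic
u_new = u + alpha * d
```

The coarse correction is guaranteed to be a descent direction here, so the line search always reduces the objective; in FMG/OPT this replaces blindly adding the coarse correction to the fine iterate.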
Abstract:
During the recent period of economic crisis, many countries have introduced scrappage schemes to boost the sale and production of vehicles, particularly of vehicles designed to pollute less. In this paper, we analyze the impact of a particular scheme in Spain (Plan2000E) on vehicle prices and sales figures, as well as on the reduction of polluting emissions from vehicles on the road. We considered the introduction of this scheme an exogenous policy change, and because we could distinguish a control group (non-subsidized vehicles) and a treatment group (subsidized vehicles), before and after the introduction of the Plan, we were able to carry out our analysis as a quasi-natural experiment. Our study reveals that manufacturers increased vehicle prices by the same amount they were granted through the Plan (€1,000). In terms of sales, econometric estimations revealed an increase of almost 5% as a result of the implementation of the Plan. With regard to environmental efficiency, we compared the costs (the amount of money invested) and the benefits of the program (reductions in polluting emissions and additional fiscal revenues) and found that the Plan would only be beneficial if it boosted demand by at least 30%.
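The quasi-natural-experiment logic reduces, in its simplest form, to a difference-in-differences comparison of group means. The numbers below are purely illustrative placeholders, not the paper's data:

```python
# Hypothetical average monthly registrations (thousands of vehicles).
treated_pre, treated_post = 10.0, 12.1   # subsidized models, before/after the Plan
control_pre, control_post = 8.0, 8.6     # non-subsidized models, before/after

# Difference-in-differences: the treated group's change minus the control
# group's change nets out shocks common to both groups (e.g. the crisis itself).
did_effect = (treated_post - treated_pre) - (control_post - control_pre)
```

The same estimate is obtained from a regression of sales on treatment, post-period, and their interaction; the interaction coefficient is the `did_effect` above.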
Abstract:
The Great Tohoku-Kanto earthquake and the resulting tsunami have brought considerable attention to the issue of the construction of new power plants. We argue in this paper that nuclear power is not a sustainable solution to energy problems. First, we explore the stock of uranium-235 and the different schemes developed by the nuclear power industry to exploit this resource. Second, we show that these methods, fast breeder and MOX fuel reactors, are not feasible. Third, we show that the argument that nuclear energy can be used to reduce CO2 emissions is false: the emissions from the increased water evaporation caused by nuclear power generation must be accounted for. In the case of Japan, water from nuclear power plants is drained into the surrounding sea, raising the water temperature, which has an adverse effect on the immediate ecosystem, as well as increasing CO2 emissions from increased water evaporation from the sea. Next, a short exercise is used to show that nuclear power is not even needed to meet consumer demand in Japan. Such an exercise should be performed for any country considering the construction of additional nuclear power plants. Lastly, the paper concludes with a discussion of the implications of our findings.
Abstract:
Land cover classification is a key research field in remote sensing and land change science, as thematic maps derived from remotely sensed data have become the basis for analyzing many socio-ecological issues. However, land cover classification remains a difficult task, and it is especially challenging in heterogeneous tropical landscapes where nonetheless such maps are of great importance. The present study aims to establish an efficient classification approach to accurately map all broad land cover classes in a large, heterogeneous tropical area of Bolivia, as a basis for further studies (e.g., land cover-land use change). Specifically, we compare the performance of parametric (maximum likelihood), non-parametric (k-nearest neighbour and four different support vector machines, SVM), and hybrid classifiers, using both hard and soft (fuzzy) accuracy assessments. In addition, we test whether the inclusion of a textural index (homogeneity) in the classifications improves their performance. We classified Landsat imagery for two dates corresponding to dry and wet seasons and found that non-parametric, and particularly SVM classifiers, outperformed both parametric and hybrid classifiers. We also found that the use of the homogeneity index along with reflectance bands significantly increased the overall accuracy of all the classifications, but particularly of SVM algorithms. We observed that improvements in producer's and user's accuracies through the inclusion of the homogeneity index were different depending on land cover classes. Early-growth/degraded forests, pastures, grasslands and savanna were the classes most improved, especially with the SVM radial basis function and SVM sigmoid classifiers, though with both classifiers all land cover classes were mapped with producer's and user's accuracies of around 90%.
Our approach seems very well suited to accurately map land cover in tropical regions, thus having the potential to contribute to conservation initiatives, climate change mitigation schemes such as REDD+, and rural development policies.
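The classifier-comparison workflow can be sketched with scikit-learn on synthetic data. This is a generic illustration under stated assumptions, not the study's Landsat pipeline: the synthetic features merely stand in for per-pixel reflectance bands plus a texture column.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel features (reflectance bands + texture index).
X, y = make_classification(n_samples=600, n_features=7, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Compare SVM kernels on held-out accuracy, as the study does across classifiers.
scores = {}
for kernel in ("linear", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel, gamma="scale").fit(X_tr, y_tr)
    scores[kernel] = accuracy_score(y_te, clf.predict(X_te))
```

On real imagery one would repeat this with and without the homogeneity band and compare per-class producer's and user's accuracies, not just overall accuracy.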
Abstract:
This report aims to show that XML technology is the best alternative for meeting the technological challenge facing the information-extraction systems of next-generation applications. These systems must, on the one hand, guarantee their independence from the schemas of the databases that feed them and, on the other, be able to present the information in multiple formats.
Abstract:
IP-based networks still do not offer the degree of reliability required by new multimedia services; achieving such reliability will be crucial to the success or failure of the next-generation Internet. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of the protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of using the NPD to enhance current QoS routing algorithms so that they offer a certain degree of protection.
Abstract:
In several computer graphics areas, a refinement criterion is often needed to decide whether to go on or to stop sampling a signal. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and we do not need further refinement; otherwise more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion which is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, meeting this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray-tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. Also a discrimination measure based on entropy of the samples for refinement in ray-tracing is introduced. The recursive decomposition of entropy provides us with a natural method to deal with the adaptive subdivision of the sampling region.
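One member of the f-divergence family, the Kullback-Leibler divergence (f(t) = t log t), already illustrates the refinement idea: compare the normalized sample distribution against the uniform distribution, and refine where the divergence is large. This is a minimal sketch of our own; the threshold 0.05 and the sample values are illustrative assumptions.

```python
import math

def kl_divergence_from_uniform(samples):
    # KL divergence between the normalized sample distribution and the uniform
    # distribution over the same n samples: sum_i p_i * log(p_i * n).
    total = sum(samples)
    n = len(samples)
    p = [s / total for s in samples]
    return sum(pi * math.log(pi * n) for pi in p if pi > 0.0)

def needs_refinement(samples, threshold=0.05):
    # Oracle: homogeneous samples give a divergence near zero; high variability
    # (e.g. a shadow edge crossing the region) gives a large divergence.
    return kl_divergence_from_uniform(samples) > threshold

flat = [0.98, 1.0, 1.02, 1.0]   # nearly homogeneous radiance samples
edge = [0.1, 0.1, 2.0, 2.0]     # high-contrast region: subdivide further
```

A hierarchical radiosity oracle or adaptive ray-tracer would call `needs_refinement` per patch or pixel region and subdivide only where it fires.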
Abstract:
This paper presents a study of connection availability in GMPLS over optical transport networks (OTN), taking into account different network topologies. Two basic path protection schemes are considered and compared with the no-protection case. The selected topologies are heterogeneous in geographic coverage, network diameter, link lengths, and average node degree. Connection availability is also computed considering the reliability data of physical components and a well-known network availability model. Results show several correspondences between suitable path protection algorithms and particular network topology characteristics.
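The standard availability model behind such studies treats a path as a series system of components and a protected connection as a parallel system of disjoint paths. The sketch below is a generic illustration with made-up MTBF/MTTR figures, not the paper's component data:

```python
def availability(mtbf_hours, mttr_hours):
    # Steady-state availability of a single component.
    return mtbf_hours / (mtbf_hours + mttr_hours)

def path_availability(components):
    # A path is a series system: it is up only if every node and link is up.
    a = 1.0
    for mtbf, mttr in components:
        a *= availability(mtbf, mttr)
    return a

# Hypothetical (mtbf, mttr) pairs for the nodes and links along each path.
working = [(50_000, 6), (100_000, 12), (50_000, 6)]
backup  = [(40_000, 6), (80_000, 12), (40_000, 6)]

a_unprotected = path_availability(working)
# 1+1 path protection with a disjoint backup: the connection fails only if
# both paths are down simultaneously.
a_protected = 1.0 - (1.0 - path_availability(working)) * (1.0 - path_availability(backup))
```

Repeating this computation per topology shows how link lengths and node degree (which change the component list and the availability of disjoint backups) drive the choice of protection scheme.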