114 results for Monarchical Schemes
Abstract:
An increasing number of studies have sprung up in recent years seeking to identify individual inventors from patent data. Different heuristics have been suggested to use their names and other information disclosed in patent documents in order to find out “who is who” in patents. This paper contributes to this literature by setting forth a methodology to identify them using patents filed with the European Patent Office (hereafter EPO). As in most of this literature, we basically follow a three-step procedure: (1) the parsing stage, aimed at reducing the noise in the inventor’s name and other fields of the patent; (2) the matching stage, where name matching algorithms are used to group possibly similar names; (3) the filtering stage, where additional information and different scoring schemes are used to decide which of the potential matches correspond to the same inventor. The paper includes figures resulting from applying the algorithms to the set of European inventors filing at the EPO over a long period of time.
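The parse-match-filter idea can be illustrated with a small sketch; this is not the paper's actual algorithm, and the normalization rules, scoring weights and threshold below are illustrative assumptions.

```python
import re
from difflib import SequenceMatcher

def parse_name(raw):
    """Parsing stage: lowercase, drop punctuation and extra spaces (illustrative rules)."""
    name = re.sub(r"[^a-z\s]", " ", raw.lower())
    return " ".join(name.split())

def match_score(a, b):
    """Matching stage: plain string similarity between two parsed names."""
    return SequenceMatcher(None, a, b).ratio()

def same_inventor(rec1, rec2, threshold=0.9):
    """Filtering stage: combine name similarity with other patent fields (hypothetical scoring)."""
    score = match_score(parse_name(rec1["name"]), parse_name(rec2["name"]))
    if rec1.get("city") and rec1.get("city") == rec2.get("city"):
        score += 0.05  # shared address information raises the score
    if rec1.get("ipc") and rec1.get("ipc") == rec2.get("ipc"):
        score += 0.05  # shared technology class raises the score
    return score >= threshold

print(same_inventor({"name": "Muller, Hans", "city": "Munich", "ipc": "H01L"},
                    {"name": "Muller H.", "city": "Munich", "ipc": "H01L"}))
```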
Abstract:
The effectiveness of R&D subsidies can vary substantially depending on their characteristics. Specifically, the amount and intensity of such subsidies are crucial issues in the design of public schemes supporting private R&D. Public agencies determine the intensities of R&D subsidies for firms in line with their eligibility criteria, although assessing the effects of R&D projects accurately is far from straightforward. The main aim of this paper is to examine whether there is an optimal intensity for R&D subsidies through an analysis of their impact on private R&D effort. We examine the decisions of a public agency to grant subsidies taking into account not only the characteristics of the firms but also, as few previous studies have done to date, those of the R&D projects. In determining the optimal subsidy we use both parametric and nonparametric techniques. The results show a non-linear relationship between the percentage of subsidy received and the firms’ R&D effort. These results have implications for technology policy, particularly for the design of R&D subsidies that ensure enhanced effectiveness.
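To illustrate what an optimal subsidy intensity means empirically, the sketch below fits a simple quadratic (inverted-U) relation between subsidy intensity and R&D effort; the data are made up and the specification is only a stand-in for the parametric and nonparametric techniques used in the paper.

```python
import numpy as np

# Illustrative data: subsidy intensity (share of project cost covered) and
# firms' R&D effort (R&D spending over sales); values are made up.
intensity = np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70])
effort    = np.array([0.8, 1.4, 1.9, 2.2, 2.3, 2.1, 1.7])

# Parametric side: fit a quadratic to capture a possible non-linear relation.
b2, b1, b0 = np.polyfit(intensity, effort, deg=2)
optimal_intensity = -b1 / (2 * b2)  # vertex of the parabola, meaningful when b2 < 0
print(f"Estimated optimal subsidy intensity: {optimal_intensity:.2f}")
```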
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
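A minimal coarse-to-fine sketch of the unidirectional multilevel idea (MR/OPT-style) is given below; it uses a basic Horn-Schunck iteration as the per-level solver in place of the truncated Newton method, and it does not implement the bidirectional FMG/OPT cycle. Pyramid depth and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def horn_schunck(I1, I2, u, v, alpha=10.0, iters=50):
    """Per-level solver: a few Jacobi iterations of the Horn-Schunck equations
    (stands in for the truncated Newton solver used in the paper)."""
    Iy, Ix = np.gradient((I1 + I2) / 2.0)
    It = I2 - I1
    kernel = np.array([[1/12, 1/6, 1/12], [1/6, 0.0, 1/6], [1/12, 1/6, 1/12]])
    for _ in range(iters):
        u_avg = ndimage.convolve(u, kernel)
        v_avg = ndimage.convolve(v, kernel)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

def multiresolution_flow(I1, I2, levels=3):
    """MR/OPT-style coarse-to-fine driver: solve at the coarsest grid, then
    prolong (upsample and rescale) the flow to initialise the next finer level."""
    pyramid = [(I1, I2)]
    for _ in range(levels - 1):
        I1, I2 = ndimage.zoom(I1, 0.5), ndimage.zoom(I2, 0.5)
        pyramid.append((I1, I2))
    u = np.zeros_like(pyramid[-1][0])
    v = np.zeros_like(u)
    for J1, J2 in reversed(pyramid):
        sy, sx = np.array(J1.shape) / np.array(u.shape)
        u = sx * ndimage.zoom(u, (sy, sx))   # horizontal flow scales with width
        v = sy * ndimage.zoom(v, (sy, sx))   # vertical flow scales with height
        u, v = horn_schunck(J1, J2, u, v)
    return u, v

# Tiny usage example with a synthetic pattern shifted one pixel to the right.
rng = np.random.default_rng(0)
I1 = ndimage.gaussian_filter(rng.random((64, 64)), 2.0)
I2 = np.roll(I1, shift=1, axis=1)
u, v = multiresolution_flow(I1, I2)
print(u.mean(), v.mean())  # u should be roughly positive, v near zero
```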
Abstract:
During the recent period of economic crisis, many countries have introduced scrappage schemes to boost the sale and production of vehicles, particularly of vehicles designed to pollute less. In this paper, we analyze the impact of a particular scheme in Spain (Plan2000E) on vehicle prices and sales figures as well as on the reduction of polluting emissions from vehicles on the road. We treated the introduction of this scheme as an exogenous policy change and, because we could distinguish a control group (non-subsidized vehicles) from a treatment group (subsidized vehicles) before and after the introduction of the Plan, we were able to carry out our analysis as a quasi-natural experiment. Our study reveals that manufacturers increased vehicle prices by the same amount they were granted through the Plan (€1,000). In terms of sales, econometric estimations revealed an increase of almost 5% as a result of the implementation of the Plan. With regard to environmental efficiency, we compared the costs (the amount of money invested) and the benefits of the program (reductions in polluting emissions and additional fiscal revenues) and found that the Plan would only be beneficial if it boosted demand by at least 30%.
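The quasi-natural-experiment setup can be sketched as a difference-in-differences regression; the records below are made up, and the specification is only illustrative of the identification strategy, not the paper's exact econometric model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative records: one row per vehicle model and period, with made-up values.
df = pd.DataFrame({
    "sales":      [120, 130, 150, 180, 100, 105, 110, 112],
    "subsidized": [1, 1, 1, 1, 0, 0, 0, 0],   # treatment group: eligible under Plan2000E
    "post":       [0, 0, 1, 1, 0, 0, 1, 1],   # observation after the Plan was introduced
})

# Difference-in-differences: the coefficient on subsidized:post captures the effect
# of the Plan on sales of subsidized vehicles relative to the control group.
model = smf.ols("sales ~ subsidized + post + subsidized:post", data=df).fit()
print(model.params["subsidized:post"])
```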
Abstract:
The Great Tohoku-Kanto earthquake and resulting tsunami have brought considerable attention to the issue of the construction of new power plants. We argue in this paper that nuclear power is not a sustainable solution to energy problems. First, we explore the stock of uranium-235 and the different schemes developed by the nuclear power industry to exploit this resource. Second, we show that these methods, fast breeder and MOX fuel reactors, are not feasible. Third, we show that the argument that nuclear energy can be used to reduce CO2 emissions is false: the emissions from the increased water evaporation from nuclear power generation must be accounted for. In the case of Japan, water from nuclear power plants is drained into the surrounding sea, raising the water temperature, which has an adverse effect on the immediate ecosystem, as well as increasing CO2 emissions from increased water evaporation from the sea. Next, a short exercise is used to show that nuclear power is not even needed to meet consumer demand in Japan. Such an exercise should be performed for any country considering the construction of additional nuclear power plants. Lastly, the paper concludes with a discussion of the implications of our findings.
Abstract:
Land cover classification is a key research field in remote sensing and land change science, as thematic maps derived from remotely sensed data have become the basis for analyzing many socio-ecological issues. However, land cover classification remains a difficult task, and it is especially challenging in heterogeneous tropical landscapes where such maps are nonetheless of great importance. The present study aims to establish an efficient classification approach to accurately map all broad land cover classes in a large, heterogeneous tropical area of Bolivia, as a basis for further studies (e.g., land cover-land use change). Specifically, we compare the performance of parametric (maximum likelihood), non-parametric (k-nearest neighbour and four different support vector machines - SVM), and hybrid classifiers, using both hard and soft (fuzzy) accuracy assessments. In addition, we test whether the inclusion of a textural index (homogeneity) in the classifications improves their performance. We classified Landsat imagery for two dates corresponding to dry and wet seasons and found that non-parametric, and particularly SVM classifiers, outperformed both parametric and hybrid classifiers. We also found that the use of the homogeneity index along with reflectance bands significantly increased the overall accuracy of all the classifications, but particularly of SVM algorithms. We observed that improvements in producer’s and user’s accuracies through the inclusion of the homogeneity index differed depending on land cover classes. Early-growth/degraded forests, pastures, grasslands and savanna were the classes most improved, especially with the SVM radial basis function and SVM sigmoid classifiers, though with both classifiers all land cover classes were mapped with producer’s and user’s accuracies of around 90%. Our approach seems very well suited to accurately map land cover in tropical regions, thus having the potential to contribute to conservation initiatives, climate change mitigation schemes such as REDD+, and rural development policies.
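The classification pipeline compared in the study can be sketched with an RBF-kernel SVM trained on reflectance bands plus a homogeneity texture feature; the data below are random placeholders rather than the Bolivian Landsat imagery, so the printed accuracy only demonstrates the workflow.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative feature table: six reflectance bands plus a homogeneity texture
# index per pixel, with made-up values and labels.
rng = np.random.default_rng(0)
n = 300
bands = rng.random((n, 6))            # reflectance bands
homogeneity = rng.random((n, 1))      # textural index
X = np.hstack([bands, homogeneity])
y = rng.integers(0, 4, size=n)        # four hypothetical land cover classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM, one of the classifiers compared in the study; C and gamma
# would normally be tuned (e.g., by cross-validation) rather than fixed here.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```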
Abstract:
This report aims to show that XML technology is the best alternative for tackling the technological challenge facing the information extraction systems of next-generation applications. These systems must, on the one hand, guarantee their independence from the schemas of the databases that feed them and, on the other, be able to present the information in multiple formats.
Abstract:
IP-based networks still do not provide the degree of reliability required by new multimedia services, and achieving such reliability will be crucial to the success or failure of the new Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of using the NPD to enhance some current QoS routing algorithms so that they offer a certain degree of protection.
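A hypothetical illustration of how an a priori failure probability (FSD) and an a posteriori failure impact (FID) could be combined into a single protection-degree score is sketched below; the actual NPD formulation is the one given in the paper, and all weights here are assumptions.

```python
def failure_sensibility_degree(link_failure_probs):
    """A priori term: probability that at least one link on the path fails
    (illustrative; the paper's FSD formulation may differ)."""
    p_ok = 1.0
    for p in link_failure_probs:
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

def failure_impact_degree(affected_bw, total_bw, restoration_time, max_time):
    """A posteriori term: normalised mix of lost traffic and restoration time (illustrative)."""
    return 0.5 * (affected_bw / total_bw) + 0.5 * (restoration_time / max_time)

def network_protection_degree(fsd, fid, w=0.5):
    """Hypothetical combination of the two evaluations into a single NPD score."""
    return w * fsd + (1.0 - w) * fid

fsd = failure_sensibility_degree([0.01, 0.02, 0.005])
fid = failure_impact_degree(affected_bw=40, total_bw=100, restoration_time=50, max_time=200)
print(network_protection_degree(fsd, fid))
```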
Abstract:
This paper presents a study of connection availability in GMPLS over optical transport networks (OTN), taking into account different network topologies. Two basic path protection schemes are considered and compared with the no-protection case. The selected topologies are heterogeneous in geographic coverage, network diameter, link lengths, and average node degree. Connection availability is also computed considering the reliability data of physical components and a well-known network availability model. Results show several correspondences between suitable path protection algorithms and certain network topology characteristics.
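The availability model referred to above is commonly a series/parallel one: an unprotected path is available only if all its components are, while a path-protected connection fails only if the working and backup paths are down simultaneously. A minimal sketch, with made-up per-link availabilities:

```python
from math import prod

def path_availability(component_availabilities):
    """Series model: a path is available only if every component along it is available."""
    return prod(component_availabilities)

def protected_availability(working, backup):
    """Dedicated path protection: the connection fails only if both the working
    and the backup paths are down at the same time (independence assumed)."""
    return 1.0 - (1.0 - path_availability(working)) * (1.0 - path_availability(backup))

# Illustrative per-link availabilities for a working path and a longer backup path.
working = [0.9995, 0.9992, 0.9990]
backup  = [0.9993, 0.9991, 0.9994, 0.9989]

print("Unprotected:", path_availability(working))
print("Protected:  ", protected_availability(working, backup))
```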
Abstract:
In this paper, different recovery methods applied at different network layers and time scales are used in order to enhance network reliability. Each layer deploys its own fault management methods. However, current recovery methods are applied only to a specific layer. New protection schemes, based on the proposed partial disjoint path algorithm, are defined in order to avoid protection duplication in a multi-layer scenario. The new protection schemes also encompass shared segment backup computation and shared risk link group identification. A complete set of experiments demonstrates the efficiency of the proposed methods in relation to previous ones, in terms of the resources used to protect the network, the failure recovery time and the request rejection ratio.
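For contrast with the paper's partial disjoint path algorithm, the sketch below computes a standard fully link-disjoint backup by pruning the working path's links from the graph; the topology and weights are illustrative.

```python
import networkx as nx

def link_disjoint_backup(G, working_path):
    """Compute a backup path that avoids every link of the working path
    (a standard fully disjoint computation; the paper's algorithm relaxes this
    to partial disjointness across layers)."""
    H = G.copy()
    H.remove_edges_from(zip(working_path, working_path[1:]))
    return nx.shortest_path(H, working_path[0], working_path[-1], weight="weight")

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("C", "D", 1),   # working route
    ("A", "E", 2), ("E", "F", 2), ("F", "D", 2),   # alternative links
])
print(link_disjoint_backup(G, ["A", "B", "C", "D"]))  # expected: ['A', 'E', 'F', 'D']
```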
Abstract:
This paper focuses on QoS routing with protection in an MPLS network over an optical layer. In this multi-layer scenario, each layer deploys its own fault management methods. A partially protected optical layer is proposed, with the rest of the network protected at the MPLS layer. New protection schemes that avoid protection duplication are proposed. This paper also introduces a new traffic classification based on the level of reliability. The failure impact is evaluated in terms of recovery time depending on the traffic class. The proposed schemes also include a novel variation of minimum interference routing and shared segment backup computation. A complete set of experiments shows that the proposed schemes are more efficient than previous ones, in terms of the resources used to protect the network, failure impact and the request rejection ratio.
Abstract:
A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the small, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete parameter m, the training sample size, which plays an important role in the resulting answer regardless of the sample size. In this paper we study the robustness of the tests of independence in contingency tables with respect to intrinsic priors with different degrees of concentration around the null, and compare with other “robust” results by Good and Crook. Consistency of the intrinsic Bayesian tests is established. We also discuss conditioning issues and sampling schemes, and argue that conditioning should be on either one margin or the table total, but not on both margins. Examples using real and simulated data are given.
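As a rough numerical illustration of Bayesian testing of independence, the sketch below computes a Bayes factor using uniform Dirichlet priors on the cell and margin probabilities; this is a simpler conjugate stand-in, not the intrinsic priors studied in the paper.

```python
import numpy as np
from scipy.special import gammaln

def log_bf_independence(table):
    """Log Bayes factor of the saturated multinomial model against independence,
    using uniform Dirichlet(1) priors on cell, row and column probabilities
    (a conjugate stand-in, not the intrinsic priors studied in the paper)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    R, C = table.shape
    # Marginal likelihood under the full model (Dirichlet(1) over R*C cells).
    log_m1 = gammaln(R * C) - gammaln(n + R * C) + gammaln(table + 1).sum()
    # Marginal likelihood under independence (Dirichlet(1) on each margin).
    rows, cols = table.sum(axis=1), table.sum(axis=0)
    log_m0 = (gammaln(R) - gammaln(n + R) + gammaln(rows + 1).sum()
              + gammaln(C) - gammaln(n + C) + gammaln(cols + 1).sum())
    return log_m1 - log_m0

print(log_bf_independence([[20, 5], [6, 19]]))    # positive: evidence against independence
print(log_bf_independence([[12, 13], [13, 12]]))  # negative: evidence for independence
```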
Abstract:
The aim of this paper is to construct a "super" version of a tensor triangulated category, and to show that super-schemes can be reconstructed from their categories of perfect complexes in a way similar to Balmer [Bal05], provided we consider this extra structure.
Abstract:
Cognitive radio systems are a solution to the inefficient allocation of the wireless frequency spectrum. Using dynamic spectrum access, secondary users can communicate on available frequency channels while the licensed users are not using those channels. A good control messaging scheme is needed so that secondary users do not interfere with primary users in cognitive radio networks. For networks in which users are heterogeneous in frequency, that is, they do not have the same frequency channels available for communication, the set of channels used to transmit control information must be chosen carefully. For this reason, this thesis studies the basic ideas behind the control messaging schemes used in cognitive radio networks and presents a scheme suited to users that are heterogeneous in frequency channels. To this end, a new taxonomy for classifying control messaging strategies is first presented, identifying the main characteristics that a control scheme for frequency-heterogeneous systems must satisfy. Next, several mathematical techniques for choosing the minimum number of channels over which to transmit the control information are reviewed. Then, a model of a control messaging scheme is introduced that uses the minimum number of channels and exploits the characteristics of frequency-heterogeneous systems. Finally, several control messaging schemes are compared in terms of transmission efficiency.
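Among the mathematical techniques for choosing the minimum number of control channels, a natural formulation is a set cover over the users' available channels; the greedy heuristic sketched below is only illustrative and is not necessarily the method adopted in the thesis.

```python
def greedy_control_channels(user_channels):
    """Greedily pick channels until every secondary user has at least one of its
    available channels in the control set (a classic set-cover heuristic)."""
    uncovered = set(user_channels)          # users still without a control channel
    chosen = []
    while uncovered:
        # Pick the channel that covers the most still-uncovered users.
        best = max({c for chans in user_channels.values() for c in chans},
                   key=lambda c: sum(c in user_channels[u] for u in uncovered))
        chosen.append(best)
        uncovered = {u for u in uncovered if best not in user_channels[u]}
    return chosen

# Illustrative availability: each secondary user senses a different set of free channels.
users = {"u1": {1, 2}, "u2": {2, 3}, "u3": {3, 4}, "u4": {1, 4}}
print(greedy_control_channels(users))   # e.g. two channels suffice here
```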