85 results for grid point selection
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The use of two-dimensional spectral analysis applied to terrain heights to determine characteristic terrain spatial scales, and their subsequent use for the objective definition of an adequate grid size required to resolve terrain forcing, is presented in this paper. To illustrate the influence of grid size, atmospheric flow in a complex terrain area of the Spanish east coast is simulated with the Regional Atmospheric Modeling System (RAMS) mesoscale numerical model using different horizontal grid resolutions. In this area, a grid size of 2 km is required to account for 95% of the terrain variance. Comparison among the results of the different simulations shows that, although the main wind behavior does not change dramatically, some small-scale features appear when using a resolution of 2 km or finer. Horizontal flow pattern differences are significant both at nighttime, when terrain forcing is more relevant, and at daytime, when thermal forcing is dominant. Vertical structures are also investigated, and the results show that vertical advection is highly influenced by the horizontal grid size during the daytime period. The turbulent kinetic energy and potential temperature vertical cross sections show substantial differences in the structure of the planetary boundary layer for each model configuration.
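As a rough illustration of how a variance-based cutoff can be turned into a grid spacing (a sketch under assumed conventions, not the paper's exact procedure; the function name, the synthetic terrain and the Nyquist factor are all hypothetical), one can bin the 2-D spectral power of the height field by wavelength and stop at the shortest wavelength that must still be resolved to retain 95% of the variance:

```python
import numpy as np

def grid_size_for_variance(z, dx, target=0.95):
    """Estimate the grid spacing needed to capture `target` of terrain variance.

    Illustrative sketch only: the 2-D power spectrum of the height field `z`
    (sampled every `dx` metres) is ordered by wavelength, and the cutoff
    wavelength beyond which only (1 - target) of the variance remains is
    translated into a Nyquist-limited grid spacing.
    """
    z = z - z.mean()                       # remove the mean so spectral power = variance
    power = np.abs(np.fft.fft2(z)) ** 2    # power per wavenumber pair
    kx = np.fft.fftfreq(z.shape[0], d=dx)  # cycles per metre
    ky = np.fft.fftfreq(z.shape[1], d=dx)
    k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)

    # Accumulate variance from the longest wavelength (smallest k) downwards
    # until the target fraction is reached.
    order = np.argsort(k, axis=None)
    cum = np.cumsum(power.flat[order]) / power.sum()
    k_cut = k.flat[order][np.searchsorted(cum, target)]

    wavelength_cut = 1.0 / k_cut           # shortest wavelength that must be resolved
    return wavelength_cut / 2.0            # Nyquist: at least two grid points per wave

# Synthetic example with 100 m terrain samples (hypothetical data)
rng = np.random.default_rng(0)
terrain = rng.standard_normal((256, 256)).cumsum(0).cumsum(1)  # smooth-ish surface
print(grid_size_for_variance(terrain, dx=100.0))
```

The factor of two is only the Nyquist minimum; a stricter criterion (say, four grid points per wavelength) would yield a correspondingly finer grid.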
Abstract:
This paper aims to identify the effects of the Point System of Selection of immigrants in Quebec. I argue that the distribution of points results in a different composition of immigrant stocks in terms of origin mix, not in terms of labour skills. To do so, I carry out a longitudinal descriptive analysis of the national composition of immigrants in Quebec and two other significant provinces (Ontario and British Columbia), as well as an analysis of the distribution of points in Quebec and in the rest of Canada.
Abstract:
Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to the lack of critical resources such as sustained floating-point performance, memory bandwidth, etc. Examples of these problems are found in climate research, biology, astrophysics, high-energy physics (Monte Carlo simulations) and artificial intelligence, among other areas. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude short of what is needed to solve them. Supercomputer centers face an increasing demand for processing performance, with the direct consequence of a growing number of processors and systems, resulting in more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption and improved air conditioning, among other problems. Since some of these problems cannot be easily solved, grid computing, understood as a technology enabling the addition and consolidation of computing power, can help in solving large-scale supercomputing problems. In this document we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such cooperation can enable the solution of larger problems and to measure the efficiency that could be achieved. We show some preliminary results of this experience and to what extent these objectives were achieved.
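The abstract does not state which efficiency metric was measured; one plausible reading (offered here only as an assumption) is the usual parallel speedup and efficiency relative to a reference run,

\[
S(N) = \frac{T_{\mathrm{ref}}}{T_N}, \qquad E(N) = \frac{S(N)}{N},
\]

where $T_{\mathrm{ref}}$ is the runtime on a reference configuration, $T_N$ the runtime when $N$ times those resources are pooled across the two sites, and $E(N)=1$ would mean the grid coupling adds no overhead.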
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying the management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label space in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. Due to this, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS, the Merging Problem, cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-time solvable instead of NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection problem, the label space can be reduced further.
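To make the saving from label merging concrete, here is a toy model (hypothetical, not the paper's Full Label Merging algorithm): without merging, every LSP consumes one label at every node it traverses; with MP2P merging, LSPs whose remaining downstream route from a node onwards is identical can share a single label at that node.

```python
from collections import defaultdict

def label_counts(lsps):
    """Compare per-node label usage with and without MP2P label merging.

    `lsps` is a list of paths, each a tuple of node names from ingress to
    egress. Toy model only: without merging every LSP consumes one label at
    each node it traverses; with merging, LSPs whose remaining downstream
    route from a node is identical share one label at that node.
    """
    no_merge = defaultdict(int)
    suffixes = defaultdict(set)
    for path in lsps:
        for i, node in enumerate(path[:-1]):   # the egress needs no outgoing label
            no_merge[node] += 1
            suffixes[node].add(path[i:])       # the remaining route decides mergeability
    merged = {node: len(s) for node, s in suffixes.items()}
    return dict(no_merge), merged

# Three LSPs converging on the same egress "E" (hypothetical topology)
lsps = [("A", "B", "C", "E"),
        ("D", "B", "C", "E"),
        ("F", "B", "C", "E")]
plain, merged = label_counts(lsps)
print(plain)   # e.g. node "B" carries 3 labels without merging
print(merged)  # but only 1 once the three LSPs merge towards "E"
```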
Abstract:
Poor understanding of the spliceosomal mechanisms to select intronic 3' ends (3'ss) is a major obstacle to deciphering eukaryotic genomes. Here, we discern the rules for global 3'ss selection in yeast. We show that, in contrast to the uniformity of yeast splicing, the spliceosome uses all available 3'ss within a distance window from the intronic branch site (BS), and that in 70% of all possible 3'ss this is likely to be mediated by pre-mRNA structures. Our results reveal that one of these RNA folds acts as an RNA thermosensor, modulating alternative splicing in response to heat shock by controlling alternate 3'ss availability. Thus, our data point to a deeper role for the pre-mRNA in the control of its own fate, and to a simple mechanism for some alternative splicing.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
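For concreteness, one standard formulation of the maximal discrepancy for the 0-1 loss (a sketch under the usual conventions of an even sample size and binary labels; the notation is not taken from the paper) is

\[
\hat D_n(\mathcal F)\;=\;\max_{f\in\mathcal F}\left(\frac{2}{n}\sum_{i=1}^{n/2}\mathbf 1\{f(X_i)\neq Y_i\}\;-\;\frac{2}{n}\sum_{i=n/2+1}^{n}\mathbf 1\{f(X_i)\neq Y_i\}\right)\;=\;1-2\min_{f\in\mathcal F}\hat L_n^{\mathrm{flip}}(f),
\]

where $\hat L_n^{\mathrm{flip}}(f)=\frac{1}{n}\sum_{i=1}^{n}\mathbf 1\{f(X_i)\neq\tilde Y_i\}$ and $\tilde Y_i$ is the flipped label for the first half of the sample and the original label for the second half. This identity is what allows the penalty to be computed with a single empirical risk minimization over the modified training set.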
Abstract:
Markowitz portfolio theory (1952) has induced research into the efficiency of portfolio management. This paper studies existing nonparametric efficiency measurement approaches for single-period portfolio selection from a theoretical perspective and generalises currently used efficiency measures into the full mean-variance space. To this end, we introduce the efficiency improvement possibility function (a variation on the shortage function), study its axiomatic properties in the context of the Markowitz efficient frontier, and establish a link to the indirect mean-variance utility function. This framework allows distinguishing between portfolio efficiency and allocative efficiency. Furthermore, it permits retrieving information about the revealed risk aversion of investors. The efficiency improvement possibility function thus provides a more general framework for gauging the efficiency of portfolio management using nonparametric frontier envelopment methods based on quadratic optimisation.
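As a rough sketch of the shortage-function idea that the efficiency improvement possibility function builds on (the exact definition in the paper may differ; the direction-vector notation here is assumed), a portfolio with mean $\mu_0$ and variance $\sigma_0^2$ is projected towards the frontier along a direction $g=(g_\mu,g_\sigma)\ge 0$:

\[
S(\mu_0,\sigma_0^2;g)\;=\;\sup\Big\{\delta\ge 0:\ \big(\mu_0+\delta g_\mu,\ \sigma_0^2-\delta g_\sigma\big)\ \text{is attainable by some feasible portfolio}\Big\},
\]

so that $S=0$ indicates a portfolio that cannot be improved in direction $g$, and checking attainability of a candidate mean-variance pair reduces to a quadratic programme over portfolio weights, in line with the frontier envelopment methods mentioned above.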
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. In the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may be outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
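For reference, the parametric probit log-likelihood whose global maximum is at issue has the standard form (the variable names here are generic, not taken from Martins (2001)):

\[
\ell(\beta)\;=\;\sum_{i=1}^{n}\Big[y_i\ln\Phi(x_i'\beta)+(1-y_i)\ln\big(1-\Phi(x_i'\beta)\big)\Big],
\]

where $y_i$ is the binary participation indicator, $x_i$ the covariates and $\Phi$ the standard normal distribution function. In the semiparametric step, replacing $\Phi$ by a kernel-based estimate whose kernel can take negative values is what allows fitted participation probabilities to fall outside $[0,1]$, which is the problem that local smoothing addresses.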
Abstract:
Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for predicting the value of a variable based on the values of others, as in the case of linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules that state that a certain phenomenon never happens, or that certain conditions necessitate another. For all types of rules, there is a trade-off between a rule's accuracy and its simplicity. Thus rule selection can be viewed as a choice problem among pairs of degrees of accuracy and complexity. However, one cannot in general tell what the feasible set in the accuracy-complexity space is. Formally, we show that deciding whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that attains a certain value of R² is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
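The source of the hardness is the combinatorial number of variable subsets. The brute-force sketch below (illustrative only; the data, function name and subset size are hypothetical) enumerates every size-k subset and reports the best attainable R², which is exactly the kind of exhaustive search that no known polynomial-time algorithm avoids in general:

```python
import numpy as np
from itertools import combinations

def best_subset_r2(X, y, k):
    """Exhaustively search all size-k variable subsets for the highest R^2.

    Illustrative only: deciding whether some size-k subset reaches a given R^2
    is the hard problem referred to above; brute force checks C(p, k) subsets.
    """
    n, p = X.shape
    y_c = y - y.mean()
    tss = float(y_c @ y_c)                  # total sum of squares
    best = (-np.inf, None)
    for cols in combinations(range(p), k):  # exponentially many subsets in general
        Xs = np.column_stack([np.ones(n), X[:, cols]])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta
        r2 = 1.0 - float(resid @ resid) / tss
        if r2 > best[0]:
            best = (r2, cols)
    return best

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))
y = X[:, 2] - 2 * X[:, 7] + 0.5 * rng.standard_normal(200)
print(best_subset_r2(X, y, k=2))   # should recover columns (2, 7)
```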
Abstract:
We study whether selection affects motivation. In our experiment subjects first answer a personality questionnaire. They then play a 3-person game. One of the three players chooses between an outside option, which assigns him a positive amount but leaves the two others empty-handed, and allowing one of the other two players to distribute a pie. Treatments differ in the procedure by which distributive power is assigned: to a randomly determined or to a knowingly selected partner. Before making her decision, the selecting player could consult the personality questionnaires of the other two players. Results show that knowingly selected players keep less for themselves than randomly selected ones and reward the selecting player more generously.
Abstract:
This paper studies collective choice rules whose outcomes consist of a collection of simultaneous decisions, each of which is the only concern of some group of individuals in society. The need for such rules arises in different contexts, including the establishment of jurisdictions, the location of multiple public facilities, and the election of representative committees. We define a notion of allocation consistency requiring that each partial aspect of the global decision taken by society as a whole should be ratified by the group of agents who are directly concerned with that particular aspect. We investigate the possibility of designing envy-free allocation consistent rules, and we also explore whether such rules can respect the Condorcet criterion.
Abstract:
Regular stair climbing has well-documented health dividends, such as increased fitness and strength, weight loss and reduced body fat, improved lipid profiles and reduced risk of osteoporosis. The general absence of barriers to participation makes stair climbing an ideal physical activity (PA) for health promotion. Studies in the US and the UK have consistently shown that interventions to increase the accumulation of lifestyle PA by climbing stairs rather than using escalators are effective. However, there are no previous studies in Catalonia. This project tested one message for its ability to prompt travelers on the Montjuïc site to choose the stairs rather than the escalator when climbing the Montjuïc hill. One standard message, "Take the stairs! 7 minutes of stair climbing a day protects your heart", provided a comparison with previous research done in the UK. Translated into Catalan and Spanish, it was presented on a poster positioned at the point of choice between the stairs and the escalator. The study used a quasi-experimental, interrupted time series design. During specific hours on two days of the week, travelers were coded by one observer for stair or escalator use, gender, age, ethnic status, and the presence of accompanying children or bags. Overall, the intervention resulted in an 81% increase in stair climbing. In the follow-up period without messages, stair climbing dropped back to baseline levels. This preliminary study showed a significant effect on stair use. However, caution is needed since the results are based on a small sample and only a low percentage of the sample took the stairs at baseline or during the intervention phase. Future research on stair use in Catalonia should focus on using bigger samples, different sites (metro stations, airports, shopping centers, etc.), different messages and techniques to promote stair climbing.