887 results for compression set
Abstract:
We study the topology of a set naturally arising from the study of β-expansions. After proving several elementary results for this set, we study the case where our base is a Pisot number. In this case we give necessary and sufficient conditions for this set to be finite. This finiteness property allows us to generalise a theorem due to Schmidt, and provides the motivation for sufficient conditions under which the growth rate and Hausdorff dimension of the set of β-expansions are equal and explicitly calculable.
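For context, a standard definition (background, not taken from the paper itself): for a base $\beta \in (1,2)$, a sequence of digits is a β-expansion of a point when the series below converges to it, and a Pisot number is an algebraic integer greater than 1 all of whose Galois conjugates lie strictly inside the unit circle (e.g., the golden ratio).

```latex
% A sequence (\epsilon_i)_{i \ge 1} \in \{0,1\}^{\mathbb{N}} is a
% \beta-expansion of x \in [0, 1/(\beta - 1)] if
\[
  x = \sum_{i=1}^{\infty} \epsilon_i \beta^{-i}.
\]
```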
Abstract:
Let $\lambda_1,\ldots,\lambda_n$ be real numbers in $(0,1)$ and $p_1,\ldots,p_n$ be points in $\mathbb{R}^d$. Consider the collection of maps $f_j \colon \mathbb{R}^d \to \mathbb{R}^d$ given by $f_j(x) = \lambda_j x + (1-\lambda_j)p_j$. It is a well-known result that there exists a unique nonempty compact set $\Lambda \subset \mathbb{R}^d$ satisfying $\Lambda = \bigcup_{j=1}^{n} f_j(\Lambda)$. Each $x \in \Lambda$ has at least one coding, that is, a sequence $(\epsilon_i)_{i=1}^{\infty} \in \{1,\ldots,n\}^{\mathbb{N}}$ that satisfies $\lim_{N\to\infty} f_{\epsilon_1} \circ \cdots \circ f_{\epsilon_N}(0) = x$. We study the size and complexity of the set of codings of a generic $x \in \Lambda$ when $\Lambda$ has positive Lebesgue measure. In particular, we show that under certain natural conditions almost every $x \in \Lambda$ has a continuum of codings. We also show that almost every $x \in \Lambda$ has a universal coding. Our work makes no assumptions on the existence of holes in $\Lambda$ and improves upon existing results when it is assumed that $\Lambda$ contains no holes.
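As a toy illustration of codings in the overlapping case (a minimal sketch under assumptions of mine, not the paper's setup): take $d=1$, $n=2$, $f_1(x)=\lambda x$ and $f_2(x)=\lambda x + (1-\lambda)$ with $\lambda > 1/2$, so that $\Lambda = [0,1]$ and the two images overlap. Generating codings by iterating the inverse branches, and branching at random wherever both branches are admissible, quickly turns up many distinct codings of a single point.

```python
import random

# Sketch (assumptions mine): f1(x) = lam*x and f2(x) = lam*x + (1 - lam)
# on [0,1] with lam > 1/2. A coding of x is built by applying inverse
# branches; in the overlap region both digits are admissible.

def random_coding(x, lam=0.6, length=30):
    """Return one admissible coding (digits 1/2) of x in [0,1]."""
    digits = []
    for _ in range(length):
        can1 = x <= lam            # f1^{-1}(x) = x/lam stays in [0,1]
        can2 = x >= 1.0 - lam      # f2^{-1}(x) = (x-(1-lam))/lam stays in [0,1]
        if can1 and can2:
            digit = random.choice((1, 2))   # overlap: free choice
        elif can1:
            digit = 1
        else:
            digit = 2
        x = x / lam if digit == 1 else (x - (1.0 - lam)) / lam
        digits.append(digit)
    return digits

# Distinct prefixes found by repeated random branching hint at the
# continuum of codings the paper establishes for typical points.
codings = {tuple(random_coding(0.5)) for _ in range(1000)}
print(len(codings), "distinct codings of x = 0.5 found")
```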
Abstract:
The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long struggled to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three ran freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert, and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well, and better than in previous comparisons. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), with overestimates as well as underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia, where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations. This suggests an underestimate of Russian BC sources in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations, with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding; sulfate and BC are only weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others, and performance differences were independent of model resolution.
Abstract:
Purpose – The purpose of this paper is to introduce the debate forum on internationalization motives of this special issue of Multinational Business Review. Design/methodology/approach – The authors reflect on the background and evolution of the internationalization motives over the past few decades, and then provide suggestions for how to use the motives in future analyses. The authors also reflect on the contributions of the accompanying articles of the forum to the debate. Findings – There continue to be new developments in the way in which firms organize themselves as multinational enterprises (MNEs), which implies that the "classic" motives originally introduced by Dunning in 1993 need to be revisited. Dunning's motives and arguments were deductive and atheoretical, and they were intended as a toolkit, to be used in conjunction with other theories and frameworks; they are not an alternative to a classification of possible MNE strategies. Originality/value – This paper and the ones that accompany it provide a deeper and more nuanced understanding of internationalization motives for future research to build on.
Abstract:
Understanding the relationships between trait diversity, species diversity and ecosystem functioning is essential for sustainable management. For functions comprising two trophic levels, trait matching between interacting partners should also drive functioning. However, the predictive ability of trait diversity and matching is unclear for most functions, particularly for crop pollination, where interacting partners did not necessarily co-evolve. World-wide, we collected data on traits of flower visitors and crops, visitation rates to crop flowers per insect species and fruit set in 469 fields of 33 crop systems. Through hierarchical mixed-effects models, we tested whether flower visitor trait diversity and/or trait matching between flower visitors and crops improve the prediction of crop fruit set (functioning) beyond flower visitor species diversity and abundance. Flower visitor trait diversity was positively related to fruit set, but surprisingly did not explain more variation than flower visitor species diversity. The best prediction of fruit set was obtained by matching traits of flower visitors (body size and mouthpart length) and crops (nectar accessibility of flowers) in addition to flower visitor abundance, species richness and species evenness. Fruit set increased with species richness, and more so in assemblages with high evenness, indicating that additional species of flower visitors contribute more to crop pollination when species abundances are similar. Synthesis and applications. Despite contrasting floral traits for crops world-wide, only the abundance of a few pollinator species is commonly managed for greater yield. Our results suggest that the identification and enhancement of pollinator species with traits matching those of the focal crop, as well as the enhancement of pollinator richness and evenness, will increase crop yield beyond current practices. Furthermore, we show that field practitioners can predict and manage agroecosystems for pollination services based on knowledge of just a few traits that are known for a wide range of flower visitor species.
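As a rough illustration of the modelling approach (a hypothetical sketch, not the authors' code: the file name and the columns fruit_set, abundance, richness, evenness, trait_match and crop_system are placeholders, and the paper's hierarchical models include further structure):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: one row per field, with a crop-system identifier.
df = pd.read_csv("crop_pollination_fields.csv")

# Random intercept per crop system; fixed effects test whether trait
# matching adds predictive power beyond abundance, richness and evenness.
model = smf.mixedlm(
    "fruit_set ~ abundance + richness * evenness + trait_match",
    data=df,
    groups=df["crop_system"],
)
result = model.fit()
print(result.summary())
```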
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is what we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
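A minimal sketch of the data structure as described (the class names, sizes and double-hashing scheme are my choices, not the paper's):

```python
import hashlib

class BloomFilter:
    """Ordinary Bloom filter with m bits and k hash functions."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Double hashing: h_i(x) = h1(x) + i*h2(x) mod m.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))


class YesNoBloomFilter:
    """Two-part filter: a yes-filter plus a no-filter storing its
    known false positives."""

    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # Chosen false positives of the yes-filter go into the no-filter.
        self.no.add(item)

    def __contains__(self, item):
        # Accept only if the yes-filter matches and the no-filter does not.
        return item in self.yes and item not in self.no
```

Which false positives to place in the no-filter is exactly the selection problem the paper attacks with the ILP/ADP machinery; the sketch only fixes the query logic.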
Abstract:
A comprehensive atmospheric boundary layer (ABL) data set was collected in eight field experiments (two during each season) over open water and sea ice in the Baltic Sea during 1998–2001, with the primary objective of validating the coupled atmosphere-ice-ocean-land surface model BALTIMOS (BALTEX Integrated Model System). Measurements were taken by aircraft, ships and surface stations and cover the mean and turbulent structure of the ABL, including turbulent fluxes, radiation fluxes, and cloud conditions. Measurement examples of the spatial variability of the ABL over the ice edge zone and of the stable ABL over open water demonstrate the wide range of ABL conditions covered and the strength of the data set, which can also be used to validate other regional models.
Abstract:
Creating non-word lists is a necessary but time-consuming exercise often needed when conducting behavioural language tasks involving lexical decision-making or non-word reading. The following article describes the process whereby we created a list of 226 non-words matching 226 items of the Snodgrass picture set (Snodgrass & Vanderwart, 1980). The non-words were matched for number of syllables, stress pattern, number of phonemes, bigram count, and the presence and location of the target sound where relevant.
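The matching itself is mechanical once the features are computed; a hypothetical sketch (the feature names and example entries are placeholders, not the authors' materials):

```python
# Pick, for each target word, a candidate non-word that agrees on
# syllable count, stress pattern and phoneme count.

def feature_key(entry):
    return (entry["n_syllables"], entry["stress_pattern"], entry["n_phonemes"])

def match_nonwords(targets, candidates):
    """Return a {target word: non-word} mapping with matched features."""
    pool = {}
    for cand in candidates:
        pool.setdefault(feature_key(cand), []).append(cand["form"])
    matched = {}
    for tgt in targets:
        bucket = pool.get(feature_key(tgt), [])
        if bucket:
            matched[tgt["form"]] = bucket.pop()  # consume each non-word once
    return matched

targets = [{"form": "candle", "n_syllables": 2,
            "stress_pattern": "Sw", "n_phonemes": 5}]
candidates = [{"form": "pandle", "n_syllables": 2,
               "stress_pattern": "Sw", "n_phonemes": 5}]
print(match_nonwords(targets, candidates))  # {'candle': 'pandle'}
```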
Abstract:
Analysis of experimental interlocking concrete blocks with the addition of residues from the tire retreading process. With the population growth of recent years, industry in general has adjusted to the resulting demand. The tire retreading industry generates residues that have been discarded without any control, adding to environmental pollution and promoting the proliferation of disease vectors harmful to health. Aiming to find an application for this type of residue, this study presents experimental results for interlocking concrete block pavements with the addition of tire residues. Interlocking blocks were produced with four different concrete mixes containing tire residues, and laboratory tests were used to determine which mix offers the best performance with respect to the analyzed characteristics. We carried out tests of compressive strength, water absorption and impact resistance. The preliminary results are satisfactory, confirming the feasibility of applying this type of interlocking block in low-traffic environments, which would save natural aggregate sources in addition to the ecological benefits of reusing tire retreading residues.
Abstract:
Clustering is a difficult task: there is no single cluster definition and the data can have more than one underlying structure. Pareto-based multi-objective genetic algorithms (e.g., MOCK, Multi-Objective Clustering with automatic K-determination, and MOCLE, Multi-Objective Clustering Ensemble) were proposed to tackle these problems. However, the output of such algorithms can often contain a large number of partitions, making it difficult for an expert to analyze all of them manually. To deal with this problem, we present two selection strategies, based on the corrected Rand index, for choosing a subset of solutions. To test them, we apply them to the sets of solutions produced by MOCK and MOCLE on several datasets. The study was also extended to select a reduced set of partitions from the initial population of MOCLE. These analyses show that both versions of the proposed selection strategy are very effective: they can significantly reduce the number of solutions while keeping the quality and the diversity of the partitions in the original set of solutions.
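One plausible corrected-Rand selection strategy, sketched below (the paper's exact strategies are not reproduced here): prune near-duplicate partitions by keeping a partition only if its adjusted Rand index against everything already kept is low enough, which preserves diversity while discarding redundant solutions.

```python
from sklearn.metrics import adjusted_rand_score

def select_diverse(partitions, max_ari=0.9):
    """partitions: list of label vectors over the same objects.
    Keep a partition only if it is sufficiently dissimilar (ARI below
    the threshold) to every partition kept so far."""
    kept = []
    for labels in partitions:
        if all(adjusted_rand_score(labels, k) <= max_ari for k in kept):
            kept.append(labels)
    return kept

partitions = [
    [0, 0, 1, 1, 2, 2],
    [0, 0, 1, 1, 2, 2],   # duplicate of the first: will be pruned
    [0, 1, 0, 1, 0, 1],
]
print(len(select_diverse(partitions)))  # 2
```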
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least-squares approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projection (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in the original m-dimensional space. To perform the projection, only a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly for the application where it was most extensively tested, namely mapping text sets.
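A simplified reading of the least-squares placement step (a sketch under my assumptions, not the authors' exact formulation): each projected point is asked to sit at the centroid of its neighbours while the control points are pinned to their given 2-D positions, and the resulting overdetermined linear system is solved in the least-squares sense.

```python
import numpy as np

def lsp_project(neighbors, control_idx, control_pos, n_points):
    """neighbors: list of neighbour-index lists, one per point.
    control_idx/control_pos: indices and 2-D coordinates of control points."""
    rows, rhs = [], []
    for i in range(n_points):                  # neighbourhood equations
        row = np.zeros(n_points)
        row[i] = 1.0
        for j in neighbors[i]:
            row[j] = -1.0 / len(neighbors[i])  # centroid of neighbours
        rows.append(row)
        rhs.append(np.zeros(2))
    for idx, pos in zip(control_idx, control_pos):   # pin control points
        row = np.zeros(n_points)
        row[idx] = 1.0
        rows.append(row)
        rhs.append(np.asarray(pos, dtype=float))
    A, b = np.vstack(rows), np.vstack(rhs)
    coords, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coords                              # (n_points, 2) 2-D layout
```

A single least-squares solve places all points at once, which matches the abstract's claim that no iterative repositioning is needed.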
Abstract:
This paper deals with semi-global $C^k$-solvability of complex vector fields of the form $L = \partial/\partial t + x^r(a(x) + ib(x))\,\partial/\partial x$, $r \geq 1$, defined on $\Omega_\epsilon = (-\epsilon, \epsilon) \times S^1$, $\epsilon > 0$, where $a$ and $b$ are $C^\infty$ real-valued functions on $(-\epsilon, \epsilon)$. It is shown that the interplay between the order of vanishing of the functions $a$ and $b$ at $x = 0$ influences the $C^k$-solvability at $\Sigma = \{0\} \times S^1$. When $r = 1$, the functions $a$ and $b$ of $L$ are permitted to depend on both the $x$ and $t$ variables, that is, $L = \partial/\partial t + x(a(x, t) + ib(x, t))\,\partial/\partial x$, where $(x, t) \in \Omega_\epsilon$.
Abstract:
We study the Gevrey solvability of a class of complex vector fields, defined on $\Omega_\epsilon = (-\epsilon, \epsilon) \times S^1$, given by $L = \partial/\partial t + (a(x) + ib(x))\,\partial/\partial x$, $b \not\equiv 0$, near the characteristic set $\Sigma = \{0\} \times S^1$. We show that the interplay between the order of vanishing of the functions $a$ and $b$ at $x = 0$ plays a role in the Gevrey solvability.
Abstract:
In this paper we describe and evaluate a geometric mass-preserving redistancing procedure for the level set function on general structured grids. The proposed algorithm is adapted from a recent finite-element-based method and preserves the mass by means of a localized mass correction. A salient feature of the scheme is the absence of adjustable parameters. The algorithm is tested in two and three spatial dimensions and compared with the widely used partial differential equation (PDE)-based redistancing method on structured Cartesian grids. Through the use of quantitative error measures of interest in level set methods, we show that the overall performance of the proposed geometric procedure is better than that of PDE-based reinitialization schemes, since it is more robust with comparable accuracy. We also show that the algorithm is well suited for the highly stretched curvilinear grids used in CFD simulations.
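For reference, the PDE-based reinitialization that such geometric schemes are typically benchmarked against is, in its standard (Sussman-Smereka-Osher) form, an evolution in pseudo-time $\tau$ that drives $\lvert\nabla\phi\rvert$ to 1 while holding the zero level set in place:

```latex
% \phi_0 is the level set field before redistancing; the steady state of
% this equation is a signed-distance function with the same zero contour.
\[
  \frac{\partial \phi}{\partial \tau}
    = \operatorname{sign}(\phi_0)\left(1 - \lvert \nabla \phi \rvert\right),
  \qquad \phi(\cdot, 0) = \phi_0 .
\]
```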