Abstract:
Hypergraph width measures are a class of hypergraph invariants important in studying the complexity of constraint satisfaction problems (CSPs). We present a general exact exponential algorithm for a large variety of these measures. A connection between these measures and tree decompositions is established. This enables us to almost seamlessly adapt the combinatorial and algorithmic results known for tree decompositions of graphs to the case of hypergraphs and obtain fast exact algorithms. As a consequence, we provide algorithms which, given a hypergraph H on n vertices and m hyperedges, compute the generalized hypertree-width of H in time O*(2^n) and compute the fractional hypertree-width of H in time O(1.734601^n · m).
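The abstract does not spell out the dynamic program, but the flavour of O*(2^n) subset DP that such tree-decomposition adaptations build on can be sketched on the graph case. Below is a minimal illustration (not the authors' algorithm) of the classic elimination-ordering recurrence for treewidth, tw(S) = min over v in S of max(tw(S \ {v}), Q(S \ {v}, v)); `graph` is assumed to be a dict mapping each vertex to its set of neighbours, and memory is O*(2^n) as well, so this is only practical for small n.

```python
from itertools import combinations

def q(graph, S, v):
    """Number of vertices outside S and v reachable from v via paths
    whose internal vertices all lie in S (v's degree when eliminated
    after the vertices of S)."""
    seen, stack, out = {v}, [v], set()
    while stack:
        u = stack.pop()
        for w in graph[u]:
            if w in seen:
                continue
            seen.add(w)
            if w in S:
                stack.append(w)   # keep walking through eliminated vertices
            else:
                out.add(w)        # a still-present neighbour
    return len(out)

def treewidth_dp(graph):
    """O*(2^n) subset DP: minimum over elimination orderings of the
    maximum q-value encountered along the ordering."""
    vertices = list(graph)
    best = {frozenset(): -1}      # width of eliminating nothing
    for size in range(1, len(vertices) + 1):
        for S in map(frozenset, combinations(vertices, size)):
            best[S] = min(max(best[S - {v}], q(graph, S - {v}, v))
                          for v in S)
    return best[frozenset(vertices)]

# A path a-b-c has treewidth 1:
print(treewidth_dp({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))
```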
Abstract:
This paper examines why a financial entity's solvency capital might be underestimated if the total amount required is obtained directly from a risk measurement. Using Monte Carlo simulation, we show that, in some instances, a common risk measure such as Value-at-Risk is not subadditive when certain dependence structures are considered. Higher risk evaluations are obtained under independence between the random variables than under comonotonicity. The paper stresses, therefore, the relationship between dependence structures and capital estimation.
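A minimal Monte Carlo sketch of the phenomenon described (not the paper's own simulation design): with heavy-tailed Pareto losses whose tail index is below one, the Value-at-Risk of an independent sum exceeds the sum of the individual VaRs, while VaR is additive for comonotonic losses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, level = 1_000_000, 0.8, 0.99     # tail index < 1: very heavy tails

u1, u2 = rng.random(n), rng.random(n)
pareto = lambda u: (1.0 - u) ** (-1.0 / alpha)   # inverse-CDF sampling

x_ind, y_ind = pareto(u1), pareto(u2)            # independent losses
x_com, y_com = pareto(u1), pareto(u1)            # comonotonic: same uniform

var = lambda s: np.quantile(s, level)
print("VaR(X) + VaR(Y)      :", var(x_ind) + var(y_ind))
print("VaR(X+Y) independent :", var(x_ind + y_ind))   # exceeds the sum
print("VaR(X+Y) comonotonic :", var(x_com + y_com))   # equals the sum
```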
Abstract:
Our purpose in this article is to define a network structure which is based on two egos instead of the egocentered (one ego) or the complete network (n egos). We describe the characteristics and properties of this kind of network, which we call a "nosduocentered network", comparing it with complete and egocentered networks. The key point for this kind of network is that relations exist between the two main egos and all alters, but relations among the other actors are not observed. After that, we use new social network measures adapted to the nosduocentered network, some of which are based on measures for complete networks such as degree, betweenness, closeness centrality or density, while others are tailor-made for nosduocentered networks. We specify three regression models to predict the research performance of PhD students based on these social network measures for different networks such as advice, collaboration, emotional support and trust. Data used are from Slovenian PhD students and their supervisors.
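As a rough illustration of the structure only (the measure names `joint_degree` and `overlap` and the density normalisation below are illustrative guesses, not the paper's definitions), a nosduocentered network can be stored as the two egos' tie sets plus the ego-ego tie, since alter-alter ties are unobserved.

```python
# Two egos, ties from each ego to alters; ties among alters are not observed.
ego1_ties = {"a", "b", "c", "d"}        # alters tied to ego 1
ego2_ties = {"c", "d", "e"}             # alters tied to ego 2
egos_tied = True                        # is there a tie between the two egos?

alters = ego1_ties | ego2_ties
joint_degree = len(ego1_ties) + len(ego2_ties) + (2 if egos_tied else 0)
overlap = len(ego1_ties & ego2_ties) / len(alters)   # share of shared alters
# Density relative to the observable dyads only: 2 ego-alter sets + 1 ego-ego
observable = 2 * len(alters) + 1
density = (len(ego1_ties) + len(ego2_ties) + (1 if egos_tied else 0)) / observable

print(f"joint degree={joint_degree}, overlap={overlap:.2f}, density={density:.2f}")
```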
Abstract:
The Atomic Shell Approximation is described within the theory of Quantum Molecular Similarity. Starting from theoretical data alone, a relationship between molecular structure and biological activity has been found for several sets of molecules. The theoretical aspects of Quantum Molecular Similarity and some examples of its application are described.
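A minimal sketch of the kind of density model the Atomic Shell Approximation works with, assuming the shells are normalised spherical Gaussians placed on atomic centres; the coefficients and exponents below are invented for illustration, not fitted ASA values.

```python
import numpy as np

def asa_density(points, centers, coeffs, exponents):
    """Evaluate rho(r) = sum_i c_i (a_i/pi)^(3/2) exp(-a_i |r - R_i|^2),
    a sum of normalized spherical Gaussians ('atomic shells')."""
    rho = np.zeros(len(points))
    for R, c, a in zip(centers, coeffs, exponents):
        d2 = np.sum((points - R) ** 2, axis=1)
        rho += c * (a / np.pi) ** 1.5 * np.exp(-a * d2)
    return rho

centers = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1]])   # two "atoms" (bohr)
coeffs, exponents = np.array([6.0, 1.0]), np.array([2.0, 1.0])
grid = np.array([[0.0, 0.0, z] for z in np.linspace(-1, 2, 7)])
print(asa_density(grid, centers, coeffs, exponents))
```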
Abstract:
We bring together the different conceptual issues involved in measuring inequality of opportunity, discuss how these concepts have been translated into computable measures, and point out the problems and choices researchers face when implementing these measures. Our analysis identifies and suggests several new possibilities for measuring inequality of opportunity. The approaches are illustrated with a selective survey of the empirical literature on income inequality of opportunity.
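As a concrete illustration of one widely used "ex-ante" measure (a sketch with invented data, not a measure proposed in this paper): partition individuals into types by their circumstances, replace each income by its type mean (the smoothed distribution), and compute the mean log deviation of that distribution; its ratio to total MLD is the opportunity share of inequality.

```python
import numpy as np

def mld(y):
    """Mean log deviation: mean of ln(mean(y) / y_i)."""
    y = np.asarray(y, dtype=float)
    return float(np.mean(np.log(y.mean() / y)))

incomes = np.array([10, 14, 20, 22, 30, 45], dtype=float)
types   = np.array([0, 0, 0, 1, 1, 1])        # circumstance groups ("types")

smoothed = np.array([incomes[types == t].mean() for t in types])
print("total inequality (MLD):", round(mld(incomes), 4))
print("opportunity share     :", round(mld(smoothed) / mld(incomes), 4))
```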
Abstract:
A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules which have experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of similar quality to post-Hartree-Fock generalized densities. For molecules where Hartree-Fock methodology yields erroneous results, the density functional methodology is shown to usually yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
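Building on the same Gaussian representation sketched above, similarity measures between two densities have closed forms via the Gaussian product theorem. A sketch of the overlap-based Carbó index and Euclidean distance typically used in QMSM studies follows; the two "densities" are invented stand-ins for the same molecule computed with two methods.

```python
import numpy as np

def overlap(A, B):
    """Z_AB = integral of rho_A * rho_B over space, for densities given as
    lists of (center, coeff, exponent) normalized spherical Gaussians;
    each pairwise term follows from the Gaussian product theorem."""
    z = 0.0
    for R1, c1, a1 in A:
        for R2, c2, a2 in B:
            d2 = np.sum((np.asarray(R1) - np.asarray(R2)) ** 2)
            z += (c1 * c2 * (a1 / np.pi) ** 1.5 * (a2 / np.pi) ** 1.5
                  * (np.pi / (a1 + a2)) ** 1.5
                  * np.exp(-a1 * a2 * d2 / (a1 + a2)))
    return z

rho_hf  = [((0, 0, 0), 6.0, 2.0), ((0, 0, 1.1), 1.0, 1.0)]
rho_dft = [((0, 0, 0), 5.9, 2.1), ((0, 0, 1.1), 1.1, 1.0)]

z_aa = overlap(rho_hf, rho_hf)
z_bb = overlap(rho_dft, rho_dft)
z_ab = overlap(rho_hf, rho_dft)
print("Carbo index:", z_ab / np.sqrt(z_aa * z_bb))        # 1 = identical shape
print("distance   :", np.sqrt(z_aa + z_bb - 2 * z_ab))    # 0 = identical
```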
Abstract:
Background: Few studies have used longitudinal ultrasound measurements to assess the effect of traffic-related air pollution on fetal growth. Objective: We examined the relationship between exposure to nitrogen dioxide (NO2) and aromatic hydrocarbons [benzene, toluene, ethylbenzene, m/p-xylene, and o-xylene (BTEX)] and fetal growth assessed by 1,692 ultrasound measurements among 562 pregnant women from the Sabadell cohort of the Spanish INMA (Environment and Childhood) study. Methods: We used temporally adjusted land-use regression models to estimate exposures to NO2 and BTEX. We fitted mixed-effects models to estimate longitudinal growth curves for femur length (FL), head circumference (HC), abdominal circumference (AC), biparietal diameter (BPD), and estimated fetal weight (EFW). Unconditional and conditional SD scores were calculated at 12, 20, and 32 weeks of gestation. Sensitivity analyses were performed considering time–activity patterns during pregnancy. Results: Exposure to BTEX from early pregnancy was negatively associated with growth in BPD during weeks 20–32. None of the other fetal growth parameters were associated with exposure to air pollution during pregnancy. When considering only women who spent less than 2 hr/day in nonresidential outdoor locations, effect estimates were stronger and statistically significant for the association between NO2 and growth in HC during weeks 12–20 and growth in AC, BPD, and EFW during weeks 20–32. Conclusions: Our results lend some support to an effect of exposure to traffic-related air pollutants from early pregnancy on fetal growth during mid-pregnancy.
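A minimal sketch of the general modelling approach, not the authors' specification: longitudinal growth measurements fitted with a mixed-effects model carrying a random intercept and a random gestational-age slope per woman, with exposure entering through its interaction with gestational age. All data below are simulated, and statsmodels is assumed available.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: biparietal diameter (bpd, mm) at a few
# gestational ages (ga, weeks) per woman; 'no2' is a made-up exposure.
rng = np.random.default_rng(1)
n_women, n_visits = 200, 3
ga = rng.uniform(12, 34, (n_women, n_visits))
u = rng.normal(0, 2, n_women)                        # woman-level intercepts
no2 = np.repeat(rng.uniform(20, 60, n_women), n_visits)
bpd = (10 + 2.4 * ga + u[:, None]
       - 0.005 * no2.reshape(n_women, n_visits) * ga
       + rng.normal(0, 1.5, (n_women, n_visits)))

df = pd.DataFrame({"id": np.repeat(np.arange(n_women), n_visits),
                   "ga": ga.ravel(), "bpd": bpd.ravel(), "no2": no2})
# Random intercept and random gestational-age slope per woman; the
# exposure affects the growth rate via its interaction with ga.
m = smf.mixedlm("bpd ~ ga * no2", df, groups="id", re_formula="~ga").fit()
print(m.summary())
```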
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded both crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
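A minimal sketch of fuzzy coding into three categories with triangular membership functions; the knot placement (5/15/25) is arbitrary and chosen only for illustration. Each observation's three memberships lie in [0, 1] and sum to one, and crisp coding is recovered by sending each observation entirely to its nearest category.

```python
import numpy as np

def fuzzy_code(x, low, mid, high):
    """Fuzzy-code a continuous variable into three categories using
    triangular membership functions anchored at three knots; each row
    of the result is in [0, 1] and sums to 1."""
    x = np.asarray(x, dtype=float)
    m_low  = np.clip((mid - x) / (mid - low), 0, 1)
    m_high = np.clip((x - mid) / (high - mid), 0, 1)
    m_mid  = 1.0 - m_low - m_high
    return np.column_stack([m_low, m_mid, m_high])

temps = np.array([2.0, 8.5, 15.0, 23.0, 31.0])      # e.g. temperatures
print(fuzzy_code(temps, low=5, mid=15, high=25))
```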
Abstract:
The article examines the structure of the collaboration networks of research groups where Slovenian and Spanish PhD students are pursuing their doctorate. The units of analysis are student-supervisor dyads. We use duocentred networks, a novel network structure appropriate for networks which are centred around a dyad. A cluster analysis reveals three typical clusters of research groups. Those which are large and belong to several institutions are labelled as bridging social capital groups. Those which are small, centred in a single institution, but have high cohesion are labelled as bonding social capital groups. Those which are small and with low cohesion are called weak social capital groups. Academic performance of both PhD students and supervisors is highest in bridging groups and lowest in weak groups. Other variables are also found to differ according to the type of research group. Finally, some recommendations regarding academic and research policy are drawn.
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the GHQ measure of psychological well-being taken from the British Household Panel Survey. The results suggest that there is an important degree of heterogeneity in the association of health to explanatory variables across birth cohorts and genders which, in turn, accounts for a substantial percentage of the inequality in observed health.
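A sketch of the standard regression-based (Wagstaff-type) decomposition that such papers extend, with invented data: each regressor contributes its elasticity times its own concentration index against income rank. The heterogeneous-response extension itself is not reproduced here.

```python
import numpy as np

def concentration_index(y, rank_var):
    """Concentration index C = 2 cov(y, r) / mean(y), where r is the
    fractional rank of rank_var (here: income)."""
    y = np.asarray(y, dtype=float)
    r = (np.argsort(np.argsort(rank_var)) + 0.5) / len(rank_var)
    return 2 * np.cov(y, r, bias=True)[0, 1] / y.mean()

rng = np.random.default_rng(2)
n = 2000
income = rng.lognormal(10, 0.5, n)
age = rng.uniform(20, 70, n)
health = 2 + 0.8 * np.log(income) - 0.01 * age + rng.normal(0, 0.5, n)

# OLS fit; each regressor's contribution is weighted by
# (beta_k * mean(x_k) / mean(health)) * CI(x_k).
X = np.column_stack([np.ones(n), np.log(income), age])
beta = np.linalg.lstsq(X, health, rcond=None)[0]

names = ["const", "log_income", "age"]
contrib = {names[j]: beta[j] * X[:, j].mean() / health.mean()
           * concentration_index(X[:, j], income)
           for j in (1, 2)}
print("CI(health)   :", round(concentration_index(health, income), 4))
print("contributions:", {k: round(v, 4) for k, v in contrib.items()})
```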
Abstract:
The purpose of this paper is to examine the relation between government measures, volunteer participation, climate variables and forest fires. A number of studies have related forest fires to causes of ignition, to fire history in one area, to the type of vegetation and weather characteristics, or to community institutions, but there is little research on the relation between fire production and government prevention and extinction measures from a policy evaluation perspective. An observational approach is first applied to select forest fires in the north-east of Spain. Taking a selection of fires above a certain size, a multiple regression analysis is conducted to find significant relations between policy instruments under the control of the government and the number of hectares burned in each case, controlling at the same time for the effect of weather conditions and other context variables. The paper brings evidence on the effects of simultaneity and the relevance of resorting to army soldiers on specific days with extraordinarily high simultaneity. The analysis also sheds light on the effectiveness of two preventive policies and of helicopters for extinction tasks.
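A minimal sketch of the kind of regression described, with entirely invented variables and coefficients: log burned area on weather controls and policy inputs, with army deployment entering through its interaction with simultaneity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "wind": rng.uniform(0, 60, n),           # km/h
    "humidity": rng.uniform(10, 90, n),      # %
    "simultaneous": rng.poisson(2, n),       # fires burning at the same time
    "helicopters": rng.integers(0, 5, n),
    "army": rng.integers(0, 2, n),           # soldiers deployed (0/1)
})
df["log_hectares"] = (1 + 0.03 * df.wind - 0.02 * df.humidity
                      + 0.25 * df.simultaneous - 0.15 * df.helicopters
                      - 0.3 * df.army * (df.simultaneous > 3)
                      + rng.normal(0, 1, n))

m = smf.ols("log_hectares ~ wind + humidity + simultaneous + helicopters"
            " + army:simultaneous", df).fit()
print(m.summary().tables[1])
```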
Abstract:
The goal of this paper is to present an optimal resource allocation model for the regional allocation of public service inputs. The proposed solution maximises the relative public service availability in regions located below the best availability frontier, subject to exogenous budget restrictions and equality of access for equal need criteria (an equity-based notion of regional needs). The construction of non-parametric deficit indicators is proposed for public service availability by a novel application of Data Envelopment Analysis (DEA) models, whose results offer advantages for the evaluation and improvement of decentralised public resource allocation systems. The method introduced in this paper has relevance as a resource allocation guide for the majority of services centrally funded by the public sector in a given country, such as health care, basic and higher education, citizen safety, justice, transportation, environmental protection, leisure, culture, housing and city planning, etc.
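The paper's deficit indicators are built on DEA; as a baseline illustration (not the paper's exact model), the textbook output-oriented CCR efficiency score can be computed with a small linear program. Here phi >= 1 always holds, and phi - 1 measures a unit's output deficit relative to the frontier; the regions, inputs and outputs below are invented, and scipy is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o):
    """Output-oriented CCR score for unit o: maximize phi such that some
    nonnegative combination of all units uses no more input than unit o
    and produces at least phi times unit o's outputs.
    X: (m, n) inputs, Y: (s, n) outputs; columns index the n units."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = -1.0                                   # linprog minimizes -phi
    A_in = np.hstack([np.zeros((m, 1)), X])       # lam'X <= X[:, o]
    A_out = np.hstack([Y[:, [o]], -Y])            # phi*y_o - lam'Y <= 0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[:, o], np.zeros(s)]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Invented example: 4 regions, 1 input (budget), 2 service outputs.
X = np.array([[10.0, 12.0, 8.0, 15.0]])
Y = np.array([[100.0, 90.0, 80.0, 120.0],
              [ 40.0, 60.0, 30.0,  70.0]])
for o in range(4):
    print(f"region {o}: phi = {dea_output_efficiency(X, Y, o):.3f}")
```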
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate individual heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the Canadian NPHS of 1994. Our strategy for the estimation of heterogeneous responses is based on the quantile regression model. The results suggest that there is an important degree of heterogeneity in the association of health to explanatory variables which, in turn, accounts for a substantial percentage of inequality in observed health. A particularly interesting finding is that the marginal response of health to income is zero for healthy individuals but positive and significant for unhealthy individuals. The heterogeneity in the income response reduces both overall health inequality and income-related health inequality.
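A sketch of the quantile-regression strategy on invented data: the income coefficient is allowed to differ across the conditional health distribution, and the simulated data build in a stronger income response in the lower (unhealthy) tail, matching the direction of the finding above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 3000
income = rng.lognormal(10, 0.6, n)
eps = rng.normal(0, 1, n)
# Income matters more for those with low health shocks (eps < 0).
health = 50 + 3 * np.log(income) * (1 + 0.5 * (eps < 0)) + 5 * eps

df = pd.DataFrame({"health": health, "log_income": np.log(income)})
for tau in (0.1, 0.5, 0.9):
    fit = smf.quantreg("health ~ log_income", df).fit(q=tau)
    print(f"tau={tau}: income coefficient = {fit.params['log_income']:.2f}")
```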
Abstract:
In this paper we describe the results of a simulation study performed to elucidate the robustness of the Lindstrom and Bates (1990) approximation method under non-normality of the residuals in different situations. Concerning the fixed effects, the observed coverage probabilities and the true bias and mean square error values show that some aspects of this inferential approach are not completely reliable. When the true distribution of the residuals is asymmetrical, the true coverage is markedly lower than the nominal one. The best results are obtained for the skew-normal distribution, and not for the normal distribution. On the other hand, the results are partially reversed concerning the random effects. Soybean genotype data are used to illustrate the methods and to motivate the simulation scenarios.
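Refitting a nonlinear mixed model per replicate is beyond a short sketch (the original study uses nlme in S-Plus/R), but the simulation logic, namely generating data with skew-normal residuals, fitting, and recording whether the nominal 95% interval covers the truth, can be illustrated with an ordinary nonlinear least-squares fit of a logistic growth curve.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def logistic(t, asym, xmid, scal):
    return asym / (1.0 + np.exp((xmid - t) / scal))

rng = np.random.default_rng(5)
true = np.array([20.0, 50.0, 8.0])
t = np.linspace(10, 90, 25)
a_skew, n_sims, cover = 6.0, 500, np.zeros(3)
shift = skewnorm.mean(a_skew)          # center residuals at mean zero

for _ in range(n_sims):
    eps = skewnorm.rvs(a_skew, size=t.size, random_state=rng) - shift
    y = logistic(t, *true) + eps
    est, cov = curve_fit(logistic, t, y, p0=true)
    half = 1.96 * np.sqrt(np.diag(cov))
    cover += (np.abs(est - true) <= half)   # did each interval cover?

print("empirical coverage of nominal 95% intervals:", cover / n_sims)
```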
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random-effects covariance matrix and the true correlation pattern of the residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Setting aside other criteria, such as the convenience of avoiding over-parameterised models, it appears worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
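A linear stand-in for the comparison logic (the paper's setting is nonlinear and uses R's nlme; statsmodels and invented parameters are assumed here): simulate grouped data with random intercepts and slopes, then fit both the correctly specified model and a random-intercept-only misspecification, and compare the fixed-slope estimates and their standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
groups, per = 40, 8
g = np.repeat(np.arange(groups), per)
x = rng.uniform(0, 10, groups * per)
b0 = rng.normal(0, 1.0, groups)                 # random intercepts
b1 = rng.normal(0, 0.5, groups)                 # random slopes (true model)
y = 2 + (1 + b1[g]) * x + b0[g] + rng.normal(0, 1, groups * per)
df = pd.DataFrame({"y": y, "x": x, "g": g})

full = smf.mixedlm("y ~ x", df, groups="g", re_formula="~x").fit()
plain = smf.mixedlm("y ~ x", df, groups="g").fit()   # ignores random slope
print("slope (correct structure)    :", round(full.params["x"], 3),
      "se", round(full.bse["x"], 3))
print("slope (intercept-only model) :", round(plain.params["x"], 3),
      "se", round(plain.bse["x"], 3))
```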