910 results for schooling, productivity effects, upper bound
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
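To make the setting concrete, the predictive error bars of a Bayesian linear (basis-function) model can be computed in closed form. The sketch below is illustrative only, not the paper's implementation; the prior precision alpha and noise precision beta are assumed names and values:

```python
import numpy as np

def error_bars(Phi, t, alpha=1.0, beta=25.0):
    """Posterior predictive mean and error bar for a Bayesian linear model
    t = Phi @ w + noise, with weight prior w ~ N(0, alpha^-1 I) and
    Gaussian noise of precision beta (illustrative values)."""
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi  # posterior precision
    A_inv = np.linalg.inv(A)

    def predict(phi):
        phi = np.asarray(phi, dtype=float)
        mean = beta * phi @ A_inv @ Phi.T @ t   # predictive mean
        var = 1.0 / beta + phi @ A_inv @ phi    # predictive variance
        return mean, np.sqrt(var)               # error bar = std. deviation

    return predict
```

The quadratic term phi @ A_inv @ phi is what carries the dependence on the location of the data points: it shrinks in regions densely covered by training inputs and grows away from them, which is the behaviour the abstract describes.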
Abstract:
We develop a taxonomy that relates foreign direct investment (FDI) motivation (technology- and cost-based) to its anticipated effects on host countries' domestic productivity. We then empirically examine the effects of FDI into the United Kingdom on domestic productivity, and find that different types of FDI have markedly different productivity spillover effects, which are consistent with the conceptual analysis. The UK gains substantially only from inward FDI motivated by a strong technology-based ownership advantage. As theory predicts, inward FDI motivated by technology-sourcing considerations leads to no productivity spillovers.
Abstract:
2000 Mathematics Subject Classification: 05C55.
Abstract:
In undertaking an analysis of neighbouring effects on European regional patterns of specialization, this paper makes two main contributions to the literature. First, we use a spatial weight matrix that takes into consideration membership of an EU cross-border regional (CBR) association. We then compare our results with those obtained using a contiguity matrix, which constitutes an upper bound for our parameter of interest. In a further stage, we divide the CBR associations on the basis of how long they have existed and the intensity of their cooperation, to determine whether the association type has a significant impact. Second, we examine the sensitivity of our results to the use of alternative relative specialization indices.
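As a sketch of the kind of spatial weight matrix involved, here is a minimal row-standardization step, a common preprocessing stage before spatial regressions. The function name and the toy membership matrix are ours; the actual CBR membership data is not reproduced here:

```python
import numpy as np

def row_standardize(W):
    # Row-standardize a binary spatial weight matrix (e.g. contiguity,
    # or shared membership of a cross-border regional association) so
    # that each region's weights sum to one.
    W = np.asarray(W, dtype=float)
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0   # isolated regions keep a zero row
    return W / row_sums
```

A contiguity-based and an association-based matrix built this way can then be swapped into the same spatial-lag specification to compare the resulting parameter estimates.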
Abstract:
Consideration of the influence of test technique and data analysis method is important for data comparison and design purposes. The paper highlights the effects of replication interval, crack growth rate averaging and curve-fitting procedures on crack growth rate results for a Ni-base alloy. It is shown that an upper bound crack growth rate line is not appropriate for use in fatigue design, and that the derivative of a quadratic fit to the a vs N data looks promising. However, this type of averaging, or curve fitting, is not useful in developing an understanding of microstructure/crack tip interactions. For this purpose, simple replica-to-replica growth rate calculations are preferable.
Abstract:
Let g be the genus of the Hermitian function field H/F_{q^2} and let C_L(D, mQ_∞) be a typical Hermitian code of length n. In [Des. Codes Cryptogr., to appear], we determined the dimension/length profile (DLP) lower bound on the state complexity of C_L(D, mQ_∞). Here we determine when this lower bound is tight and when it is not. For m ≤ (n−2)/2 or m ≥ (n−2)/2 + 2g, the DLP lower bounds reach Wolf's upper bound on state complexity and thus are trivially tight. We begin by showing that for about half of the remaining values of m the DLP bounds cannot be tight. In these cases, we give a lower bound on the absolute state complexity of C_L(D, mQ_∞), which improves the DLP lower bound. Next we give a good coordinate order for C_L(D, mQ_∞). With this good order, the state complexity of C_L(D, mQ_∞) achieves its DLP bound (whenever this is possible). This coordinate order also provides an upper bound on the absolute state complexity of C_L(D, mQ_∞) (for those values of m for which the DLP bounds cannot be tight). Our bounds on absolute state complexity do not meet for some of these values of m, and this leaves open the question of whether our coordinate order is best possible in these cases. A straightforward application of these results is that if C_L(D, mQ_∞) is self-dual, then its state complexity (with respect to the lexicographic coordinate order) achieves its DLP bound of n/2 − q²/4 and, in particular, so does its absolute state complexity.
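For context, Wolf's bound referred to in the abstract is the standard upper bound on the trellis state complexity of a linear code: for an [n, k] code C,

```latex
s(C) \;\le\; \min(k,\; n-k),
```

so whenever the DLP lower bound reaches this value it is trivially tight, which is the situation for the extreme ranges of m described above.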
Abstract:
The effects of the nongray absorption (i.e., atmospheric opacity varying with wavelength) on the possible upper bound of the outgoing longwave radiation (OLR) emitted by a planetary atmosphere have been examined. This analysis is based on the semigray approach, which appears to be a reasonable compromise between the complexity of nongray models and the simplicity of the gray assumption (i.e., atmospheric absorption independent of wavelength). Atmospheric gases in semigray atmospheres make use of constant absorption coefficients in finite-width spectral bands. Here, such a semigray absorption is introduced in a one-dimensional (1D) radiative–convective model with a stratosphere in radiative equilibrium and a troposphere fully saturated with water vapor, which is the semigray gas. A single atmospheric window in the infrared spectrum has been assumed. In contrast to the single absolute limit of OLR found in gray atmospheres, semigray ones may also show a relative limit. This means that both finite and infinite runaway effects may arise in some semigray cases. Of particular importance is the finding of an entirely new branch of stable steady states that does not appear in gray atmospheres. This new multiple equilibrium is a consequence of the nongray absorption only. It is suspected that this new set of stable solutions has not been previously revealed in analyses of radiative–convective models since it does not appear for an atmosphere with nongray parameters similar to those for the earth’s current state.
Abstract:
The longwave emission of planetary atmospheres that contain a condensable absorbing gas in the infrared (i.e., longwave), which is in equilibrium with its liquid phase at the surface, may exhibit an upper bound. Here we analyze the effect of the atmospheric absorption of sunlight on this radiation limit. We assume that the atmospheric absorption of infrared radiation is independent of wavelength except within the spectral width of the atmospheric window, where it is zero. The temperature profile in radiative equilibrium is obtained analytically as a function of the longwave optical thickness. For illustrative purposes, numerical values for the infrared atmospheric absorption (i.e., greenhouse effect) and the liquid vapor equilibrium curve of the condensable absorbing gas refer to water. Values for the atmospheric absorption of sunlight (i.e., antigreenhouse effect) take a wide range since our aim is to provide a qualitative view of their effects. We find that atmospheres with a transparent region in the infrared spectrum do not present an absolute upper bound on the infrared emission. This result may also be found in atmospheres opaque at all infrared wavelengths if the fraction of absorbed sunlight in the atmosphere increases with the longwave opacity.
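As a point of reference for the analytic profile mentioned above: in the fully gray limit (no window) the radiative-equilibrium temperature profile takes the classical Eddington form

```latex
\sigma T^4(\tau) \;=\; \tfrac{3}{4}\,\mathrm{OLR}\left(\tau + \tfrac{2}{3}\right),
```

with τ the longwave optical depth measured from the top of the atmosphere. This is a standard textbook result, not the paper's semigray profile; the semigray analysis modifies it through the transparent window and the absorbed sunlight.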
Abstract:
A number of OECD countries aim to encourage work integration of disabled persons using quota policies. For instance, Austrian firms must provide at least one job to a disabled worker per 25 nondisabled workers and are subject to a tax if they do not. This "threshold design" provides causal estimates of the noncompliance tax on disabled employment if firms do not manipulate nondisabled employment; a lower and upper bound on the causal effect can be constructed if they do. Results indicate that firms with 25 nondisabled workers employ about 0.04 (or 12%) more disabled workers than without the tax; firms do manipulate employment of nondisabled workers but the lower bound on the employment effect of the quota remains positive; employment effects are stronger in low-wage firms than in high-wage firms; and firms subject to the quota of two disabled workers or more hire 0.08 more disabled workers per additional quota job. Moreover, increasing the noncompliance tax increases excess disabled employment, whereas paying a bonus to overcomplying firms slightly dampens the employment effects of the tax.
Abstract:
In the Radiative Atmospheric Divergence Using ARM Mobile Facility GERB and AMMA Stations (RADAGAST) project we calculate the divergence of radiative flux across the atmosphere by comparing fluxes measured at each end of an atmospheric column above Niamey, in the African Sahel region. The combination of broadband flux measurements from geostationary orbit and the deployment for over 12 months of a comprehensive suite of active and passive instrumentation at the surface eliminates a number of sampling issues that could otherwise affect divergence calculations of this sort. However, one sampling issue that challenges the project is the fact that the surface flux data are essentially measurements made at a point, while the top-of-atmosphere values are taken over a solid angle that corresponds to an area at the surface of some 2500 km². Variability of cloud cover and aerosol loading in the atmosphere means that the downwelling fluxes, even when averaged over a day, will not be an exact match to the area-averaged value over that larger area, although we might expect that they are an unbiased estimate thereof. The heterogeneity of the surface, for example, fixed variations in albedo, further means that there is likely a systematic difference in the corresponding upwelling fluxes. In this paper we characterize and quantify this spatial sampling problem. We bound the root-mean-square error in the downwelling fluxes by exploiting a second set of surface flux measurements from a site that was run in parallel with the main deployment. The differences between the two sets of fluxes lead us to one upper bound on the sampling uncertainty, and their correlation leads to another, which is probably optimistic as it requires certain other conditions to be met.
For the upwelling fluxes we use data products from a number of satellite instruments to characterize the relevant heterogeneities and so estimate the systematic effects that arise from the flux measurements having to be taken at a single point. The sampling uncertainties vary with the season, being higher during the monsoon period. We find that the sampling errors for the daily average flux are small for the shortwave irradiance, generally less than 5 W m−2 under relatively clear skies, but these increase to about 10 W m−2 during the monsoon. For the upwelling fluxes, again taking daily averages, systematic errors are of order 10 W m−2 as a result of albedo variability. The uncertainty on the longwave component of the surface radiation budget is smaller than that on the shortwave component in all conditions, but a bias of 4 W m−2 is calculated to exist in the surface-leaving longwave flux.
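The pessimistic (RMS-difference) bound on the downwelling sampling uncertainty described above amounts to a one-line computation over the two parallel sites. The sketch below uses hypothetical daily-mean flux series in W m−2, not RADAGAST data:

```python
import numpy as np

def sampling_uncertainty_bound(flux_a, flux_b):
    # Pessimistic upper bound on the point-vs-area sampling uncertainty:
    # the RMS difference between daily-mean downwelling fluxes measured
    # at two surface sites run in parallel (hypothetical values, W m^-2).
    a = np.asarray(flux_a, dtype=float)
    b = np.asarray(flux_b, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2))
```

This treats the site-to-site spread as if it were the site-to-area spread, which is why it is an upper bound rather than an estimate.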
Abstract:
The experimental variogram computed in the usual way by the method of moments and the Haar wavelet transform are similar in that they filter data and yield informative summaries that may be interpreted. The variogram filters out constant values; wavelets can filter variation at several spatial scales and thereby provide a richer repertoire for analysis, and demand no assumptions other than that of finite variance. This paper compares the two functions, identifying that part of the Haar wavelet transform that gives it its advantages. It goes on to show that the generalized variogram of order k=1, 2, and 3 filters linear, quadratic, and cubic polynomials from the data, respectively, which correspond with more complex wavelets in Daubechies's family. The additional filter coefficients of the latter can reveal features of the data that are not evident in its usual form. Three examples in which data recorded at regular intervals on transects are analyzed illustrate the extended form of the variogram. The apparent periodicity of gilgais in Australia seems to be accentuated as filter coefficients are added, but otherwise the analysis provides no new insight. Analysis of hyperspectral data with a strong linear trend showed that the wavelet-based variograms filtered it out. Adding filter coefficients in the analysis of the topsoil across the Jurassic scarplands of England changed the upper bound of the variogram; it then resembled the within-class variogram computed by the method of moments. To elucidate these results, we simulated several series of data to represent a random process with values fluctuating about a mean, data with long-range linear trend, data with local trend, and data with stepped transitions. The results suggest that the wavelet variogram can filter out the effects of long-range trend, but not local trend, and of transitions from one class to another, as across boundaries.
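A minimal sketch of the two tools being compared, for a regularly spaced transect (our own illustrative implementation, not the authors' code): the method-of-moments experimental variogram and the per-scale variance of Haar detail coefficients. Both return zero for a constant series, illustrating that each filters out constant values:

```python
import numpy as np

def variogram(z, max_lag):
    # Method-of-moments experimental variogram on a regular transect:
    # gamma(h) = mean of squared lag-h differences, halved.
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

def haar_scale_variances(z, levels):
    # Mean squared Haar detail coefficient at successive dyadic scales.
    # Assumes len(z) is divisible by 2**levels.
    z = np.asarray(z, dtype=float)
    out = []
    for _ in range(levels):
        d = (z[0::2] - z[1::2]) / np.sqrt(2.0)   # Haar detail coefficients
        out.append(np.mean(d ** 2))
        z = (z[0::2] + z[1::2]) / np.sqrt(2.0)   # Haar smooth, next scale
    return np.array(out)
```

Filtering polynomials of higher degree, as the generalized variogram of order k does, corresponds to wavelets with more vanishing moments (the longer Daubechies filters), which the plain Haar transform does not provide.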
Abstract:
Policies to control air quality focus on mitigating emissions of aerosols and their precursors, and other short-lived climate pollutants (SLCPs). On a local scale, these policies will have beneficial impacts on health and crop yields, by reducing particulate matter (PM) and surface ozone concentrations; however, the climate impacts of reducing emissions of SLCPs are less straightforward to predict. In this paper we consider a set of idealised, extreme mitigation strategies, in which the total anthropogenic emissions of individual SLCP emissions species are removed. This provides an upper bound on the potential climate impacts of such air quality strategies. We focus on evaluating the climate responses to changes in anthropogenic emissions of aerosol precursor species: black carbon (BC), organic carbon (OC) and sulphur dioxide (SO2). We perform climate integrations with four fully coupled atmosphere-ocean global climate models (AOGCMs), and examine the effects on global and regional climate of removing the total land-based anthropogenic emissions of each of the three aerosol precursor species. We find that the SO2 emissions reductions lead to the strongest response, with all three models showing an increase in surface temperature focussed in the northern hemisphere high latitudes, and a corresponding increase in global mean precipitation and run-off. Changes in precipitation and run-off patterns are driven mostly by a northward shift in the ITCZ, consistent with the hemispherically asymmetric warming pattern driven by the emissions changes. The BC and OC emissions reductions give a much weaker forcing signal, and there is some disagreement between models in the sign of the climate responses to these perturbations. These differences between models are due largely to natural variability in sea-ice extent, circulation patterns and cloud changes. 
This large natural variability component to the signal when the ocean circulation and sea-ice are free-running means that the BC and OC mitigation measures do not necessarily lead to a discernible climate response.
Abstract:
At very high energies we expect the hadronic cross sections to satisfy the Froissart bound, which is a well-established property of the strong interactions. In this energy regime we also expect the formation of the Color Glass Condensate, characterized by gluon saturation and a typical momentum scale: the saturation scale Q_s. In this paper we show that if a saturation window exists between the nonperturbative and perturbative regimes of Quantum Chromodynamics (QCD), the total cross sections satisfy the Froissart bound. Furthermore, we show that our approach allows us to describe the high energy experimental data on pp and p̄p total cross sections.
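For reference, the Froissart (Froissart–Martin) bound invoked above states that the total cross section can grow at most logarithmically squared with the squared centre-of-mass energy s:

```latex
\sigma_{\mathrm{tot}}(s) \;\le\; \frac{\pi}{m_\pi^{2}}\,\ln^{2}\!\left(\frac{s}{s_0}\right),
```

where m_π is the pion mass and s_0 is a (non-universal) reference scale.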
Abstract:
The multiprocessor task graph scheduling problem has been extensively studied as an academic optimization problem, which arises in minimizing the execution time of a parallel algorithm on a parallel computer. The problem is known to be NP-hard. Many good approaches using a variety of optimization algorithms have been proposed to find the optimum solution for this problem with less computational time; one of them is the branch and bound algorithm. In this paper, we propose a branch and bound algorithm for the multiprocessor scheduling problem. We investigate the algorithm by comparing two different lower bounds with respect to their computational costs and the size of the pruned tree. Several experiments are made on a small set of problems, and the results are compared in different sections.
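As an illustration of the general technique (not the authors' algorithm, and with the task-graph precedence constraints omitted), here is a minimal branch and bound for scheduling independent tasks on m identical processors to minimize the makespan, pruning with the average-remaining-load lower bound:

```python
def schedule_bb(tasks, m):
    """Minimum makespan for independent task durations on m identical
    processors, by depth-first branch and bound (illustrative sketch)."""
    tasks = sorted(tasks, reverse=True)   # longest-first tightens pruning early
    best = sum(tasks)                     # trivial upper bound: one processor
    loads = [0] * m                       # current load per processor

    def branch(i):
        nonlocal best
        if i == len(tasks):
            best = min(best, max(loads))  # complete schedule: update incumbent
            return
        # Lower bound at this node: current makespan vs. perfectly balanced
        # distribution of all remaining work.
        remaining = sum(tasks[i:])
        bound = max(max(loads), (sum(loads) + remaining) / m)
        if bound >= best:
            return                        # prune: cannot beat the incumbent
        seen = set()
        for p in range(m):
            if loads[p] in seen:          # symmetry breaking: skip equal loads
                continue
            seen.add(loads[p])
            loads[p] += tasks[i]
            branch(i + 1)
            loads[p] -= tasks[i]

    branch(0)
    return best
```

Swapping the bound expression for a tighter one and counting the nodes explored is exactly the style of lower-bound comparison the abstract describes.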