903 results for Emigration and Immigration U.K.
Abstract:
The shadowing of cosmic ray primaries by the moon and sun was observed by the MINOS far detector at a depth of 2070 mwe using 83.54 million cosmic ray muons accumulated over 1857.91 live-days. The shadow of the moon was detected at the 5.6 sigma level and the shadow of the sun at the 3.8 sigma level using a log-likelihood search in celestial coordinates. The moon shadow was used to quantify the absolute astrophysical pointing of the detector to be 0.17 +/- 0.12 degrees. Hints of interplanetary magnetic field effects were observed in both the sun and moon shadow. Published by Elsevier B.V.
Abstract:
The Main Injector Neutrino Oscillation Search (MINOS) experiment uses an accelerator-produced neutrino beam to perform precision measurements of the neutrino oscillation parameters in the "atmospheric neutrino" sector associated with muon neutrino disappearance. This long-baseline experiment measures neutrino interactions in Fermilab's NuMI neutrino beam with a near detector at Fermilab and again 735 km downstream with a far detector in the Soudan Underground Laboratory in northern Minnesota. The two detectors are magnetized steel-scintillator tracking calorimeters. They are designed to be as similar as possible in order to ensure that differences in detector response have minimal impact on the comparisons of event rates, energy spectra and topologies that are essential to MINOS measurements of oscillation parameters. The design, construction, calibration and performance of the far and near detectors are described in this paper. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
We report 6 K-Ar ages and paleomagnetic data from 28 sites collected in Jurassic, Lower Cretaceous and Paleocene rocks of the Santa Marta massif, to test previous hypotheses of rotations and translations of this massif, whose rock assemblage differs from other basement-cored ranges adjacent to the Guyana margin. Three magnetic components were identified in this study. A first component has a direction parallel to the present magnetic field and was uncovered in all units (D = 352, I = 25.6, k = 57.35, a95 = 5.3, N = 12). A second component was isolated in Cretaceous limestone and Jurassic volcaniclastic rocks (D = 8.8, I = 8.3, k = 24.71, a95 = 13.7, N = 6), and it was interpreted as of Early Cretaceous age. In Jurassic sites with this component, Early Cretaceous K-Ar ages obtained from this and previous studies are interpreted as reset ages. The third component was uncovered in eight sites of Jurassic volcaniclastic rocks, and its direction indicates negative shallow to moderate inclinations and northeastward declinations. K-Ar ages in these sites are of Early (196.5 +/- 4.9 Ma) to early Late Jurassic age (156.6 +/- 8.9 Ma). Due to local structural complexity and too few Cretaceous outcrops to perform a reliable unconformity test, we only used two sites with (1) K-Ar ages, (2) less structural complexity, and (3) reliable structural data for Jurassic and Cretaceous rocks. The mean direction of the Jurassic component is D = 20.4, I = -18.2, k = 46.9, a95 = 5.1 (n = 18 specimens from two sites). These paleomagnetic data support previous models of northward along-margin translations of Grenvillian-cored massifs. Additionally, clockwise vertical-axis rotation of this massif, with respect to the stable craton, is also documented; the sense of rotation is similar to that proposed for the Perija Range and other ranges of the southern Caribbean margin. More data are needed to confirm the magnitudes of rotations and translations. (C) 2009 Elsevier Ltd. All rights reserved.
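The site statistics quoted above (D, I, k, a95) are standard Fisher (1953) direction statistics. As a minimal sketch of how such values are computed from declination/inclination pairs (an illustration only, not the authors' code; the example directions below are hypothetical):

```python
import math

def fisher_mean(directions):
    """Fisher (1953) statistics for a set of paleomagnetic directions.

    `directions` is a list of (declination, inclination) pairs in degrees.
    Returns (mean declination, mean inclination, precision k, alpha95).
    """
    x = y = z = 0.0
    for dec, inc in directions:
        d, i = math.radians(dec), math.radians(inc)
        # Unit vector: x north, y east, z down (paleomagnetic convention).
        x += math.cos(i) * math.cos(d)
        y += math.cos(i) * math.sin(d)
        z += math.sin(i)
    n = len(directions)
    r = math.sqrt(x * x + y * y + z * z)   # resultant vector length
    mean_dec = math.degrees(math.atan2(y, x)) % 360.0
    mean_inc = math.degrees(math.asin(z / r))
    k = (n - 1) / (n - r)                  # Fisher precision parameter
    a95 = math.degrees(math.acos(
        1.0 - (n - r) / r * ((1.0 / 0.05) ** (1.0 / (n - 1)) - 1.0)))
    return mean_dec, mean_inc, k, a95
```

For a tightly clustered set of site directions, such as the Jurassic component above, k is large and a95 is only a few degrees.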
Abstract:
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches for this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to the full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space through new lattice properties proven here. Several experiments, with well-known public data, indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
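The chain-pruning idea described above can be illustrated with a toy sketch (not the paper's algorithm, which relies on lattice properties proven there): a depth-first search over the Boolean lattice that stops descending along a chain as soon as the cost rises, which is safe exactly when the cost is U-shaped along every chain. The feature weights and cost function below are hypothetical.

```python
# Hypothetical feature weights; the cost |sum(weights) - TARGET| is
# U-shaped along every chain, since the weight sum grows monotonically.
WEIGHTS = {1: 0.5, 2: 1.0, 3: 1.5, 4: 2.0}
TARGET = 2.0

def cost(subset):
    return abs(sum(WEIGHTS[f] for f in subset) - TARGET)

def branch_and_bound(features, cost):
    """Depth-first search over the Boolean lattice of `features`.

    Each subset is visited through one chain (features added in index
    order); a chain is pruned as soon as its cost increases, which is
    safe because a U-shaped chain cost never decreases again.
    """
    best_cost = cost(frozenset())
    best_set = frozenset()

    def expand(subset, start, c):
        nonlocal best_cost, best_set
        for i in range(start, len(features)):
            child = subset | {features[i]}
            cc = cost(child)
            if cc < best_cost:
                best_cost, best_set = cc, child
            if cc <= c:          # still on the descending arm: go deeper
                expand(child, i + 1, cc)

    expand(frozenset(), 0, best_cost)
    return best_cost, best_set
```

Because each subset is reached through a single index-ordered chain, a rise in cost along that chain bounds every deeper subset on it, so the pruned search stays equivalent to the full search under the U-shape assumption.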
Abstract:
Let ZG be the integral group ring of the finite nonabelian group G over the ring of integers Z, and let * be an involution of ZG that extends one of G. If x and y are elements of G, we investigate when pairs of the form (u(k,m)(x), u(k,m)(x*)) or (u(k,m)(x), u(k,m)(y)), formed respectively by Bass cyclic and *-symmetric Bass cyclic units, generate a free noncyclic subgroup of the unit group of ZG.
Abstract:
High-fat diets are extensively associated with health complications within the spectrum of the metabolic syndrome. Some of the most prevalent of these pathologies, often observed early in the development of high-fat dietary complications, are non-alcoholic fatty liver diseases. Mitochondrial bioenergetics and redox state changes are also widely associated with alterations within the metabolic syndrome. We investigated the mitochondrial effects of a high-fat diet leading to non-alcoholic fatty liver disease in mice. We found that the diet does not substantially alter respiratory rates, ADP/O ratios or membrane potentials of isolated liver mitochondria. However, H2O2 release using different substrates and ATP-sensitive K+ transport activities are increased in mitochondria from animals on high-fat diets. The increase in H2O2 release rates was observed with different respiratory substrates and was not altered by modulators of mitochondrial ATP-sensitive K+ channels, indicating it was not related to an observed increase in K+ transport. Altogether, we demonstrate that mitochondria from animals with diet-induced steatosis do not present significant bioenergetic changes, but display altered ion transport and increased oxidant generation. This is the first evidence, to our knowledge, that ATP-sensitive K+ transport in mitochondria can be modulated by diet.
Abstract:
The pulp and paper industry is a very energy-intensive sector. Both Sweden and the U.S. are major pulp and paper producers. This report examines the energy use and CO2 emissions connected with the pulp and paper industry in the two countries from a life-cycle perspective. New technologies make it possible to increase the electricity production in the integrated pulp and paper mill through black liquor gasification and a combined cycle (BLGCC). That way, the mill can produce excess electricity, which can be sold and replace electricity produced in power plants. In this process the by-products that are formed in the pulp-making process are used as fuel to produce electricity. In pulp and paper mills today, the technology for generating energy from the by-product in a Tomlinson boiler is not as efficient as the BLGCC technology. Scenarios have been designed to investigate the results of using the BLGCC technique in a life-cycle analysis. Two scenarios are represented by a 1994 mill in the U.S. and a 1994 mill in Sweden; they are based on the average energy intensity of pulp and paper mills as operating in 1994 in the U.S. and Sweden, respectively. The two other scenarios are constituted by a »reference mill« in the U.S. and Sweden using state-of-the-art technology. We investigate the impact of varying recycling rates on total energy use and CO2 emissions from the production of printing and writing paper. To economize on wood and thereby save trees, the trees that are replaced by recycling can be used in a biomass gasification combined cycle (BIGCC) to produce electricity in a power station. This produces extra electricity with a lower CO2 intensity than electricity generated by, for example, coal-fired power plants. The life-cycle analysis in this thesis also includes the use of waste treatment in the paper life cycle. Both Sweden and the U.S. are countries that recycle paper.
Still, there is a lot of paper waste, and this waste paper forms part of the countries' municipal solid waste (MSW). A lot of the MSW is landfilled, but parts of it are incinerated to extract electricity. The thesis has designed special scenarios for the use of MSW in the life-cycle analysis. This report studies and compares two different countries and two different BLGCC efficiencies in four different scenarios. This gives a wide survey and points to essential parameters to reflect on specifically when making assumptions in a life-cycle analysis. The report shows that there are three key parameters that have to be considered carefully when making a life-cycle analysis of wood from an energy and CO2-emission perspective for the pulp and paper mills in the U.S. and Sweden: first, the energy efficiency of the pulp and paper mill; then, the efficiency of the BLGCC; and last, the CO2 intensity of the electricity displaced by BIGCC- or BLGCC-generated electricity. It also shows that with the technology available today, it is possible to produce CO2-free paper with a waste-paper share of up to 30%. The thesis discusses the system boundaries and the assumptions. Further and more detailed research, including among other things the system boundaries and forestry, is recommended for more specific answers.
Abstract:
This thesis consists of a summary and four self-contained papers. Paper [I] Following the 1987 report by The World Commission on Environment and Development, the genuine saving has come to play a key role in the context of sustainable development, and the World Bank regularly publishes numbers for genuine saving on a national basis. However, these numbers are typically calculated as if the tax system is non-distortionary. This paper presents an analogue to genuine saving in a second best economy, where the government raises revenue by means of distortionary taxation. We show how the social cost of public debt, which depends on the marginal excess burden, ought to be reflected in the genuine saving. We also illustrate by presenting calculations for Greece, Japan, Portugal, U.K., U.S. and OECD average, showing that the numbers published by the World Bank are likely to be biased and may even give incorrect information as to whether the economy is locally sustainable. Paper [II] This paper examines the relationships among per capita CO2 emissions, per capita GDP and international trade based on panel data spanning the period 1960-2008 for 150 countries. A distinction is also made between OECD and Non-OECD countries to capture the differences of this relationship between developed and developing economies. We apply panel unit root and cointegration tests, and estimate a panel error correction model. The results from the error correction model suggest that there are long-term relationships between the variables for the whole sample and for Non-OECD countries. Finally, Granger causality tests show that there is bi-directional short-term causality between per capita GDP and international trade for the whole sample and between per capita GDP and CO2 emissions for OECD countries. 
Paper [III] Fundamental questions in economics are why some regions are richer than others, why their growth rates differ, whether their growth rates tend to converge, and what key factors contribute to explain economic growth. This paper deals with the average income growth, net migration, and changes in unemployment rates at the municipal level in Sweden. The aim is to explore in depth the effects of possible underlying determinants with a particular focus on local policy variables. The analysis is based on a three-equation model. Our results show, among other things, that increases in the local public expenditure and income tax rate have negative effects on subsequent income growth. In addition, the results show conditional convergence, i.e. that the average income among the municipal residents tends to grow more rapidly in relatively poor local jurisdictions than in initially “richer” jurisdictions, conditional on the other explanatory variables. Paper [IV] This paper explores the relationship between income growth and income inequality using data at the municipal level in Sweden for the period 1992-2007. We estimate a fixed effects panel data growth model, where the within-municipality income inequality is one of the explanatory variables. Different inequality measures (Gini coefficient, top income shares, and measures of inequality in the lower and upper part of the income distribution) are examined. We find a positive and significant relationship between income growth and income inequality measured as the Gini coefficient and top income shares, respectively. In addition, while inequality in the upper part of the income distribution is positively associated with the income growth rate, inequality in the lower part of the income distribution seems to be negatively related to the income growth. 
Our findings also suggest that increased income inequality enhances growth more in municipalities with a high level of average income than in municipalities with a low level of average income.
Abstract:
Housing is an important component of wealth for a typical household in many countries. The objective of this paper is to investigate the effect of real-estate price variation on welfare, trying to close a gap between the welfare literature in Brazil and that in the U.S., the U.K., and other developed countries. Our first motivation relates to the fact that real estate is probably more important here than elsewhere as a proportion of wealth, which potentially makes the impact of a price change bigger here. Our second motivation relates to the fact that real-estate prices boomed in Brazil in the last five years. Prime real estate in Rio de Janeiro and São Paulo has tripled in value in that period, and a smaller but generalized increase has been observed throughout the country. Third, we have also seen a recent consumption boom in Brazil in the last five years. Indeed, the recent rise of some of the poor to middle-income status is well documented not only for Brazil but for other emerging countries as well. Regarding consumption and real-estate prices in Brazil, one cannot imply causality from correlation, but one can do causal inference with an appropriate structural model and proper inference, or with a proper inference in a reduced-form setup. Our last motivation is related to the complete absence of studies of this kind in Brazil, which makes ours a pioneering study. We assemble a panel-data set for the determinants of non-durable consumption growth by Brazilian states, merging the techniques and ideas in Campbell and Cocco (2007) and in Case, Quigley and Shiller (2005). With appropriate controls, and panel-data methods, we investigate whether house-price variation has a positive effect on non-durable consumption. The results show a non-negligible significant impact of the change in the price of real estate on welfare (consumption), although smaller than what Campbell and Cocco have found. 
Our findings support the view that the channel through which house prices affect consumption is a financial one.
Abstract:
This work deals with noise removal by the use of an edge-preserving method whose parameters are automatically estimated, for any application, by simply providing information about the standard deviation of the noise level we wish to eliminate. The desired noiseless image u(x), in a Partial Differential Equation based model, can be viewed as the solution of an evolutionary differential equation u_t(x) = F(u_xx, u_x, u, x, t), which means that the true solution will be reached when t → ∞. In practical applications we should stop the time t at some moment during this evolutionary process. This work presents a sufficient condition, related to the time t and to the standard deviation σ of the noise we desire to remove, which gives a constant T such that u(x, T) is a good approximation of u(x). The approach focuses on edge preservation during the noise elimination process as its main characteristic. The balance between edge points and interior points is carried out by a function g which depends on the initial noisy image u(x, t_0), the standard deviation of the noise we want to eliminate and a constant k. The estimation of the parameter k is also presented in this work, thereby making the proposed model automatic. The model's feasibility and the choice of the optimal time scale are evident throughout the various experimental results.
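The mechanism described above — a function g that is near zero across strong gradients (edges) and near one in flat regions, with the evolution halted at a finite time T — can be sketched in one dimension with an explicit Perona-Malik-type scheme. This is an illustration under assumed parameters (the signal, k, dt and step count below are hypothetical), not the paper's model:

```python
import math

def perona_malik_1d(u, k, dt=0.1, n_steps=50):
    """Explicit Perona-Malik-type diffusion on a 1-D signal.

    The diffusivity g(s) = 1 / (1 + (s/k)**2) is close to 1 for small
    gradients (interior points get smoothed) and close to 0 for large
    gradients (edges are preserved); k sets the edge threshold.
    """
    u = list(u)
    for _ in range(n_steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            d_right = u[i + 1] - u[i]
            d_left = u[i] - u[i - 1]
            g_right = 1.0 / (1.0 + (d_right / k) ** 2)
            g_left = 1.0 / (1.0 + (d_left / k) ** 2)
            # Divergence of the edge-weighted gradient flux.
            new[i] = u[i] + dt * (g_right * d_right - g_left * d_left)
        u = new
    return u
```

Halting after n_steps * dt units of diffusion time plays the role of the stopping time T; running the evolution much longer would eventually erode the edge as well.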
Search for production of single top quarks via tcg and tug flavor-changing-neutral-current couplings
Abstract:
We search for the production of single top quarks via flavor-changing-neutral-current couplings of a gluon to the top quark and a charm (c) or up (u) quark. We analyze 230 pb^(-1) of lepton+jets data from p p̄ collisions at a center-of-mass energy of 1.96 TeV collected by the D0 detector at the Fermilab Tevatron Collider. We observe no significant deviation from standard model predictions, and hence set upper limits on the anomalous coupling parameters κ_c^g/Λ and κ_u^g/Λ, where the κ^g define the strength of the tcg and tug couplings, and Λ defines the scale of new physics. The limits at 95% C.L. are κ_c^g/Λ < 0.15 TeV^(-1) and κ_u^g/Λ < 0.037 TeV^(-1).
Abstract:
We combine results from searches by the CDF and D0 collaborations for a standard model Higgs boson (H) in the process gg → H → W+W- in p p̄ collisions at the Fermilab Tevatron Collider at √s = 1.96 TeV. With 4.8 fb^(-1) of integrated luminosity analyzed at CDF and 5.4 fb^(-1) at D0, the 95% confidence level upper limit on σ(gg → H) × B(H → W+W-) is 1.75 pb at m_H = 120 GeV, 0.38 pb at m_H = 165 GeV, and 0.83 pb at m_H = 200 GeV. Assuming the presence of a fourth sequential generation of fermions with large masses, we exclude at the 95% confidence level a standard-model-like Higgs boson with a mass between 131 and 204 GeV.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In the city of Limeira, southeastern Brazil, an important exposure of Permian sediments of the Parana basin was revealed by an open-pit mine that exploits limestone for the production of soil-correction compounds and raw materials for the ceramic industry. The radioactivity of these sediments was investigated in some detail and the results provided a general view of the vertical distributions of uranium, thorium and potassium concentrations and of the element ratios U/K, U/Th and Th/K. In general, the concentrations of the main natural radioactive elements are low, with uranium being enriched in some limestone and shale levels. In addition, the results showed that the U-238 series is in radioactive disequilibrium in many of the analyzed samples. Although the origin of the observed disequilibrium has not been investigated, the results suggest that it is due to weathering processes and water interaction with the rock matrix. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)