979 results for Budget and accounts
Abstract:
Aerosol sources, transport, and sinks are simulated, and aerosol direct radiative effects are assessed over the Indian Ocean for the Indian Ocean Experiment (INDOEX) Intensive Field Phase during January to March 1999 using the Laboratoire de Météorologie Dynamique (LMDZT) general circulation model. The model reproduces the latitudinal gradient in aerosol mass concentration and optical depth (AOD). The model-predicted aerosol concentrations and AODs agree reasonably well with measurements but are systematically underestimated during high-pollution episodes, especially in the month of March. The largest aerosol loads are found over southwestern China, the Bay of Bengal, and the Indian subcontinent. Aerosol emissions from the Indian subcontinent are transported into the Indian Ocean through either the west coast or the east coast of India. Over the INDOEX region, carbonaceous aerosols are the largest contributor to the estimated AOD, followed by sulfate, dust, sea salt, and fly ash. During the northeast winter monsoon, natural and anthropogenic aerosols reduce the solar flux reaching the surface by 25 W m⁻², leading to 10–15% less insolation at the surface. A doubling of black carbon (BC) emissions from Asia results in an aerosol single-scattering albedo that is much smaller than in situ measurements, reflecting the fact that BC emissions are not underestimated in proportion to other (mostly scattering) aerosol types. South Asia is the dominant contributor to sulfate aerosols over the INDOEX region and accounts for 60–70% of the AOD by sulfate. It is also an important but not the dominant contributor to carbonaceous aerosols over the INDOEX region, with a contribution of less than 40% to the AOD by this aerosol species. The presence of elevated plumes brings significant quantities of aerosols to the Indian Ocean that are generated over Africa and Southeast and East Asia.
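A quick back-of-the-envelope check (not from the paper) connects the two numbers quoted above: a 25 W m⁻² reduction corresponding to 10–15% of insolation implies a mean surface solar flux of roughly 170–250 W m⁻². The sketch below is purely illustrative arithmetic.

```python
# Illustrative check: what mean surface solar flux is implied when a
# 25 W m^-2 aerosol-induced reduction equals 10-15% of insolation?
reduction = 25.0  # W m^-2, surface flux reduction quoted in the abstract

for fraction in (0.10, 0.15):
    implied_flux = reduction / fraction
    print(f"{fraction:.0%} of insolation -> mean surface flux ~ {implied_flux:.0f} W m^-2")
# 10% -> ~250 W m^-2, 15% -> ~167 W m^-2: a plausible range for the
# tropical Indian Ocean during the winter monsoon.
```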
Abstract:
Sea ice plays a crucial role in the earth's energy and water budget and substantially impacts local and remote atmospheric and oceanic circulations. Predictions of Arctic sea ice conditions a few months to a few years in advance could be of interest for stakeholders. This article presents a review of the potential sources of Arctic sea ice predictability on these timescales. Predictability mainly originates from persistence or advection of sea ice anomalies, interactions with the ocean and atmosphere and changes in radiative forcing. After estimating the inherent potential predictability limit with state-of-the-art models, current sea ice forecast systems are described, together with their performance. Finally, some challenges and issues in sea ice forecasting are presented, along with suggestions for future research priorities.
Abstract:
The Maritime Continent archipelago, situated on the equator at 95–165°E, has the strongest land-based precipitation on Earth. The latent heat release associated with the rainfall affects the atmospheric circulation throughout the tropics and into the extra-tropics. The greatest source of variability in precipitation is the diurnal cycle. The archipelago is within the convective region of the Madden-Julian Oscillation (MJO), which provides the greatest variability on intra-seasonal time scales: large-scale (~10⁷ km²) active and suppressed convective envelopes propagate slowly (~5 m s⁻¹) eastwards between the Indian and Pacific Oceans. High-resolution satellite data show that a strong diurnal cycle is triggered to the east of the advancing MJO envelope, leading the active MJO by one-eighth of an MJO cycle (~6 days). Where the diurnal cycle is strong, its modulation accounts for 81% of the variability in MJO precipitation. Over land this determines the structure of the diagnosed MJO. This is consistent with the equatorial wave dynamics in existing theories of MJO propagation. The MJO also affects the speed of gravity waves propagating offshore from the Maritime Continent islands. This is largely consistent with changes in static stability during the MJO cycle. The MJO and its interaction with the diurnal cycle are investigated in HiGEM, a high-resolution coupled model. Unlike many models, HiGEM represents the MJO well, with eastward-propagating variability on intra-seasonal time scales at the correct zonal wavenumber, although the inter-tropical convergence zone's precipitation peaks strongly at the wrong time, interrupting the MJO's spatial structure. However, the modelled diurnal cycle is too weak and its phase is too early over land. The modulation of the diurnal amplitude by the MJO is also too weak and accounts for only 51% of the variability in MJO precipitation. Implications for forecasting and possible causes of the model errors are discussed, and further modelling studies are proposed.
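Statements of the form "modulation accounts for X% of the variability" can be read as the R² of a regression of MJO-phase-composite precipitation on the corresponding diurnal-cycle amplitude. The sketch below illustrates that calculation on synthetic composites; it is not the thesis's actual diagnostic, and all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical composites over the 8 MJO phases (synthetic values):
# daily-mean precipitation and the amplitude of the diurnal harmonic.
mjo_precip = np.array([6.0, 8.0, 8.5, 10.5, 9.0, 8.5, 6.0, 5.0])   # mm/day
diurnal_amp = np.array([2.0, 2.6, 3.2, 3.9, 3.4, 2.8, 2.2, 1.8])   # mm/day

# Fraction of phase-to-phase variability in MJO precipitation linearly
# "accounted for" by diurnal-cycle amplitude: the R^2 of a least-squares fit.
slope, intercept = np.polyfit(diurnal_amp, mjo_precip, 1)
fitted = slope * diurnal_amp + intercept
r2 = 1.0 - np.sum((mjo_precip - fitted) ** 2) / np.sum((mjo_precip - mjo_precip.mean()) ** 2)
print(f"fraction of MJO precipitation variance explained: {r2:.0%}")
```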
Abstract:
In recent years, research into the impact of genetic abnormalities on cognitive development, including language, has become recognized for its potential to make valuable contributions to our understanding of the brain–behaviour relationships underlying language acquisition as well as to understanding the cognitive architecture of the human mind. The publication of Fodor’s (1983) book The Modularity of Mind has had a profound impact on the study of language and the cognitive architecture of the human mind. Its central claim is that many of the processes involved in comprehension are undertaken by special brain systems termed ‘modules’. This domain specificity of language, or modularity, has become a fundamental feature that differentiates competing theories and accounts of language acquisition (Fodor 1983, 1985; Levy 1994; Karmiloff-Smith 1998). However, although the fact that the adult brain is modularized is hardly disputed, there are different views of how brain regions become specialized for specific functions. A question of some interest to theorists is whether the human brain is modularized from the outset (nativist view) or whether these distinct brain regions develop as a result of biological maturation and environmental input (neuroconstructivist view). One source of insight into these issues has been the study of developmental disorders, and in particular genetic syndromes, such as Williams syndrome (WS) and Down syndrome (DS). Because of their uneven profiles characterized by dissociations of different cognitive skills, these syndromes can help us address theoretically significant questions. Investigations into the linguistic and cognitive profiles of individuals with these genetic abnormalities have been used as evidence to advance theoretical views about innate modularity and the cognitive architecture of the human mind. The present chapter will be organized as follows. To begin, two different theoretical proposals in the modularity debate will be presented. Then studies of linguistic abilities in WS and in DS will be reviewed. Here, the emphasis will be mainly on WS because theoretical debates have focused primarily on WS, there is a larger body of literature on WS, and DS subjects have typically been used for purposes of comparison. Finally, the modularity debate will be revisited in light of the literature review of both WS and DS. Conclusions will be drawn regarding the contribution of these two genetic syndromes to the issue of cognitive modularity, and in particular innate modularity.
Abstract:
This paper studies the impact of exogenous and endogenous shocks (exogenous shock is used interchangeably with external shock; endogenous shock with domestic shock) on output fluctuations in post-communist countries during the 2000s. The first part presents the analytical framework and formulates a research hypothesis. The second part presents a vector autoregressive estimation and analysis model, following Pesaran (2004) and Pesaran and Smith (2006), that relates real bank lending, the cyclical component of output, and spreads, and accounts for cross-sectional dependence (CD) across the countries. Impulse response functions show that a positive exogenous shock leads to a drop in output for 9 of the 12 Central and Eastern European countries and Russia, whereas the effect of the endogenous shock is mild and ambiguous. Moreover, the effect of the exogenous shock is more significant during crises. Variance decompositions show that the exogenous shock had a substantial impact on the economic activity of emerging economies in the aftermath of the crisis.
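A minimal sketch of this kind of VAR exercise, using statsmodels on synthetic data: the variable names, lag order, and series are illustrative assumptions, not the paper's specification (and the cross-sectional dependence correction across countries is omitted here).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
# Synthetic quarterly series standing in for the paper's variables:
# real bank lending, the cyclical component of output, and spreads.
data = pd.DataFrame(
    rng.normal(size=(T, 3)) + 0.1 * rng.normal(size=(T, 3)).cumsum(axis=0),
    columns=["lending", "output_cycle", "spread"],
)

results = VAR(data).fit(maxlags=4, ic="aic")  # lag order chosen by AIC

# Impulse responses: the reaction of the output cycle over 12 periods
# to a one-s.d. shock in each variable (Cholesky-orthogonalized).
irf = results.irf(12)
print(irf.orth_irfs[:, data.columns.get_loc("output_cycle"), :])

# Forecast-error variance decomposition: the share of output variance
# attributable to each shock at each horizon.
fevd = results.fevd(12)
print(fevd.decomp[data.columns.get_loc("output_cycle")])
```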
Abstract:
Accurate knowledge of the location and magnitude of ocean heat content (OHC) variability and change is essential for understanding the processes that govern decadal variations in surface temperature, quantifying changes in the planetary energy budget, and developing constraints on the transient climate response to external forcings. We present an overview of the temporal and spatial characteristics of OHC variability and change as represented by an ensemble of dynamical and statistical ocean reanalyses (ORAs). Spatial maps of the 0–300 m layer show large regions of the Pacific and Indian Oceans where the interannual variability of the ensemble mean exceeds ensemble spread, indicating that OHC variations are well-constrained by the available observations over the period 1993–2009. At deeper levels, the ORAs are less well-constrained by observations, with the largest differences across the ensemble mostly associated with areas of high eddy kinetic energy, such as the Southern Ocean and boundary current regions. Spatial patterns of OHC change for the period 1997–2009 show good agreement in the upper 300 m and are characterized by a strong dipole pattern in the Pacific Ocean. There is less agreement in the patterns of change at deeper levels, potentially linked to differences in the representation of ocean dynamics, such as water mass formation processes. However, the Atlantic and Southern Oceans are regions in which many ORAs show widespread warming below 700 m over the period 1997–2009. Annual time series of global and hemispheric OHC change for 0–700 m show the largest spread for the data-sparse Southern Hemisphere, and a number of ORAs seem to be subject to a large initialization ‘shock’ over the first few years. In agreement with previous studies, a number of ORAs exhibit enhanced ocean heat uptake below 300 and 700 m during the mid-1990s or early 2000s. The ORA ensemble mean (±1 standard deviation) of rolling 5-year trends in full-depth OHC shows a relatively steady heat uptake of approximately 0.9 ± 0.8 W m⁻² (expressed relative to Earth’s surface area) between 1995 and 2002, which reduces to about 0.2 ± 0.6 W m⁻² between 2004 and 2006, in qualitative agreement with recent analysis of Earth’s energy imbalance. There is a marked reduction in the ensemble spread of OHC trends below 300 m as the Argo profiling float observations become available in the early 2000s. In general, we suggest that ORAs should be treated with caution when employed to understand past ocean warming trends—especially when considering the deeper ocean, where there is little in the way of observational constraints. The current work emphasizes the need to better observe the deep ocean, both for providing observational constraints for future ocean state estimation efforts and also to develop improved models and data assimilation methods.
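For readers unfamiliar with the unit convention, the conversion behind trends "expressed relative to Earth's surface area" is simple arithmetic. The sketch below converts an OHC trend in ZJ per year to W m⁻²; the heat-uptake value used is illustrative, not a number from the paper.

```python
# Convert an ocean-heat-content trend to a flux per unit of Earth's
# surface area, the convention behind the 0.9 W m^-2 figure above.
EARTH_SURFACE_AREA = 5.10e14   # m^2 (entire Earth, not just the ocean)
SECONDS_PER_YEAR = 3.156e7

ohc_trend_zj_per_yr = 14.5     # ZJ/yr, illustrative value only
ohc_trend_watts = ohc_trend_zj_per_yr * 1e21 / SECONDS_PER_YEAR  # J/yr -> W

flux = ohc_trend_watts / EARTH_SURFACE_AREA
print(f"{ohc_trend_zj_per_yr} ZJ/yr ~ {flux:.2f} W m^-2 of Earth's surface")
# 14.5 ZJ/yr ~ 0.90 W m^-2, the same scale as the ensemble-mean heat
# uptake quoted for 1995-2002.
```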
Abstract:
Activities involving fauna monitoring are usually limited by the lack of resources; therefore, the choice of a proper and efficient methodology is fundamental to maximize the cost-benefit ratio. Both direct and indirect methods can be used to survey mammals, but the latter are preferred because of the difficulty of sighting and/or capturing the individuals, besides being cheaper. We compared the performance, and assessed the costs, of two methods for surveying medium- and large-sized mammals: track plot recording and camera trapping. At Jatai Ecological Station (21°31'15"S, 47°34'42"W, Brazil) we installed ten camera traps along a dirt road, directly in front of ten track plots, and monitored them for 10 days. We cleaned the plots, adjusted the cameras, and noted the recorded species daily. Records taken by both methods showed that they sample the local richness in different ways (Wilcoxon, T = 231; p < 0.01). The track plot method performed better at registering individuals, whereas camera trapping provided records that permitted more accurate species identification. The type of infrared-sensor camera used showed a strong bias towards individual body mass (R² = 0.70; p = 0.017), and the variable expenses of this method over a 10-day survey were estimated to be about 2.04 times higher than those of the track plot method; in the long run, however, camera trapping becomes cheaper than track plot recording. In conclusion, track plot recording is good enough for quick surveys under a limited budget, while camera trapping is best for precise species identification and the investigation of species details, performing better for large animals. When used together, these methods can be complementary.
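The cost claim above (camera trapping roughly 2.04× more expensive in variable costs over 10 days, yet cheaper in the long run) is a classic fixed-versus-variable-cost trade-off. The sketch below works out the break-even survey length; all prices are hypothetical, not values from the study.

```python
# Break-even between two survey methods: cameras have a high fixed cost
# (equipment) but a low daily running cost; track plots are the reverse.
# All monetary values are hypothetical, for illustration only.
camera_fixed, camera_per_day = 5000.0, 20.0   # equipment; batteries/cards
plots_fixed, plots_per_day = 200.0, 100.0     # plot setup; daily labour

# Total cost after d survey days:
#   camera: camera_fixed + camera_per_day * d
#   plots:  plots_fixed  + plots_per_day  * d
# Break-even where the two totals are equal:
break_even_days = (camera_fixed - plots_fixed) / (plots_per_day - camera_per_day)
print(f"camera trapping becomes cheaper after {break_even_days:.0f} days")
```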
Abstract:
The calls urging colleges and universities to improve their productivity are coming thick and fast in Brazil. Many studies suggest evaluation systems and external criteria to control university production in qualitative terms. Since universities and colleges are not profit-oriented organizations (considering only the fair and serious research and teaching organizations, of course), the traditional microeconomic and administrative variables used to measure efficiency have no direct application. In this sense, an "as if" market control system could be created to evaluate university and college production. The budget, and the resource allocation mechanism inside it, can be used as an incentive instrument to improve quality and productivity. This is the main issue of this paper.
Abstract:
This manuscript demonstrates that voters have nothing to fear when new hard-budget-constraint legislation is implemented. Our claim is that this kind of legislation reduces the asymmetry of information between voters and incumbents over the budget and that, as a consequence, incumbents have incentives to increase the supply of public goods. As a nationwide institutional innovation, the Fiscal Responsibility Law (FRL) is exogenous to all municipalities; therefore, there is no self-selection bias in its implementation. First, we show that public goods expenditure increases after the FRL. Second, this increase occurs in municipalities located in the country’s poorest region. Third, our findings can be extended to the supply of public goods because the higher the expenditure on health and education, the greater the probability of incumbents being re-elected. Finally, there is a “de facto” higher supply of public goods in education (number of classrooms per capita) after the FRL.
Abstract:
The literature has emphasized that absorptive capacity (AC) leads to performance, but its influence on projects is still unclear. Additionally, project success is not well understood in the literature, and AC can be an important mechanism to explain it. Therefore, the purpose of this study is to investigate the effect of absorptive capacity on project performance in the construction industry of São Paulo State. We study this influence through the potential and realized absorptive capacity proposed by Zahra and George (2002). To achieve this goal, we use a combination of qualitative and quantitative research. The qualitative research is based on 15 interviews with project managers in different sectors, conducted to understand the main constructs and support the subsequent quantitative phase; content analysis was the technique used to analyze those interviews. In the quantitative phase, through a survey questionnaire, we collected 157 responses from project managers in the construction sector. Confirmatory factor analysis and hierarchical linear regression were the techniques used to assess the data. Our findings suggest that realized absorptive capacity has a positive influence on performance, but potential absorptive capacity and the interaction effects have no influence on performance. Moreover, planning and monitoring have a positive impact on budget, schedule, and customer satisfaction, while risk coping capacity has a positive impact on business success. In academic terms, this research enables a better understanding of the importance of absorptive capacity in the construction industry and confirms that applying knowledge in processes and routines enhances performance. For management, absorptive capacity enables the improvement of internal capabilities, reflected in increased project management efficiency. Indeed, when a company manages project practices efficiently it enhances business and project performance; however, it first needs to improve its internal abilities to enrich processes and routines with relevant knowledge.
Abstract:
This thesis contains three chapters. The first chapter uses a general equilibrium framework to simulate and compare the long-run effects of the Patient Protection and Affordable Care Act (PPACA) and of health care cost reduction policies on macroeconomic variables, the government budget, and the welfare of individuals. We found that all policies were able to reduce the uninsured population, with the PPACA being more effective than cost reductions. The PPACA increased the public deficit, mainly due to the Medicaid expansion, forcing tax hikes. On the other hand, cost reductions alleviated the fiscal burden of public insurance, reducing the public deficit and taxes. Regarding welfare effects, the PPACA as a whole and cost reductions are welfare-improving. High welfare gains would be achieved if U.S. medical costs followed the same trend as those of OECD countries. Moreover, feasible cost reductions are more welfare-improving than most of the PPACA components, proving to be a good alternative. The second chapter documents that life-cycle general equilibrium models with heterogeneous agents have a very hard time reproducing the American wealth distribution. A common assumption made in this literature is that all young adults enter the economy with no initial assets. In this chapter, we relax this assumption, which is not supported by the data, and evaluate the ability of an otherwise standard life-cycle model to account for U.S. wealth inequality. The new feature of the model is that agents enter the economy with assets drawn from an initial distribution of assets. We found that heterogeneity with respect to initial wealth is key for this class of models to replicate the data. According to our results, American inequality can be explained almost entirely by the fact that some individuals are lucky enough to be born into wealth, while others are born with few or no assets. The third chapter documents that a common assumption adopted in life-cycle general equilibrium models is that the population is stable at steady state, that is, its relative age distribution becomes constant over time. An open question is whether the demographic assumptions commonly adopted in these models in fact imply that the population becomes stable. In this chapter we prove the existence of a stable population in a demographic environment where both the age-specific mortality rates and the population growth rate are constant over time, the setup commonly adopted in life-cycle general equilibrium models. Hence, the stability of the population does not need to be taken as an assumption in these models.
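The third chapter's stability result can be illustrated numerically: under constant age-specific survival rates and a constant growth rate of the newborn cohort, the relative age distribution converges regardless of where the population starts. The sketch below is an illustrative toy model, not the chapter's proof, and all parameter values are invented.

```python
import numpy as np

# Toy demography: constant age-specific survival and a constant growth
# rate of the newborn cohort, as assumed in life-cycle GE models.
survival = np.array([0.99, 0.98, 0.95, 0.90, 0.80])  # survive age a -> a+1
n_growth = 0.01                                       # newborn cohort growth

def step(pop, newborns):
    """Advance the population one period: age everyone, add newborns."""
    new_pop = np.empty_like(pop)
    new_pop[0] = newborns
    new_pop[1:] = survival * pop[:-1]
    return new_pop

# Two very different initial age distributions...
pop_a = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
pop_b = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0])
newborns = 1.0
for t in range(300):
    newborns *= 1 + n_growth
    pop_a = step(pop_a, newborns)
    pop_b = step(pop_b, newborns)

# ...converge to the same relative (stable) age distribution.
print(pop_a / pop_a.sum())
print(pop_b / pop_b.sum())
```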
Abstract:
Developing software is still a risky business. After 60 years of experience, the community is still unable to consistently build Information Systems (IS) for organizations with predictable quality, within previously agreed budget and time constraints. Although software is changeable, we are still unable to cope with the amount and complexity of change that organizations demand for their IS. To improve results, developers have followed two alternatives: frameworks, which increase productivity but constrain the flexibility of possible solutions; and agile ways of developing software, which keep flexibility with fewer upfront commitments. With strict frameworks, specific hacks have to be put in place to get around the framework's construction options. In time this leads to inconsistent architectures that are harder to maintain due to incomplete documentation and staff turnover. The main goal of this work is to create a new way to develop flexible IS for organizations, using web technologies, in a faster, better, and cheaper way that is better suited to handling organizational change. To do so we propose an adaptive object model that uses a new ontology for data and action with strict normalizing rules. These rules should bound the effects of changes, which can then be better tested and therefore corrected. Interfaces are built with templates of resources that can be reused and extended in a flexible way. The “state of the world” for each IS is determined by all production and coordination acts that agents have performed over time, even those performed by external systems. When bugs are found during maintenance, their past cascading effects can be checked through simulation, re-running the log of transaction acts over time and checking results against previous records. This work implements a prototype with part of the proposed system in order to make a preliminary assessment of its feasibility and limitations.
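The "state of the world" reconstruction described above is essentially event sourcing: current state is derived by replaying the log of acts, which is also what enables the bug-impact simulation the abstract mentions. Below is a minimal, hypothetical sketch of that idea; the names and structure are illustrative, not the thesis's actual model.

```python
from dataclasses import dataclass

@dataclass
class Act:
    """A production or coordination act recorded in the transaction log."""
    agent: str
    kind: str      # e.g. "produce" or "coordinate" (hypothetical labels)
    item: str
    qty: int

def replay(log):
    """Derive the current 'state of the world' from the full log of acts."""
    state = {}
    for act in log:
        state[act.item] = state.get(act.item, 0) + act.qty
    return state

log = [
    Act("alice", "produce", "order-42", 1),
    Act("bob", "coordinate", "order-42", -1),  # e.g. fulfilment
]

# After fixing a bug, its past cascading effects can be checked by
# re-running the (corrected) log and diffing against recorded state.
print(replay(log))
```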
Abstract:
We present a generic spatially explicit modeling framework to estimate carbon emissions from deforestation (INPE-EM). The framework incorporates the temporal dynamics related to the deforestation process and accounts for the biophysical and socioeconomic heterogeneity of the region under study. We build an emission model for the Brazilian Amazon combining annual maps of new clearings, four maps of biomass, and a set of alternative parameters based on the recent literature. The most important results are as follows: (a) Using different biomass maps leads to large differences in emission estimates; for the entire Brazilian Amazon in the last decade, emission estimates from primary forest deforestation range from 0.21 to 0.26 Pg C yr⁻¹. (b) Secondary vegetation growth has a small impact on the emission balance because of the short duration of secondary vegetation; on average, the balance is only 5% smaller than the primary forest deforestation emissions. (c) Deforestation rates decreased significantly in the Brazilian Amazon in recent years, from 27 × 10³ km² in 2004 to 7 × 10³ km² in 2010. INPE-EM process-based estimates reflect this decrease even though the agricultural frontier is moving to areas of higher biomass. The decrease is slower than a non-process instantaneous model would estimate because the framework considers residual emissions (slash, wood products, and secondary vegetation). The average balance, considering all biomass, decreases from 0.28 Pg C yr⁻¹ in 2004 to 0.15 Pg C yr⁻¹ in 2009; the non-process model estimates a decrease from 0.33 to 0.10 Pg C yr⁻¹. We conclude that INPE-EM is a powerful tool for representing deforestation-driven carbon emissions. Biomass estimates are still the largest source of uncertainty in the effective use of this type of model for informing mechanisms such as REDD+. The results also indicate that efforts to reduce emissions should focus not only on controlling primary forest deforestation but also on creating incentives for the restoration of secondary forests.
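The gap between the process-based and the "non-process instantaneous" estimates above comes from when carbon is counted: an instantaneous model books all cleared biomass carbon in the clearing year, while a process model spreads part of it over time (slash decay, wood products) and nets out secondary-vegetation regrowth. The sketch below illustrates the first of these mechanisms with a toy first-order slash decay; the parameters are illustrative, not INPE-EM's.

```python
import numpy as np

# Toy comparison: emissions from a declining clearing series under an
# instantaneous model vs. a process model with first-order slash decay.
# All parameter values are illustrative only.
cleared_carbon = np.array([10.0, 9.0, 7.0, 4.0, 2.0])  # carbon cleared per year
burn_fraction = 0.4    # share emitted immediately (burning)
decay_rate = 0.3       # per-year decay of the remaining slash pool

years = len(cleared_carbon)
instantaneous = cleared_carbon.copy()     # everything booked at once
process = np.zeros(years)
for t, cleared in enumerate(cleared_carbon):
    process[t] += burn_fraction * cleared
    pool = (1 - burn_fraction) * cleared  # slash left to decay
    for dt in range(years - t):           # residual (committed) emissions
        emitted = pool * decay_rate
        process[t + dt] += emitted
        pool -= emitted

# When clearing rates fall, process-based emissions decline more slowly
# because of the residual flux committed by earlier years' clearings.
print("instantaneous:", instantaneous)
print("process-based:", np.round(process, 2))
```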