243 results for mandates
Resumo:
This work examines the political-economic relations between Brazil and Venezuela from 2003 to 2010, during the mandates of Luiz Inácio Lula da Silva and Hugo Chávez Frías. After a historical overview of Venezuela and of the first approximations between the two countries, it sets out the cooperation projects and categorizes and studies the International Acts signed during the period. Graphs and tables support a quantitative and qualitative analysis, and the growth of bilateral trade is also examined. The results point to prospects that may contribute to cooperation and international trade between the two countries.
Resumo:
Since it was proposed, the Ibsen Pinheiro amendment has given rise to a heated debate in Brazilian society. The amendment proposes to share the revenue generated by royalties and special participations from offshore oil drilling among all municipalities and states, using the states' and municipalities' participation funds for that purpose. This new distribution breaks sharply with the current model, which mandates that these resources be distributed only to producing municipalities and states. Since the new proposal affects all offshore operations, and not only those in the new pre-salt areas, the producing municipalities and states would face an immediate loss of revenue on the scale of billions of reais. This study analyzes the validity of the redistribution criteria for government participations proposed by the Ibsen amendment. To that end, we present the current Brazilian government participation model, focusing on the overfinancing of subnational spheres that it generates, and compare it with a distribution simulation using the Ibsen amendment criteria, of the kind sketched below. The comparison shows that the current beneficiaries would face a significant loss of revenue and that the resulting distribution favors the poorest regions of the country, a consequence of using the states and municipalities participation funds. The last part of the study discusses the arguments for and against the amendment, especially those of a constitutional character, most of which weigh against the Ibsen amendment.
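A minimal sketch of the kind of before/after comparison described above, with entirely hypothetical revenue figures and fund shares (the actual study uses real participation-fund coefficients and revenue data):

```python
# Hypothetical illustration of the redistribution comparison described above.
# Figures and shares are invented for the sketch, not taken from the study.

royalty_pool = 10_000.0  # total government participations to distribute (millions of reais, hypothetical)

# Current model: revenue concentrated in producing entities.
current_shares = {"producing_states": 0.55, "producing_municipalities": 0.45, "all_others": 0.00}

# Ibsen amendment: distribution through the states/municipalities participation funds,
# here reduced to three aggregate buckets with made-up weights.
ibsen_shares = {"producing_states": 0.08, "producing_municipalities": 0.05, "all_others": 0.87}

def distribute(pool, shares):
    """Split a revenue pool according to a dict of fractional shares."""
    return {bucket: pool * share for bucket, share in shares.items()}

current = distribute(royalty_pool, current_shares)
proposed = distribute(royalty_pool, ibsen_shares)

for bucket in current:
    delta = proposed[bucket] - current[bucket]
    print(f"{bucket}: {current[bucket]:.0f} -> {proposed[bucket]:.0f} (change {delta:+.0f})")
```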
Resumo:
This research aims to analyze the behavior of the national legislature at five key moments in the institutionalization of defense policy in Brazil: (i) the approval of the first version of the National Defense Policy, (ii) the creation of the Ministry of Defence, (iii) the approval of the second version of the National Defense Policy, (iv) the approval of the National Defense Strategy, and (v) the approval of Supplementary Law No. 136 of 2010, which, among other things, provides for the creation of the White Book of National Defense. This process covers the mandates of the Fernando Henrique Cardoso and Luiz Inácio Lula da Silva governments (1995-2010). In addition to describing each of these moments, we discuss the country's performance on the regional and international security agenda and survey the resources available to Congress to strengthen its participation in the formulation of defense policy. The approval processes of each policy and law are related to one another in order to show to what extent the Legislative Power is able to change matters that, in general, are proposed by the Executive Power. Finally, the study finds that the growth of the Legislative Power's participation in defense policy matters has been timid, yet important, because it increases popular representation on the theme.
Resumo:
Graduate Program in History - FCHS
Resumo:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Resumo:
Graduate Program in Education - FFC
Resumo:
In the developed world, grid-connected photovoltaics (PV) are the fastest-growing segment of the energy market. From 1999 to 2009 the industry posted a 42% compound annual growth rate, which is expected to reach 45% from 2009 to 2013; in 2013, grid parity - the point at which the cost of solar electricity becomes competitive with conventional retail grid-supplied electricity (including taxes and charges) - is expected in many places worldwide. Grid-connected PV is usually perceived as an energy technology for developed countries, whereas isolated, stand-alone PV is considered better suited to developing nations, where so many individuals still lack access to electricity. This rationale is based on the still high costs of PV compared with conventional electricity. We make the case for grid-connected PV generation in Brazil, showing that with the declining costs of PV and the rising prices of conventional electricity, urban populations in Brazil will also reach grid parity in the present decade. We argue that governments in developing nations should act promptly and establish the mandates and conditions necessary for their energy industry to accumulate experience with grid-connected PV and make the most of this benign technology in the near future. (C) 2010 Elsevier Ltd. All rights reserved.
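A minimal sketch of the grid-parity reasoning, assuming purely illustrative starting costs and annual rates (none of these figures come from the paper): parity is taken as the first year in which the declining PV cost no longer exceeds the rising retail tariff.

```python
# Hypothetical sketch of the grid-parity argument: PV cost declines while the
# retail tariff rises, and parity is the first year the PV cost is not higher.
# All numbers below are illustrative, not from the paper.

pv_cost = 0.60        # PV electricity cost in year 0 (currency/kWh, hypothetical)
retail_tariff = 0.40  # conventional retail tariff in year 0 (hypothetical)
pv_decline = 0.08     # assumed annual decline in PV cost
tariff_growth = 0.05  # assumed annual growth in the retail tariff

for year in range(16):
    if pv_cost <= retail_tariff:
        print(f"Grid parity reached in year {year}: PV {pv_cost:.2f} vs retail {retail_tariff:.2f}")
        break
    pv_cost *= (1 - pv_decline)
    retail_tariff *= (1 + tariff_growth)
else:
    print("No parity within the horizon considered")
```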
Resumo:
Asset Management (AM) is a set of procedures operable at the strategic-tactical-operational level for the management of a physical asset's performance, associated risks and costs over its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the rehabilitation technique is far less straightforward. Rehabilitating an asset too early is unwise, just as doing it too late entails extra expenses en route, in addition to the cost of the rehabilitation itself. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair', or, put another way, 'to replace or not to replace'. The decision is governed by three factors, not necessarily interrelated: quality of customer service, costs and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible; effective planning targets maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimate when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a comparison between maintenance and replacement expenditures. The costs of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results recommend that a single common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it is wiser to define different values for different cohorts of pipelines to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimate of the expected service life will be. The ability to include social costs makes it possible to compute the asset life not only from its physical characterisation but also from the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is its effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration, and the asset's depreciation rate, without relying on age as the sole criterion for replacement decisions.
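A minimal sketch of the replacement-timing comparison described above, under purely hypothetical cost parameters: the existing pipe's annual maintenance and risk costs grow with age, and the suggested replacement year is the first year in which they exceed the annuity (equivalent annual cost) of investing in a new pipe. Function names and figures are illustrative, not taken from the dissertation.

```python
# Hypothetical sketch: replace the pipe in the first year the rising annual
# maintenance-plus-risk cost of the existing asset exceeds the annuity of a
# new equivalent pipe. All parameters are illustrative only.

def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of an investment (capital recovery factor)."""
    crf = rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)
    return investment * crf

def optimal_replacement_year(maintenance_now, maintenance_growth, risk_cost_now,
                             risk_growth, investment, rate, lifetime_years, horizon=50):
    replacement_annuity = annuity(investment, rate, lifetime_years)
    for year in range(horizon):
        # Ageing asset: maintenance and expected failure (risk) costs grow with time.
        ageing_cost = (maintenance_now * (1 + maintenance_growth) ** year
                       + risk_cost_now * (1 + risk_growth) ** year)
        if ageing_cost > replacement_annuity:
            return year, ageing_cost, replacement_annuity
    return None, None, replacement_annuity

year, cost, ann = optimal_replacement_year(
    maintenance_now=2_000, maintenance_growth=0.06,   # currency/year, growing 6% p.a.
    risk_cost_now=500, risk_growth=0.10,              # expected failure/social cost
    investment=60_000, rate=0.04, lifetime_years=80)  # new pipe, 4% discount, 80-year life

print(f"Suggested replacement in year {year}: ageing cost {cost:.0f} vs annuity {ann:.0f}")
```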
Resumo:
The main topic of the thesis is the conflict between disclosure in financial markets and the firm's need for confidentiality. After a review of the major dynamics of information production and dissemination in the stock market, the analysis moves to the interactions between the information that a firm is typically interested in keeping confidential, such as trade secrets or the data usually covered by patent protection, and the countervailing demand for disclosure arising from financial markets. The analysis demonstrates that despite the seeming divergence between the informational contents typically disclosed to investors and the information usually covered by intellectual property protection, the overlapping areas are nonetheless wide, and the conflict between transparency in financial markets and the firm's need for confidentiality arises frequently and systematically. Indeed, the company's disclosure policy is based on a continuous trade-off between the costs and the benefits of the public dissemination of information. Such costs are mainly represented by the competitive harm caused by competitors' access to sensitive data, while the benefits mainly refer to the lower cost of capital that the firm obtains as a consequence of greater disclosure. Secrecy shields the value of costly produced information against third parties' free riding and therefore constitutes a means to protect the firm's incentives toward the production of new information and especially toward technological and business innovation. Excessively demanding standards of transparency in financial markets might hinder this set of incentives and thus jeopardize the dynamics of innovation production. Within Italian securities regulation, two sets of rules are most relevant to this issue: the first is the rule that mandates issuers to promptly disclose all price-sensitive information to the market on an ongoing basis; the second is the duty to disclose in the prospectus all the information "necessary to enable investors to make an informed assessment" of the issuer's financial and economic perspectives. Both rules impose high disclosure standards and have potentially unlimited scope, yet both provide safe harbours aimed at protecting the issuer's need for confidentiality. Despite the structural incompatibility between the public dissemination of information and the firm's need to keep certain data confidential, there are ways to convey information to the market while preserving the firm's need for confidentiality. Such means are insider trading and selective disclosure: both are based on mechanics whereby the process of price reaction to the new information takes place without any corresponding public release of data. They therefore offer a solution to the conflict between disclosure and the need for confidentiality that enhances market efficiency while preserving the private set of incentives toward innovation.
Resumo:
Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware to exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling (see the references therein for further details). An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems of syntax - e.g., two tuples containing the same data may not match because of differences in tuple structure - and, mostly, of semantics - e.g., two tuples representing the same information may not match because they adopt different syntaxes. Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within a Semantic Web middleware (which surveys analogous approaches). However, such approaches appear to be designed to tackle coordination for specific application contexts like the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space whose behaviour can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, while supporting semantic reasoning, keeps tuples and tuple matching as simple as possible.
By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem suitable as coordination media.
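A toy, self-contained sketch of the syntactic-vs-semantic matching problem discussed above: a Linda-style tuple space whose read operation tolerates differences in field naming by normalising names through a small synonym map. This is only a didactic illustration with invented names; it is not the semantic tuple centre model or its actual infrastructure.

```python
# Didactic sketch: a Linda-style tuple space whose matching tolerates
# syntactic differences by mapping field names through a tiny synonym
# "ontology". Names and data are invented for illustration.

SYNONYMS = {"author": "creator", "writer": "creator", "creator": "creator",
            "title": "title", "name": "title"}

def normalise(tuple_dict):
    """Rewrite field names to a canonical form before matching."""
    return {SYNONYMS.get(k, k): v for k, v in tuple_dict.items()}

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, t):
        """Insert a tuple into the space."""
        self.tuples.append(t)

    def rd(self, template):
        """Non-destructive read: return the first tuple matching the template,
        where None values in the template act as wildcards."""
        wanted = normalise(template)
        for t in self.tuples:
            cand = normalise(t)
            if all(cand.get(k) == v for k, v in wanted.items() if v is not None):
                return t
        return None

ts = TupleSpace()
ts.out({"writer": "Eco", "name": "Il nome della rosa"})
# A syntactically different template still matches thanks to the synonym map.
print(ts.rd({"author": "Eco", "title": None}))
```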
Resumo:
This research was designed to answer the question of which direction the restructuring of financial regulators should take: consolidation or fragmentation. It began by examining the need for financial regulation and its related costs, and then described the types of regulatory structures that exist in the world, surveying the regulatory structures in 15 jurisdictions, comparing them and discussing their strengths and weaknesses. The research analyzed the possible regulatory structures using three methodological tools: game theory, institutional design, and network effects. The incentives for regulatory action were examined in Chapter Four using game-theory concepts. This chapter predicted how two regulators with overlapping supervisory mandates would behave in two different states of the world (one in which they stand to benefit from regulating and one in which they stand to lose). The insights derived from the games described in this chapter were then used to analyze the different supervisory models that exist in the world. The problem of information flow was discussed in Chapter Five using tools from institutional design; the idea is that the right kind of information must reach the decision maker in the shortest time possible in order to predict, mitigate or stop a financial crisis. Network effects and congestion in the context of financial regulation were discussed in Chapter Six, which applied the general literature on network effects in an attempt to determine whether consolidating financial regulatory standards at a global level might also yield other positive network effects. Returning to the main research question, the research concluded that, in general, the fragmented model should be preferable to the consolidated model in most cases, as it allows for greater diversity and information flow; however, in cases where close cooperation between two authorities is essential, the consolidated model should be used.
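A minimal sketch of the kind of two-regulator game described above, with invented payoffs: in the "benefit" state both regulators want the credit for acting, while in the "loss" state each prefers the other to bear the cost, so both abstain. The payoff numbers and the Nash-equilibrium helper are purely illustrative and are not taken from the thesis.

```python
# Hypothetical two-regulator game over an overlapping mandate, in two states
# of the world. Payoffs are invented for illustration only.

import itertools

GAMES = {
    "benefit_state": {   # regulating brings credit; both want to act
        ("Act", "Act"): (2, 2), ("Act", "Abstain"): (4, 0),
        ("Abstain", "Act"): (0, 4), ("Abstain", "Abstain"): (1, 1)},
    "loss_state": {      # regulating is costly; each prefers the other to act
        ("Act", "Act"): (-1, -1), ("Act", "Abstain"): (-3, 1),
        ("Abstain", "Act"): (1, -3), ("Abstain", "Abstain"): (-2, -2)},
}

def pure_nash(game):
    """Return pure-strategy Nash equilibria of a 2x2 game given as a payoff dict."""
    strategies = ("Act", "Abstain")
    equilibria = []
    for a, b in itertools.product(strategies, strategies):
        pa, pb = game[(a, b)]
        best_a = all(pa >= game[(alt, b)][0] for alt in strategies)
        best_b = all(pb >= game[(a, alt)][1] for alt in strategies)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

for state, game in GAMES.items():
    print(state, pure_nash(game))  # both act in the benefit state, both abstain in the loss state
```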
Resumo:
Three-month anticoagulation is recommended to treat provoked or first distal deep-vein thrombosis (DVT), and indefinite-duration anticoagulation should be considered for patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. In the prospective Outpatient Treatment of Deep Vein Thrombosis in Switzerland (OTIS-DVT) Registry of 502 patients with acute, objectively confirmed lower-extremity DVT (59% provoked or first distal DVT; 41% unprovoked proximal, unprovoked recurrent, or cancer-associated DVT) from 53 private practices and 11 hospitals, we investigated the planned duration of anticoagulation at the time of treatment initiation. The decision to administer limited-duration anticoagulation therapy was made in 343 (68%) patients, with a median duration of 107 days (interquartile range 91-182) for provoked or first distal DVT and 182 days (interquartile range 111-184) for unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. Among patients with provoked or first distal DVT, anticoagulation was recommended for < 3 months in 11%, 3 months in 63%, and for an indefinite period in 26%. Among patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT, anticoagulation was recommended for < 6 months in 22%, 6-12 months in 38%, and for an indefinite period in 40%. Overall, indefinite-duration therapy was planned more frequently by hospital physicians than by private practice physicians (39% vs. 28%; p=0.019). Considerable inconsistency in planning the duration of anticoagulation therapy mandates an improvement in the risk stratification of outpatients with acute DVT.
Resumo:
Municipalities in the United States have, over the past two decades, initiated two policies to reduce residential solid waste generation by increasing recycling. The first policy, implemented in over 4,000 municipalities in the United States, requires households to pay a fee for each unit of garbage presented at the curb for collection. The second policy, initiated in 8,875 municipalities, subsidizes household recycling efforts by providing free curbside collection of certain recyclable materials. Both initiatives serve as examples of the incentive-based environmental policies favored by many economists. But before economists can celebrate this widespread adoption of incentive-based environmental policies, further examination reveals that potentially inefficient command-and-control policies have been more instrumental in promoting recycling than might be commonly known. This article examines the empirical lessons gained from studying twenty years of solid waste policy in the United States and argues for the replacement of several state recycling mandates with a system of state and/or national landfill taxes.
Resumo:
Decompressive craniectomy (DC) due to intractably elevated intracranial pressure mandates later cranioplasty (CP). However, the optimal timing of CP remains controversial. We therefore analyzed our prospectively conducted database concerning the timing of CP and associated post-operative complications. From October 1999 to August 2011, 280 cranioplasty procedures were performed at the authors' institution. Patients were stratified into two groups according to the time from DC to cranioplasty (early, ≤2 months, and late, >2 months). Patient characteristics, timing of CP, and CP-related complications were analyzed. Overall CP was performed early in 19% and late in 81%. The overall complication rate was 16.4%. Complications after CP included epidural or subdural hematoma (6%), wound healing disturbance (5.7%), abscess (1.4%), hygroma (1.1%), cerebrospinal fluid fistula (1.1%), and other (1.1%). Patients who underwent early CP suffered significantly more often from complications compared to patients who underwent late CP (25.9% versus 14.2%; p=0.04). Patients with ventriculoperitoneal (VP) shunt had a significantly higher rate of complications after CP compared to patients without VP shunt (p=0.007). On multivariate analysis, early CP, the presence of a VP shunt, and intracerebral hemorrhage as underlying pathology for DC, were significant predictors of post-operative complications after CP. We provide detailed data on surgical timing and complications for cranioplasty after DC. The present data suggest that patients who undergo late CP might benefit from a lower complication rate. This might influence future surgical decision making regarding optimal timing of cranioplasty.
Resumo:
Consultation is promoted throughout the school psychology literature as a best practice in service delivery. This method has numerous benefits, including being able to work with more students at one time, providing practitioners with preventative rather than strictly reactive strategies, and helping school professionals meet state and federal education mandates and initiatives. Despite the benefits of consultation, teachers are sometimes resistant to this process. This research studies variables hypothesized to lead to resistance (Gonzalez, Nelson, Gutkin, & Shwery, 2004), attempts to distinguish differences between school levels (elementary, middle and high school) with respect to the role played by these variables, and aims to determine whether the model used to identify students for special education services has an influence on resistance factors. Twenty-six teachers in elementary and middle schools responded to a demographic questionnaire and a survey developed by Gonzalez et al. (2004). This survey measures eight variables related to resistance to consultation. No high school teachers responded to the request to participate. Results of analysis of variance indicated a significant difference in the teaching efficacy subscale, with elementary teachers reporting more efficacy in teaching than middle school teachers. Results also indicate a significant difference in classroom management efficacy, with teachers who work in schools that identify students according to a Response to Intervention (RtI) model reporting higher classroom management efficacy than teachers who work in schools that identify students according to a combined refer-test-place/RtI model. Implications, limitations and directions for future research are discussed.