272 results for Unfunded mandates
Abstract:
Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for managing a physical asset's performance, associated risks and costs over its whole life-cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the right time for rehabilitation and choosing the technique to adopt is far less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair', or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning targets maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The costs to maintain the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimate of the expected service life will be.
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation, but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for making replacement decisions.
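The replace-or-maintain comparison described above can be sketched as a simple break-even computation: replace when the (rising) annual maintenance and risk cost of the existing pipe exceeds the equivalent annual cost of a new one. The cost function, growth rate, prices and interest rate below are illustrative assumptions, not figures from the dissertation:

```python
# Hypothetical sketch of the replace-vs-maintain comparison.
# All names and numbers are illustrative, not values from the study.

def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of a new asset bought for `investment`,
    amortised at interest `rate` over `lifetime_years`."""
    return investment * rate / (1 - (1 + rate) ** -lifetime_years)

def optimal_replacement_year(maintenance_cost, new_asset_annuity, horizon=100):
    """First year in which the annual maintenance-plus-risk cost of the
    existing pipe exceeds the annuity of an equivalent new pipe."""
    for year in range(1, horizon + 1):
        if maintenance_cost(year) > new_asset_annuity:
            return year
    return None  # keep maintaining within the planning horizon

# Example: maintenance/risk cost grows 8% per year from a base of 1,000/yr;
# a new pipe costs 50,000, financed at 4% over a 60-year service life.
new_annuity = annuity(50_000, 0.04, 60)
year = optimal_replacement_year(lambda t: 1_000 * 1.08 ** t, new_annuity)
print(new_annuity, year)
```

With these assumed numbers the annuity is roughly 2,210 per year, and the crossover falls in year 11; in practice the cost function would be fitted per pipe cohort, as the case study recommends.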
Abstract:
The main topic of this thesis is the conflict between disclosure in financial markets and the firm's need for confidentiality. After a review of the major dynamics of information production and dissemination in the stock market, the analysis moves to the interactions between the information that a firm is typically interested in keeping confidential, such as trade secrets or the data usually covered by patent protection, and the countervailing demand for disclosure arising from financial markets. The analysis demonstrates that, despite the seeming divergence between the informational content typically disclosed to investors and the information usually covered by intellectual property protection, the overlapping areas are nonetheless wide, and the conflict between transparency in financial markets and the firm's need for confidentiality arises frequently and systematically. Indeed, the company's disclosure policy is based on a continuous trade-off between the costs and the benefits related to the public dissemination of information. Such costs are mainly represented by the competitive harm caused by competitors' access to sensitive data, while the benefits mainly refer to the lower cost of capital that the firm obtains as a consequence of more disclosure. Secrecy shields the value of costly produced information against third parties' free riding and therefore constitutes a means to protect the firm's incentives toward the production of new information, and especially toward technological and business innovation. Excessively demanding standards of transparency in financial markets might hinder this set of incentives and thus jeopardize the dynamics of innovation production.
Within Italian securities regulation, two sets of rules are most relevant to this issue: the first is the rule that mandates issuers to promptly disclose all price-sensitive information to the market on an ongoing basis; the second is the duty to disclose in the prospectus all the information "necessary to enable investors to make an informed assessment" of the issuer's financial and economic perspectives. Both rules impose high disclosure standards and have potentially unlimited scope. Yet they have safe harbours aimed at protecting the issuer's need for confidentiality. Despite the structural incompatibility between the public dissemination of information and the firm's need to keep certain data confidential, there are ways to convey information to the market while preserving the firm's confidentiality. Such means are insider trading and selective disclosure: both are based on mechanics whereby the process of price reaction to new information takes place without any corresponding public release of data. Therefore, they offer a solution to the conflict between disclosure and the need for confidentiality that enhances market efficiency while preserving the private set of incentives toward innovation.
Abstract:
Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated.
Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because different syntax was adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within a Semantic Web middleware, and analogous approaches surveyed in the literature. However, these appear to be designed to tackle the design of coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. Then, the tuple centre model was semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components.
The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model based on an existent coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
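The programmable-tuple-space idea underlying tuple centres can be sketched in a few lines: a Linda-style space with `out`/`rd`/`in` primitives, plus reactions that fire on interaction events. The class below and its reaction API are illustrative assumptions, not the thesis' actual interface or infrastructure:

```python
# Minimal sketch of a Linda-style tuple space extended with reactions,
# loosely inspired by the tuple centre model. Names are illustrative.

WILDCARD = object()  # stands for a formal (unbound) field: matches anything

class TupleCentre:
    def __init__(self):
        self.tuples = []
        self.reactions = []  # (template, callback) pairs fired on insertion

    @staticmethod
    def matches(template, tup):
        return (len(template) == len(tup) and
                all(f is WILDCARD or f == v for f, v in zip(template, tup)))

    def out(self, tup):
        """Insert a tuple and fire any reactions registered for it."""
        self.tuples.append(tup)
        for template, callback in self.reactions:
            if self.matches(template, tup):
                callback(self, tup)

    def rd(self, template):
        """Non-destructive read of the first matching tuple (None if absent)."""
        return next((t for t in self.tuples if self.matches(template, t)), None)

    def in_(self, template):
        """Destructive read: remove and return the first matching tuple."""
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

    def reaction(self, template, callback):
        """Program the centre's behaviour: run `callback` on a matching out()."""
        self.reactions.append((template, callback))

# Usage: a reaction keeps a running count of temperature readings, so the
# coordination law lives in the medium, not in the coordinated components.
tc = TupleCentre()
tc.out(("count", 0))
def bump(centre, tup):
    n = centre.in_(("count", WILDCARD))[1]
    centre.tuples.append(("count", n + 1))  # append directly: avoid re-firing
tc.reaction(("temp", WILDCARD), bump)
tc.out(("temp", 21.5))
tc.out(("temp", 22.0))
print(tc.rd(("count", WILDCARD)))  # -> ('count', 2)
```

A semantic tuple centre would additionally replace the purely syntactic `matches` with reasoning over a shared domain representation, which is exactly where the thesis extends the model.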
Abstract:
This research was designed to answer the question of which direction the restructuring of financial regulators should take: consolidation or fragmentation. It began by examining the need for financial regulation and its related costs. It then described the types of regulatory structures that exist in the world, surveying the regulatory structures in 15 jurisdictions, comparing them and discussing their strengths and weaknesses. The research analyzed the possible regulatory structures using three methodological tools: game theory, institutional design, and network effects. The incentives for regulatory action were examined in Chapter Four using game-theoretic concepts. This chapter predicted how two regulators with overlapping supervisory mandates will behave in two different states of the world (where they stand to benefit from regulating and where they stand to lose). The insights derived from the games described in this chapter were then used to analyze the different supervisory models that exist in the world. The problem of information flow was discussed in Chapter Five using tools from institutional design. The idea is based on the need for the right kind of information to reach the decision maker in the shortest time possible in order to predict, mitigate or stop a financial crisis. Network effects and congestion in the context of financial regulation were discussed in Chapter Six, which applied the general literature on network effects in an attempt to conclude whether consolidating financial regulatory standards on a global level might also yield other positive network effects. Returning to the main research question, this research concluded that in general the fragmented model should be preferred to the consolidated model in most cases, as it allows for greater diversity and information flow.
However, in cases in which close cooperation between two authorities is essential, the consolidated model should be used.
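The two-regulator interaction examined in Chapter Four can be illustrated with a toy normal-form game: each regulator with an overlapping mandate chooses whether to act, and the equilibrium flips between the two states of the world the text mentions. The payoff numbers below are invented purely for illustration; only the structure follows the abstract:

```python
# Toy game: two regulators with overlapping mandates choose Act or Abstain.
# Payoffs are hypothetical; they merely encode "acting is beneficial" vs
# "acting is costly" (credit-claiming vs blame-avoidance states of the world).
from itertools import product

def pure_nash(payoffs, strategies=("Act", "Abstain")):
    """Pure-strategy Nash equilibria of a 2-player game,
    given payoffs[(s1, s2)] = (u1, u2)."""
    equilibria = []
    for s1, s2 in product(strategies, repeat=2):
        u1, u2 = payoffs[(s1, s2)]
        best1 = all(u1 >= payoffs[(a, s2)][0] for a in strategies)
        best2 = all(u2 >= payoffs[(s1, b)][1] for b in strategies)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

# State 1: regulators benefit from acting -> both regulate (possible overlap).
benefit = {("Act", "Act"): (2, 2), ("Act", "Abstain"): (3, 0),
           ("Abstain", "Act"): (0, 3), ("Abstain", "Abstain"): (1, 1)}
# State 2: acting is costly -> both pass the buck (possible under-regulation).
cost = {("Act", "Act"): (-2, -2), ("Act", "Abstain"): (-3, 1),
        ("Abstain", "Act"): (1, -3), ("Abstain", "Abstain"): (0, 0)}

print(pure_nash(benefit))  # [('Act', 'Act')]
print(pure_nash(cost))     # [('Abstain', 'Abstain')]
```

The point of such a sketch is only that overlapping mandates can produce either duplicated effort or mutual inaction depending on the state of the world, which is the tension the chapter analyzes.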
Abstract:
Three-month anticoagulation is recommended to treat provoked or first distal deep-vein thrombosis (DVT), and indefinite-duration anticoagulation should be considered for patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. In the prospective Outpatient Treatment of Deep Vein Thrombosis in Switzerland (OTIS-DVT) Registry of 502 patients with acute objectively confirmed lower extremity DVT (59% provoked or first distal DVT; 41% unprovoked proximal, unprovoked recurrent, or cancer-associated DVT) from 53 private practices and 11 hospitals, we investigated the planned duration of anticoagulation at the time of treatment initiation. The decision to administer limited-duration anticoagulation therapy was made in 343 (68%) patients with a median duration of 107 (interquartile range 91-182) days for provoked or first distal DVT, and 182 (interquartile range 111-184) days for unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. Among patients with provoked or first distal DVT, anticoagulation was recommended for < 3 months in 11%, 3 months in 63%, and for an indefinite period in 26%. Among patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT, anticoagulation was recommended for < 6 months in 22%, 6-12 months in 38%, and for an indefinite period in 40%. Overall, there was more frequent planning of indefinite-duration therapy from hospital physicians as compared with private practice physicians (39% vs. 28%; p=0.019). Considerable inconsistency in planning the duration of anticoagulation therapy mandates an improvement in risk stratification of outpatients with acute DVT.
Abstract:
Municipalities in the United States have for the past two decades initiated two policies to reduce residential solid waste generation by increasing recycling. The first policy, implemented in over 4,000 municipalities in the United States, requires households to pay a fee for each unit of garbage presented at the curb for collection. The second policy, initiated in 8,875 municipalities, subsidizes household recycling efforts by providing free curbside collection of certain recyclable materials. Both initiatives serve as examples of the incentive-based environmental policies favored by many economists. But before economists can celebrate this widespread adoption of incentive-based environmental policies, further examination reveals that potentially inefficient command-and-control policies have been more instrumental in promoting recycling than might be commonly known. This article examines the empirical lessons gained from studying twenty years of solid waste policy in the United States and argues for the replacement of several state recycling mandates with a system of state and/or national landfill taxes.
Abstract:
Decompressive craniectomy (DC) due to intractably elevated intracranial pressure mandates later cranioplasty (CP). However, the optimal timing of CP remains controversial. We therefore analyzed our prospectively conducted database concerning the timing of CP and associated post-operative complications. From October 1999 to August 2011, 280 cranioplasty procedures were performed at the authors' institution. Patients were stratified into two groups according to the time from DC to cranioplasty (early, ≤2 months, and late, >2 months). Patient characteristics, timing of CP, and CP-related complications were analyzed. Overall CP was performed early in 19% and late in 81%. The overall complication rate was 16.4%. Complications after CP included epidural or subdural hematoma (6%), wound healing disturbance (5.7%), abscess (1.4%), hygroma (1.1%), cerebrospinal fluid fistula (1.1%), and other (1.1%). Patients who underwent early CP suffered significantly more often from complications compared to patients who underwent late CP (25.9% versus 14.2%; p=0.04). Patients with ventriculoperitoneal (VP) shunt had a significantly higher rate of complications after CP compared to patients without VP shunt (p=0.007). On multivariate analysis, early CP, the presence of a VP shunt, and intracerebral hemorrhage as underlying pathology for DC, were significant predictors of post-operative complications after CP. We provide detailed data on surgical timing and complications for cranioplasty after DC. The present data suggest that patients who undergo late CP might benefit from a lower complication rate. This might influence future surgical decision making regarding optimal timing of cranioplasty.
Abstract:
Consultation is promoted throughout the school psychology literature as a best practice in service delivery. This method has numerous benefits, including being able to work with more students at one time, providing practitioners with preventative rather than strictly reactive strategies, and helping school professionals meet state and federal education mandates and initiatives. Despite the benefits of consultation, teachers are sometimes resistant to this process. This research studies variables hypothesized to lead to resistance (Gonzalez, Nelson, Gutkin, & Shwery, 2004) and attempts to distinguish differences between school levels (elementary, middle and high school) with respect to the role played by these variables, and to determine if the model used to identify students for special education services has an influence on resistance factors. Twenty-six teachers in elementary and middle schools responded to a demographic questionnaire and a survey developed by Gonzalez et al. (2004). This survey measures eight variables related to resistance to consultation. No high school teachers responded to the request to participate. Results of an analysis of variance indicated a significant difference on the teaching efficacy subscale, with elementary teachers reporting more efficacy in teaching than middle school teachers. Results also indicate a significant difference in classroom management efficacy, with teachers who work in schools that identify students according to a Response to Intervention model reporting higher classroom management efficacy than teachers who work in schools that identify students according to a combined refer-test-place/RtI model. Implications, limitations and directions for future research are discussed.
Abstract:
A considerable portion of public lands in the United States is at risk of uncharacteristically severe wildfires due to a history of fire suppression. Wildfires already have detrimental impacts on the landscape and on communities in the wildland-urban interface (WUI) due to unnatural and overstocked forests. Strategies to mitigate wildfire risk include mechanical thinning and prescribed burning in high-risk areas. The material removed is often of little or no economic value. Woody biomass utilization (WBU) could offset the costs of hazardous fuel treatments if the removed material could be used for wood products, heat, or electricity production. However, barriers due to transportation costs, removal costs, and physical constraints (such as steep slopes) hinder woody biomass utilization. Various federal and state policies attempt to overcome these barriers. WBU has the potential to aid in wildfire mitigation and meet growing state mandates for renewable energy. This research utilizes interview data from individuals involved with on-the-ground woody biomass removal and utilization to determine how federal and state policies influence woody biomass utilization. Results suggest that there is not one over-arching policy that hinders or promotes woody biomass utilization; rather, WBU is hindered by organizational constraints related to the time, cost, and quality of land management agencies' actions. However, the use of stewardship contracting (a hybrid timber sale and service contract) shows promise for increased WBU, especially in states with favorable tax policies and renewable energy mandates. Policy recommendations to promote WBU include renewal of stewardship contracting legislation and a re-evaluation of the land cover types suited for WBU. Potential future policies to consider include the indirect role of carbon dioxide emission reduction activities in promoting wood energy and the future impacts of air quality regulations.
Abstract:
In the U.S., many electric utility companies offer demand-side management (DSM) programs to their customers as ways to save money and energy. However, it is challenging to compare these programs between utility companies throughout the U.S. because of the variability of state energy policies. For example, some states in the U.S. have deregulated electricity markets and others have not. In addition, utility companies within a state differ in ownership and size. This study examines 12 utilities' experiences with DSM programs and compares the programs' annual energy savings results that the selected utilities reported to the Energy Information Administration (EIA). The 2009 EIA data suggest that DSM program effectiveness is not significantly affected by electricity market deregulation or utility ownership. However, DSM programs seem to generally be more effective when administered by utilities located in states with energy savings requirements and DSM program mandates.
Abstract:
The current climate of increasing performance expectations and diminishing resources, along with innovations in evidence-based practices (EBPs), creates new dilemmas for substance abuse treatment providers, policymakers, funders, and the service delivery system. This paper describes findings from baseline interviews with representatives from 49 state substance abuse authorities (SSAs). Interviews assessed efforts aimed at facilitating EBP adoption in each state and the District of Columbia. Results suggested that SSAs are concentrating more effort on EBP implementation strategies such as education, training, and infrastructure development, and less effort on financial mechanisms, regulations, and accreditation. The majority of SSAs use EBPs as a criterion in their contracts with providers, and just over half reported that EBP use is tied to state funding. To date, Oregon remains the only state with legislation that mandates treatment expenditures for EBPs; North Carolina follows suit with legislation that requires EBP promotion within current resources.
Abstract:
The U.S. Renewable Fuel Standard mandates that by 2022, 36 billion gallons of renewable fuels must be produced on a yearly basis. Ethanol production is capped at 15 billion gallons, meaning 21 billion gallons must come from different alternative fuel sources. A viable alternative for reaching the remainder of this mandate is iso-butanol. Unlike ethanol, iso-butanol does not phase-separate when mixed with water, meaning it can be transported using traditional pipeline methods. Iso-butanol also has a lower oxygen content by mass, meaning it can displace more petroleum while maintaining the same oxygen concentration in the fuel blend. This research focused on studying the effects of low-level alcohol fuels on marine engine emissions to assess the possibility of using iso-butanol as a replacement for ethanol. Three marine engines were used in this study, representing a wide range of what is currently in service in the United States. Boats powered by two four-stroke engines and one two-stroke engine were tested in the tributaries of the Chesapeake Bay, near Annapolis, Maryland, over the course of two rounds of weeklong testing in May and September. The engines were tested using a standard test cycle, and emissions were sampled using constant volume sampling techniques. Specific emissions for the two-stroke and four-stroke engines were compared to the baseline indolene tests. Because of the nature of the field testing, only a limited set of engine parameters was recorded; aside from emissions, the parameters analyzed were the operating relative air-to-fuel ratio and engine speed. Emissions trends from the baseline test to each alcohol fuel for the four-stroke engines were consistent when analyzing a single round of testing. The same trends were not consistent when comparing separate rounds, because of uncontrolled weather conditions and because the four-stroke engines operate without fuel control feedback under full-load conditions.
Emissions trends from the baseline test to each alcohol fuel for the two-stroke engine were consistent for all rounds of testing. This is due to the fact that the engine operates open-loop and does not adjust its fueling when the fuel composition changes. Changes in emissions with respect to the baseline for iso-butanol were consistent with the changes for ethanol. It was determined that iso-butanol would make a viable replacement for ethanol.
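The oxygen-content claim above can be checked with simple molar-mass arithmetic: ethanol (C2H5OH) is about 35% oxygen by mass, iso-butanol (C4H9OH) only about 22%, so a blend needs more iso-butanol to reach a given oxygen fraction, displacing more petroleum. The short sketch below is illustrative only, using standard atomic masses:

```python
# Back-of-the-envelope check: oxygen mass fraction of ethanol vs iso-butanol.
ATOMIC = {"C": 12.011, "H": 1.008, "O": 15.999}

def oxygen_mass_fraction(c, h, o):
    """Oxygen mass fraction of a molecule with formula C_c H_h O_o."""
    total = c * ATOMIC["C"] + h * ATOMIC["H"] + o * ATOMIC["O"]
    return o * ATOMIC["O"] / total

ethanol = oxygen_mass_fraction(2, 6, 1)      # C2H5OH: ~34.7% oxygen by mass
isobutanol = oxygen_mass_fraction(4, 10, 1)  # C4H9OH: ~21.6% oxygen by mass
print(f"ethanol: {ethanol:.1%}, iso-butanol: {isobutanol:.1%}")
```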
Abstract:
BACKGROUND. The high rate of reperfusion injury in clinical lung transplantation mandates significant improvements in lung preservation. Innovations should be validated using standardized, low-cost experimental models. METHODS. The model introduced here is analyzed by comparing global lung function after varying ischemic times (2, 4, 8, 16, and 24 hours). A rat double-lung block is flush-perfused, and the main pulmonary artery and left atrium are connected to the left pulmonary artery and vein of a syngeneic recipient using a T-shaped stent. With pressure side ports and incorporated flow crystals, measurement of vascular resistance and graft oxygenation can be performed. The transplant is ventilated separately, and compliance and resistance are determined. RESULTS. The increase in the ischemic interval from 2 to 24 hours caused an increase in the alveolar-arterial oxygen difference from 220 ± 20 to 600 ± 34 mm Hg, in pulmonary vascular resistance from 198 ± 76 to 638 ± 212 mm Hg·mL⁻¹·min⁻¹, and in resistance to airflow from 274 ± 50 to 712 ± 30 cm H2O/L H2O, and a decrease in pulmonary compliance from 0.4 ± 0.05 to 0.12 ± 0.06 mL/cm H2O. CONCLUSIONS. This in situ, syngeneic rat lung transplantation model offers an alternative to large-animal models for the verification of lung preservation solutions and for the modification of donor or recipient treatment regimens.
Abstract:
Whereas a non-operative approach for hemodynamically stable patients with free intraabdominal fluid in the presence of solid organ injury is generally accepted, the presence of free fluid in the abdomen without evidence of solid organ injury presents a challenge not only for the treating emergency physician but also for the surgeon in charge. Despite recent advances in imaging modalities, with multi-detector computed tomography (CT) (with or without contrast agent) usually the imaging method of choice, diagnosis and interpretation of the results remain difficult. While some studies conclude that CT is highly accurate and relatively specific at diagnosing mesenteric and hollow viscus injury, other studies deem CT to be unreliable. These differences may in part be due to the experience and the interpretation of the radiologist and/or the treating physician or surgeon. A search of the literature has made it apparent that there is no straightforward answer to the question of what to do with patients with free intraabdominal fluid on CT scanning but without signs of solid organ injury. In hemodynamically unstable patients, free intraabdominal fluid in the absence of solid organ injury usually mandates immediate surgical intervention. For patients with blunt abdominal trauma and more than just a trace of free intraabdominal fluid, or for patients with signs of peritonitis, the threshold for surgical exploration, preferably by a laparoscopic approach, should be low. Based on the available information, we aim to provide the reader with an overview of the current literature, with specific emphasis on diagnostic and therapeutic approaches to this problem, and suggest a possible algorithm, which might help with the adequate treatment of such patients.
Abstract:
The bone marrow accommodates hematopoietic stem cells and progenitors. These cells provide an indispensable resource for replenishing the blood constituents throughout an organism's life. A tissue with such a high turnover rate mandates intact cell-cycle checkpoint and apoptotic pathways to avoid inappropriate cell proliferation and ultimately the development of leukemias. p53, a major tumor suppressor, is a transcription factor that regulates the cell cycle and induces apoptosis and senescence. Mice inheriting a hypomorphic p53 allele in the absence of Mdm2, a p53 inhibitor, have elevated p53 cell cycle activity and die by postnatal day 13 due to hematopoietic failure. Hematopoiesis progresses normally during embryogenesis until it moves to the bone marrow in late development. Increased oxidative stress in the bone marrow compartment postnatally is the impediment to normal hematopoiesis, via activation of p53. p53 in turn stimulates the generation of more reactive oxygen species and depletes bone marrow cellularity. p53 also induces various defects in the hematopoietic niche by increasing mesenchymal lineage populations and their differentiation. The hematopoietic defects are rescued with antioxidants or when cells are cultured at low oxygen levels. Deletion of p16 partially rescues bone marrow cellularity and progenitors via a p53-independent pathway. Thus, although p53 is required to inhibit tumorigenesis, Mdm2 is required to control ROS-induced p53 levels for sustainable hematopoiesis and survival during homeostasis.