866 results for Limitation of Actions
Abstract:
Aim of the paper: The purpose is to survey existing practices and to model the impacts of climate change (CC) on fiscal spending and revenues, responsibilities and opportunities, and on budget balance and debt. Methodology of the paper: The methodology will distinguish the fiscal costs of mitigation and adaptation, as well as direct and indirect costs. It will also introduce cost-benefit analyses to evaluate policy makers' propensity for action or passivity. Several scenarios will be drafted to explore the different outcomes. The scenarios will cover possible losses in the natural and built environment and in resources. Impacts on the public budget are based on damage to income opportunities and to capital, wealth, and natural assets. A list of actions will identify where fiscal correction of market failures will be necessary. Findings: The paper will summarize and synthesize estimation models of CC impacts on public finances, and draw lessons from existing and past budgeting practices for mitigation. The model will be based on damages (and possibly benefits) from CC, adjusted by scenario probabilities and by the policy-making propensity for action. Findings will also cover how the fiscal costs can be funded. Practical use, value added: From the synthesized model, the fiscal cost of mitigation and adaptation can be estimated for developed, emerging, and developing countries alike. The paper will also address the challenge of harmonizing fiscal and developmental sustainability.
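The scenario-weighted model described above can be sketched in a few lines. This is a purely illustrative toy, not the paper's actual model: the scenario figures, the mitigation share, and the assumption that action averts half of expected damage are all hypothetical.

```python
# Illustrative sketch only: expected fiscal cost of climate change as a
# probability-weighted sum over scenarios, split into mitigation spending
# and residual (adaptation) cost, scaled by a policy-action propensity
# in [0, 1]. All figures and parameters below are hypothetical.

scenarios = [
    # (probability, direct damage, indirect damage), hypothetical billions
    (0.5, 10.0, 4.0),   # mild scenario
    (0.3, 25.0, 10.0),  # moderate scenario
    (0.2, 60.0, 30.0),  # severe scenario
]

def expected_fiscal_cost(scenarios, action_propensity, mitigation_share=0.4):
    """Probability-weighted damage, with mitigation spending incurred when
    policy makers act and a residual cost that acting partly averts."""
    expected_damage = sum(p * (direct + indirect)
                          for p, direct, indirect in scenarios)
    mitigation = action_propensity * mitigation_share * expected_damage
    # Assumption: full action averts half of the expected damage.
    residual = expected_damage * (1.0 - 0.5 * action_propensity)
    return mitigation + residual

print(expected_fiscal_cost(scenarios, action_propensity=0.8))
```

Comparing the output for `action_propensity=0.0` versus `1.0` is the kind of action-versus-passivity comparison the cost-benefit analysis in the paper would formalize.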
Abstract:
Since the mid-1990s, the United States has experienced a shortage of scientists and engineers, declining numbers of students choosing these fields as majors, and low student success and retention rates in these disciplines. Learning theorists, educational researchers, and practitioners believe that learning environments can be created to improve the number of students who complete courses successfully (Astin, 1993; Magolda & Terenzini, n.d.; O'Banion, 1997). Learning communities do this by providing high expectations, academic and social support, feedback throughout the educational process, and involvement with faculty, other students, and the institution (Ketcheson & Levine, 1999). A program evaluation of an existing learning community of science, mathematics, and engineering majors was conducted to determine the extent to which the program met its goals and was effective from faculty and student perspectives. The program provided laptop computers, peer tutors, supplemental instruction with and without computer software, small class sizes, opportunities for contact with specialists in selected career fields, a resource library, and Peer-Led Team Learning. During the two years the project existed, success, retention, and next-course continuation rates were higher than in traditional courses. Faculty and student interviews indicated many affective accomplishments as well. Success and retention rates for one learning community class (n = 27) and one traditional class (n = 61) in chemistry were collected and compared using Pearson chi-square procedures (p = .05). No statistically significant difference was found between the two groups. Data from an open-ended student survey about how specific elements of their course experiences contributed to success and persistence were analyzed by coding the responses and comparing the learning community and traditional classes.
Substantial differences were found in their perceptions of the lecture, the lab, other supports used for the course, contact with other students, help in reaching their potential, and whether they would recommend the course to others. Because of the small sample size, these differences are reported in descriptive terms.
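The Pearson chi-square comparison described above can be reproduced in miniature. The abstract reports only the group sizes (n = 27 and n = 61), so the pass/fail counts below are hypothetical; the statistic itself is computed by hand.

```python
# Pearson chi-square for an r x c contingency table, computed from scratch.
# The counts are illustrative stand-ins, not the study's data; rows are the
# learning community (n = 27) and traditional (n = 61) classes, columns are
# completed vs. did not complete.

def pearson_chi2(table):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over all cells."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

table = [[20, 7],    # learning community: success, non-success (hypothetical)
         [41, 20]]   # traditional class: success, non-success (hypothetical)
print(round(pearson_chi2(table), 3))  # → 0.414
```

With 1 degree of freedom the critical value at p = .05 is 3.841, so a statistic of this size would, like the study's result, show no significant difference.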
Abstract:
This study describes the case of private higher education in Ohio between 1980 and 2006 using Zumeta's (1996) model of state policy and private higher education. More specifically, it used case study methodology and multiple sources to demonstrate the usefulness of Zumeta's model and to illustrate its limitations. Ohio served as the subject state, and data for 67 private, 4-year, degree-granting, Higher Learning Commission-accredited institutions were collected. Data sources included the National Center for Education Statistics Integrated Postsecondary Education Data System as well as database information and documents from various state agencies in Ohio, including the Ohio Board of Regents. The findings indicated that the general state context for higher education in Ohio during the study period was shaped by deteriorating economic factors, stagnating population growth coupled with a rapidly aging society, fluctuating state income, and increasing expenditures in areas such as corrections, transportation, and social services. Private higher education, however, experienced consistent enrollment growth, an increase in the number of institutions, widening involvement in state-wide planning for higher education, and greater fiscal support from the state in a variety of forms, such as the Ohio Choice Grant. This study also demonstrated that private higher education in Ohio benefited from its inclusion in state-wide planning and from the state's decision to grant aid directly to students. Taken together, the findings supported Zumeta's (1996) classification of Ohio as having a hybrid market-competitive/central-planning policy posture toward private higher education. Furthermore, the study demonstrated that Zumeta's model is a useful tool for both policy makers and researchers seeking to understand a state's relationship to its private higher education sector.
However, this study also demonstrated that Zumeta's model is less useful when applied over an extended time period. Additionally, it identified a further limitation of the model: Zumeta's failure to define "state mandate" and the "level of state mandates" allows for inconsistent analysis of this component.
Abstract:
Patterns of relative nutrient availability in south Florida suggest spatial differences in the importance of nitrogen (N) and phosphorus (P) to benthic primary producers. We conducted a 14-month in situ fertilization experiment to test predictions of N and P limitation in the subtropical nearshore marine waters of the upper Florida Keys. Six sites were divided into two groups (nearshore, offshore) representing the endpoints of an N:P stoichiometric gradient. Twenty-four plots were established at each site, with six replicates of each treatment (+N, +P, +N+P, control), for a total of 144 experimental plots. The responses of benthic communities to N and P enrichment varied appreciably between nearshore and offshore habitats. Offshore seagrass beds were strongly limited by nitrogen, while nearshore beds were affected by both nitrogen and phosphorus. Nutrient addition at offshore sites increased the length and aboveground standing crop of the two seagrasses, Thalassia testudinum and Syringodium filiforme, and the growth rates of T. testudinum. Nutrient addition at nearshore sites increased the relative abundance of macroalgae, epiphytes, and sediment microalgae. N limitation of seagrass in this carbonate system was clearly demonstrated. However, added phosphorus was retained in the system more effectively than N, suggesting that phosphorus might have important long-term effects on these benthic communities. The observed species-specific responses to nutrient enrichment underscore the need to monitor all primary producers when addressing questions of nutrient limitation and eutrophication in seagrass communities.
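The plot arithmetic in the factorial design above (6 sites x 4 treatments x 6 replicates = 144 plots) can be verified with a one-line enumeration; site labels here are placeholders.

```python
# Enumerate the fertilization experiment's factorial layout: 6 sites, the
# 4 treatments named in the abstract, and 6 replicates per treatment.
from itertools import product

sites = [f"site{i}" for i in range(1, 7)]    # 3 nearshore + 3 offshore
treatments = ["N", "P", "N+P", "control"]    # N, P, N and P, unfertilized
replicates = range(1, 7)                     # six replicate plots each

plots = list(product(sites, treatments, replicates))
print(len(plots))  # → 144
```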
Abstract:
The use of computer-assisted instruction (CAI) simulations as an instructional strategy provides nursing students with a critical thinking approach for evaluating risks and benefits and choosing correct alternatives in "safe" patient care situations. It was hypothesized that using CAI simulations during an upper-level nursing review course would have a positive effect on students' posttest scores. Subjects (n = 36) were senior nursing students enrolled in a nursing review course in an undergraduate baccalaureate program. A limitation of the study was the small sample size. The study employed a modified group experimental design using the t test for independent samples. The group that received the CAI simulations during the physiological system review demonstrated a significant increase (p < .01) in posttest score mean compared to the lecture-discussion group. There was no significant difference between high and low clinical grade point average (GPA) students in the CAI and lecture-discussion groups in their posttest score means. However, the score mean differences of the low clinical GPA students showed a greater increase for the CAI group than for the lecture-discussion group. There was no significant difference between the groups in their system content subscore means on the exit examination completed three weeks later. It was concluded that CAI simulations are as effective as lecture-discussion in assisting upper-level students to process information for clinical decision making. CAI simulations can be considered an instructional strategy to supplement or replace lecture content during a review course, allowing more efficient use of faculty time. It is recommended that the study be repeated with a larger sample size. Further investigations are recommended to compare the effectiveness of computer software formats and various instructional strategies for other learning situations and student populations.
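The independent-samples t test used above can be illustrated with a hand-rolled pooled-variance implementation. The posttest scores below are invented for illustration; the study's actual data are not reproduced in the abstract.

```python
# Student's t for two independent samples with pooled variance, the test
# family named in the abstract. Scores are hypothetical stand-ins.
from statistics import mean, variance

def t_independent(a, b):
    """t = (mean_a - mean_b) / sqrt(s_p^2 * (1/n_a + 1/n_b)),
    where s_p^2 is the pooled sample variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

cai = [88, 91, 85, 90, 87, 92]       # hypothetical CAI-group posttest scores
lecture = [80, 83, 79, 85, 82, 78]   # hypothetical lecture-discussion scores
print(round(t_independent(cai, lecture), 2))  # → 5.03
```

A t statistic this large on such small groups would fall well past the p < .01 critical value, mirroring the direction of the study's reported result.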
Abstract:
A major portion of hurricane-induced economic loss originates from damage to building structures. This damage is typically grouped into three main categories: exterior, interior, and contents damage. Although the latter two categories in most cases cause more than 50% of the total loss, little has been done to investigate the physical damage process and unveil the interdependence of interior damage parameters. Building interior and contents damage is mainly due to wind-driven rain (WDR) intrusion through building envelope defects, breaches, and other functional openings. The scarcity of research and the resulting knowledge gaps are largely due to the complexity of damage phenomena during hurricanes and the lack of established measurement methodologies to quantify rainwater intrusion. This dissertation focuses on devising methodologies for large-scale experimental simulation of tropical cyclone WDR and for measuring rainwater intrusion, in order to acquire benchmark test-based data for the development of a hurricane-induced building interior and contents damage model. Target WDR parameters derived from tropical cyclone rainfall data were used to simulate WDR characteristics at the Wall of Wind (WOW) facility. The proposed WDR simulation methodology presents detailed procedures for selecting the type and number of nozzles, formulated based on a tropical cyclone WDR study. The simulated WDR was then used to experimentally investigate the mechanisms of rainwater deposition and intrusion in buildings. A test-based dataset of two rainwater intrusion parameters that quantify the distribution of directly impinging raindrops and surface runoff rainwater over the building surface, the rain admittance factor (RAF) and the surface runoff coefficient (SRC), respectively, was developed using common shapes of low-rise buildings.
The dataset was applied to a newly formulated WDR estimation model to predict the volume of rainwater ingress through envelope openings such as wall and roof deck breaches and window sill cracks. Validation of the new model against experimental data indicated reasonable estimation of rainwater ingress through envelope defects and breaches during tropical cyclones. The WDR estimation model and the experimental dataset of WDR parameters developed in this dissertation can be used to enhance the prediction capabilities of existing interior damage models such as the Florida Public Hurricane Loss Model (FPHLM).
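To make the role of the two measured parameters concrete, here is a deliberately simplified sketch of how an RAF and an SRC could feed an ingress estimate. The functional form and every number below are illustrative assumptions, not the dissertation's actual estimation model.

```python
# Simplified illustration (not the dissertation's model): the rain admittance
# factor (RAF) scales directly impinging wind-driven rain, and the surface
# runoff coefficient (SRC) scales runoff water reaching an opening.

def rain_ingress(raf, src, wdr_rate, runoff_rate, opening_area, hours):
    """Estimated rainwater ingress (litres) through one envelope opening.

    wdr_rate, runoff_rate: wind-driven rain / runoff fluxes in L/m^2/h
    opening_area: breach or crack area in m^2; hours: storm duration.
    """
    direct = raf * wdr_rate * opening_area * hours    # impinging raindrops
    runoff = src * runoff_rate * opening_area * hours  # surface runoff water
    return direct + runoff

# Hypothetical window-sill crack during a 6-hour storm
print(round(rain_ingress(raf=0.6, src=0.3, wdr_rate=20.0,
                         runoff_rate=35.0, opening_area=0.01, hours=6.0), 2))
```

In an interior damage model such as the FPHLM, an ingress volume like this would then be mapped to interior and contents loss.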
Abstract:
Distributed generation (DG) from alternative sources and smart grid technologies represent good solutions for meeting increasing energy demand. Deployment of these DG assets requires solutions for the new technical challenges that accompany their integration and interconnection into operational power systems. A DG infrastructure comprising alternative energy sources in addition to conventional sources was developed as a test bed. The test bed is operated by synchronizing wind, photovoltaic, fuel cell, micro-generator, and energy storage assets, in addition to standard AC generators. The connectivity of these DG assets was tested for viability and for their operational characteristics. Control and communication layers for dynamic operation were developed to improve the connectivity of alternative sources to the power system. A real-time application for the operation of alternative sources in microgrids was developed. A multi-agent approach is utilized to improve stability, and sequences of actions for black start were implemented. Experiments on control and stability issues related to dynamic operation under load conditions were conducted and verified.
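The black-start sequencing mentioned above amounts to ordering asset start-up so that each source comes online only after the sources it depends on are energized. The sketch below shows that idea as a dependency ordering; the asset names and their prerequisite relationships are invented for illustration and are not the test bed's actual sequence.

```python
# Hedged sketch of black-start action sequencing: bring each DG asset online
# only after its prerequisites are energized. Asset names and dependencies
# are illustrative, not the test bed's actual configuration.

BLACK_START_PREREQS = {
    "battery_storage": [],                              # grid-forming source
    "micro_generator": ["battery_storage"],
    "fuel_cell": ["battery_storage"],
    "wind": ["battery_storage", "micro_generator"],
    "photovoltaic": ["battery_storage", "micro_generator"],
}

def black_start_order(prereqs):
    """Topologically order assets so every prerequisite energizes first."""
    order, done = [], set()
    while len(order) < len(prereqs):
        for asset, deps in prereqs.items():
            if asset not in done and all(d in done for d in deps):
                order.append(asset)
                done.add(asset)
    return order

print(black_start_order(BLACK_START_PREREQS))
```

In the multi-agent setting, each agent would perform its own step of such a sequence after confirming its prerequisites over the communication layer.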
Abstract:
The objective of this study was to characterize the laboratory performance of traditional hot mix asphalt (HMA) mixtures incorporating high reclaimed asphalt pavement (RAP) content and waste tire crumb rubber (CR) through their fundamental engineering properties. The nominal maximum aggregate size chosen for this research was 12 mm (considering the limitation on aggregate size for the surface layer), and both coarse and fine aggregates commonly used in Italy were examined and analyzed. RAP plays an important role in reducing production costs and making pavements more environmentally sustainable by replacing virgin materials in HMA. In particular, this study used 30% RAP content (25% fine-aggregate RAP and 5% coarse-aggregate RAP) and 1% CR additive by total weight of aggregates in the mix design. The aggregates, RAP, and CR were blended with different amounts of unmodified binder through dry processes. The main purposes of this study were to investigate the feasibility of using RAP and CR in dense-graded HMA and to compare the performance of a rejuvenator in RAP with that of CR. In addition, the engineering analyses allowed comparison of the fundamental Indirect Tensile Strength (ITS) values of the dense-graded HMA, as well as mechanical characteristics in terms of Indirect Tensile Stiffness Modulus (ITSM). To obtain an extended comparable dataset, four groups of mixtures were investigated experimentally: a conventional mixture with only virgin aggregates (DV), a mixture with RAP (DR), a mixture with RAP and rejuvenator (DRR), and a mixture with RAP, rejuvenator, and CR (DRRCr). The results indicated that the mixtures with RAP and CR had higher stiffness and lower thermal sensitivity, while the mixture with only virgin aggregates had very low values in comparison.
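The ITS values compared above are conventionally computed from the peak load and specimen geometry in the indirect tensile test as ITS = 2P / (pi * D * t). The specimen numbers below are illustrative, not the study's data.

```python
# Standard indirect tensile strength formula used in ITS testing of HMA:
# ITS = 2P / (pi * D * t), with P the peak load, D the specimen diameter,
# and t its thickness. Example values are hypothetical.
import math

def indirect_tensile_strength(peak_load_n, diameter_mm, thickness_mm):
    """ITS in MPa from peak load (N) and specimen dimensions (mm)."""
    return 2 * peak_load_n / (math.pi * diameter_mm * thickness_mm)

# Hypothetical specimen: 15 kN peak load, 100 mm diameter, 60 mm thickness
print(round(indirect_tensile_strength(15_000, 100, 60), 3))  # → 1.592
```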
Abstract:
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire the extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.
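The compressive-sensing principle behind both the coded imagers and the acoustic sensor can be illustrated in miniature: a sparse signal is recovered from fewer multiplexed measurements than signal samples. The sketch below uses a generic iterative soft-thresholding (ISTA) recovery with an illustrative random code matrix; the sizes and parameters are not taken from the dissertation.

```python
# Minimal compressive-sensing demonstration: recover a k-sparse signal x
# from m < n coded measurements y = A x via ISTA. Sizes, sparsity, and the
# random multiplexing matrix are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3                      # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [1.5, -2.0, 1.0]

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random multiplexing code
y = A @ x_true                                  # compressed measurements

# ISTA: gradient step on ||y - Ax||^2, then soft-threshold to enforce sparsity
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
lam = 0.01                                      # small sparsity penalty
for _ in range(3000):
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print(np.linalg.norm(x - x_true))  # reconstruction error
```

The same template applies whether `A` models a coded aperture, a temporal modulation sequence, or a metamaterial-shaped acoustic response.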
Abstract:
Organic matter contained in surface sediments from four regions on the western Portuguese shelf, which are influenced by coastal upwelling and fluvial input, was analysed with respect to elemental organic carbon (Corg) and nitrogen (Ntotal) content and stable carbon and nitrogen isotope ratios (δ13Corg, δ15N). Corg/Ntotal weight ratios and δ13Corg values are interpreted in terms of terrigenous or marine organic matter sources, supported by CaCO3 content. Organic matter in the shelf sediments is mainly of marine origin, with increasing terrigenous components only close to rivers and estuaries. In the northern shelf region the data indicate significant terrigenous supply by the Douro River. North of the Nazaré Canyon, organic matter composition implies a mainly marine origin, with a higher terrestrial influence close to the canyon head. Organic matter composition in the central shelf region, which is dominated by the Tagus Estuary and the Tagus prodelta, reveals a change from a continental-type signature within the estuary to a more marine-type signature further to the west and south of the estuary mouth. In the southern region near Cape Sines, the geochemical properties clearly reflect the marine origin of the sedimentary organic matter. Sedimentary δ15N values are interpreted to reflect various degrees of assimilation of seasonally upwelled nitrate, in relation to the upwelling centres. In the estuarine environment, inputs of agriculturally influenced dissolved inorganic nitrogen are reflected in the sediments. No evidence for N2 fixation or denitrification is found. On the central shelf north of the Nazaré Canyon, sedimentary δ15N values are close to marine δ15NO3- and thus indicate complete NO3- assimilation and N limitation of marine production. Light δ15N values in distal sediments off the Douro River mouth and in samples south of Cape Sines reflect high NO3- supply and close proximity to the seasonal upwelling centres.
In particular, light δ15N values in sediments from the Sines region reflect stronger upwelling further south.
Abstract:
Parameters characterizing the provision of the phytoplankton community with inorganic nitrogen compounds in the western Black Sea in April 1993 are analyzed, specifically the dependence of nitrate and ammonium uptake rates by microplankton on substrate concentration, the diurnal dynamics of mineral nitrogen assimilation, f-ratio values, and the proportions of carbon and nitrogen fluxes. In most cases, all the parameters describing the degree to which phytoplankton are provided with mineral nitrogen vary unidirectionally, both at the surface and throughout the photosynthesis zone. Individual areas of the relatively small region studied differed markedly in the level of provision of algae with inorganic nitrogen compounds, ranging from complete saturation to a high degree of limitation of phytoplankton development due to nitrogen deficiency in the environment. The results obtained allow the provision of Black Sea phytoplankton with nitrogen to be estimated in terms of the limitation of the uptake rates of its inorganic compounds.
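The f-ratio mentioned above is the fraction of inorganic nitrogen uptake fuelled by nitrate ("new" nitrogen) relative to total nitrate plus ammonium uptake. A minimal computation, with hypothetical uptake rates:

```python
# f-ratio = rho_NO3 / (rho_NO3 + rho_NH4): the share of "new" (nitrate-based)
# production in total inorganic nitrogen uptake. Rates below are hypothetical.

def f_ratio(nitrate_uptake, ammonium_uptake):
    """High f implies ample nitrate supply; low f implies uptake dominated
    by regenerated ammonium."""
    return nitrate_uptake / (nitrate_uptake + ammonium_uptake)

print(round(f_ratio(0.12, 0.36), 2))  # → 0.25, regeneration-dominated uptake
```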
Abstract:
The ongoing process of ocean acidification already affects marine life and, according to the concept of oxygen- and capacity-limited thermal tolerance (OCLTT), these effects may be exacerbated at the borders of the thermal tolerance window. We studied the effects of elevated CO2 concentrations on the clapping performance and energy metabolism of the commercially important scallop Pecten maximus. Individuals were exposed for at least 30 days to 4°C (winter) or 10°C (spring/summer) at either ambient (0.04 kPa, normocapnia) or predicted future PCO2 levels (0.11 kPa, hypercapnia). The cold-exposed (4°C) groups revealed thermal stress exacerbated by elevated PCO2, indicated by high overall mortality that increased from 55% under normocapnia to 90% under hypercapnia; we therefore excluded the 4°C groups from further experimentation. Scallops at 10°C showed impaired clapping performance following hypercapnic exposure: force production was significantly reduced, although the number of claps was unchanged between normocapnia- and hypercapnia-exposed scallops. The difference between maximal and resting metabolic rate (aerobic scope) of the hypercapnic scallops was significantly reduced compared to normocapnic animals, indicating a reduction in net aerobic scope. Our data confirm that ocean acidification narrows the thermal tolerance range of scallops, resulting in elevated vulnerability to temperature extremes, and impairs the animals' performance capacity, with potentially detrimental consequences for their fitness and survival in the ocean of tomorrow.
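Aerobic scope, as used above, is simply the difference between maximal and resting metabolic rate. A tiny computation with hypothetical oxygen-consumption values illustrates the reported direction of the CO2 effect:

```python
# Net aerobic scope = maximal minus resting metabolic rate. The MO2 values
# below are hypothetical, chosen only to mirror the reported pattern of a
# reduced scope under hypercapnia.

def aerobic_scope(max_mo2, resting_mo2):
    """Net aerobic scope (e.g., in umol O2 per g per h)."""
    return max_mo2 - resting_mo2

normocapnic = aerobic_scope(max_mo2=8.0, resting_mo2=2.0)   # → 6.0
hypercapnic = aerobic_scope(max_mo2=6.5, resting_mo2=2.5)   # → 4.0
print(normocapnic, hypercapnic)  # scope narrows under elevated CO2
```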
Abstract:
Levinas's reflections arose as a critique of traditional philosophy which, because it was based on presence and identity, leads to the exclusion of the other. Instead of an onto-logical thought, the Lithuanian philosopher proposes that the ipseity of the human being be constituted by alterity, and that it be so ethically, because the subject is sub-ject, that is, that which upholds: responsibility. In an attempt to push the obligatory attention to the otherness of the other even further, Derrida would develop a radical critique of the Levinasian posture. Deconstructing every trace of ipseity and sovereignty in the relationship with the other, our reading of Derrida's work opts for a non-definable understanding of the human. That is why every de-limitation of an ethical field as properly human implies a brutal violence that the Levinasian humanism of the other sought to exceed.
Abstract:
This paper examines which types of actions undertaken by patent holders have been considered abusive in the framework of French and Belgian patent litigation. Particular attention is given to the principle of the prohibition of "abuse of rights" (AoR). In the jurisdictions under scrutiny, the principle of AoR is essentially a jurisprudential construction for cases in which judges faced a particular set of circumstances for which no codified rules were available. To investigate how judges deal with the prohibition of AoR in patent litigation, and taking into account the jurisprudential nature of the principle, an in-depth comparative case law analysis was conducted. Although the number of cases in which patent holders have been sanctioned for such abuses is not large, these cases provide sufficient indications of what Belgian and French courts understand to constitute an abuse of patent rights. From this comparative analysis, useful lessons can be drawn for the interpretation of the ambiguous notion of 'abuse' from a broader perspective.
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many respects. The resource limitations of sensor nodes and the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software for a WSN. Thus, more research is needed on designing, implementing, and maintaining software for WSNs. This thesis contributes to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. First, we present a programming model and software architecture for describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimizing applications to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components, and supporting tools are also included to help developers design, implement, optimize, and test WSN software. Finally, we evaluate and critically assess the proposed concepts, supported by two case studies.
The first case study, a framework evaluation, assesses the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability achieved by developing applications at a high level of abstraction, and the estimated overhead of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation, and optimization of a real-world application named TempSense, in which a sensor network monitors the temperature within an area.
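The hardware-independence idea at the heart of the thesis can be sketched as a small interface-plus-driver split: application logic is written against an abstract sensing interface, and a per-platform driver supplies the hardware-specific part. The names below are illustrative, not the thesis's actual API.

```python
# Hedged sketch of platform-independent WSN application code: the TempSense-
# style logic depends only on an abstract sensor interface; a per-platform
# driver (here a simulated stand-in) provides the hardware binding.
from abc import ABC, abstractmethod

class TemperatureSensor(ABC):
    """Hardware-independent sensor interface (illustrative)."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class SimulatedSensor(TemperatureSensor):
    """Stand-in driver; a port to a real node OS would replace this class."""
    def read_celsius(self) -> float:
        return 23.5

def temp_alarm(sensor: TemperatureSensor, threshold: float) -> bool:
    """Application logic: raise an alarm when the threshold is exceeded."""
    return sensor.read_celsius() > threshold

print(temp_alarm(SimulatedSensor(), threshold=30.0))  # → False
```

In an MDA workflow, code like `temp_alarm` would live in the platform-independent model, while generated code binds `TemperatureSensor` to each target platform.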