Abstract:
Improvements in on-farm water and soil fertility management through water harvesting may prove key to upgrading smallholder farming systems in dry sub-humid and semi-arid sub-Saharan Africa (SSA). Current yield levels are usually less than 1 t ha-1, i.e., 3-5 times lower than the potential levels obtained by commercial farmers and researchers under similar agro-hydrological conditions. The low yields are ascribed to poor crop water availability due to variable rainfall, losses in the on-farm water balance, and inherently low soil nutrient levels. Meeting increased food demand with less water and land in the region requires farming systems that deliver more yield per unit of water and/or land area. This thesis presents the results of a water harvesting project aiming to upgrade currently practised water management for maize (Zea mays L.) in semi-arid SSA. The objectives were to a) quantify dry spell occurrence and its potential impact in currently practised smallholder grain production systems, b) test the agro-hydrological viability of, and compare maize yields in, an on-farm experiment using combinations of supplemental irrigation (SI) and fertilizer for maize, and c) estimate long-term changes in water balance and grain yields of a system with SI compared to farmers' currently practised in-situ water harvesting. Water balance changes and crop growth were simulated over a 20-year period with the models MAIZE1&2. Dry spell analyses showed that potentially yield-limiting dry spells occurred in at least 75% of seasons at two locations in semi-arid East Africa during a 20-year period. Dry spells were more frequent for crops cultivated on soil with low water-holding capacity than on soil with high water-holding capacity. The analysis indicated large on-farm water losses through deep percolation and run-off during seasons despite seasonal crop water deficits. An on-farm experiment was set up during 1998-2001 in Machakos district, semi-arid Kenya.
Surface run-off was collected and stored in a 300 m3 earth dam. Gravity-fed supplemental irrigation was applied to a maize field downstream of the dam. Combinations of no irrigation (NI), SI and 3 levels of N fertilizer (0, 30, 80 kg N ha-1) were applied. Over 5 seasons with rainfall ranging from 200 to 550 mm, the crop with SI and low nitrogen fertilizer gave 40% higher yields (**) than the farmers' conventional in-situ water harvesting system. Adding only SI or only low nitrogen did not result in significantly different yields. Accounting for the actual ability of a storage system and SI to mitigate dry spells, it was estimated that a farmer would see economic returns (after deduction of household consumption) between years 2 and 7 after investment in dam construction, depending on the dam sealant and labour cost used. Simulations of maize growth and site water balance showed that SI increased annual grain yield by 35% as a result of timely applications. Field water balance changes in actual evapotranspiration (ETa) and deep percolation were insignificant with SI, although the absolute amount of ETa increased by 30 mm y-1 for the crop with SI compared to NI. The dam water balance showed that 30% of the harvested water was productively withdrawn as SI; large losses occurred from the dam through seepage and spill-flow. Water productivity (WP, of ETa) for maize with SI averaged 1,796 m3 per ton of grain, versus 2,254 m3 per ton for maize without SI, i.e., water use per ton of grain fell by about 20%, equivalent to a WP improvement of roughly 25%. The water harvesting system for supplemental irrigation of maize was shown to be both biophysically and economically viable. However, adoption by farmers will depend on other factors, including investment capacity, know-how and legislative possibilities. The viability of wider water harvesting implementation at catchment scale needs to be assessed so that other downstream uses of water remain uncompromised.
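The water-productivity figures reported above can be checked with a little arithmetic; the sketch below (function and variable names are ours, not the thesis's) reproduces the roughly 25% relative change between the two systems:

```python
# Illustrative check of the water-productivity figures reported above
# (1,796 vs 2,254 m3 of ETa per ton of grain).

def water_use_m3_per_ton(eta_m3: float, grain_tons: float) -> float:
    """Water use per unit of grain: cubic metres of ETa per ton."""
    return eta_m3 / grain_tons

wp_si = 1796.0   # m3 per ton, maize with supplemental irrigation
wp_ni = 2254.0   # m3 per ton, maize without SI

# Expressed as grain per unit of water, SI improves productivity by ~25%
improvement = wp_ni / wp_si - 1.0
print(f"WP improvement with SI: {improvement:.1%}")  # ~25.5%
```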
Abstract:
[ES] Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
[EN] In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.
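For reference, the energy minimized by the Brox et al. (2004) method combines the two constancy assumptions with the robust regularization described above; in the usual notation (our transcription, with w = (u, v) the flow and Ψ(s²) = √(s² + ε²) the robust L1-like penalty):

```latex
E(u, v) = \int_{\Omega} \Psi\!\left( \lvert I(\mathbf{x} + \mathbf{w}) - I(\mathbf{x}) \rvert^2
        + \gamma \,\lvert \nabla I(\mathbf{x} + \mathbf{w}) - \nabla I(\mathbf{x}) \rvert^2 \right) d\mathbf{x}
        + \alpha \int_{\Omega} \Psi\!\left( \lvert \nabla u \rvert^2 + \lvert \nabla v \rvert^2 \right) d\mathbf{x}
```

The first integral enforces brightness and gradient constancy (weighted by γ), while the second is the TV-like smoothness term weighted by α; applying Ψ to both gives the robustness to outliers and the piecewise-smooth flow fields mentioned above.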
Abstract:
Habitat loss and fragmentation play a prominent role in determining the size of plant populations, and can affect plant-pollinator interactions. It is hypothesized that in small plant populations the ability to set seeds can be reduced by limited pollination services, since individuals in small populations may receive fewer visits or visits of lower quality. In this study, I investigated the effect of population size on plant reproductive success and insect visitation in 8 populations of two common species on the island of Lesvos, Greece (Mediterranean Sea), Echium plantagineum and Ballota acetabulosa, and of a rare perennial shrub endemic to north-central Italy, Ononis masquillierii. All three species depend on insect pollinators for sexual reproduction. For each species, pollen limitation was present in all or nearly all populations, but a relationship between pollen limitation and population size was found only in Ononis masquillierii. In Echium plantagineum, however, both open-pollinated and hand cross-pollinated seed sets were significantly related to population size, with small populations comparatively less productive than large ones. Additionally, for this species, livestock grazing intensity was greater in small populations and sparse patches, and had a negative influence on the productivity of the remnant plants. Both Echium plantagineum and Ballota acetabulosa attracted a great number of insects representing a wide spectrum of pollinators, and can therefore be considered generalist species. For Ballota acetabulosa, the most important pollinators were megachilid female bees, and insect diversity did not decrease with decreasing plant population size. By contrast, Ononis masquillierii plants generally received few visits, with flowers specialized on small bees (Lasioglossum spp.), the most important insect guild.
In Echium plantagineum and Ballota acetabulosa, plants in small and large populations received the same number of visits per flower, and no differences in the number of flowers visited per plant were detected. On the contrary, large Ononis populations supported higher numbers of pollinators than small ones. At patch level, high Echium flower density was associated with more and higher-quality pollinators. My results indicate that, in Echium plantagineum and Ballota acetabulosa, small populations were not subject to poorer pollination services than large ones, and suggest that grazing and resource limitation could have a major impact on population fitness in Echium plantagineum. The absence of any size effects in these two species can be explained in the light of their high local abundance, wide habitat range, and ability to compete with other co-flowering species for pollinators. By contrast, size represents a key characteristic for both pollination and reproduction in Ononis masquillierii populations, as an increase in size could mitigate the negative effects of the species' disadvantageous reproductive traits. Finally, the widespread occurrence of pollen limitation in the three species may be the result of 1) an ongoing weakening or disruption of plant-pollinator interactions derived from ecological perturbations, 2) an adaptive equilibrium in response to stochastic processes, and 3) the presence of unfavourable reproductive traits (for Ononis masquillierii).
Abstract:
Subduction zones, where friction between oceanic and continental plates causes strong seismicity, are the most common settings for tsunamigenic earthquakes. The topics and methodologies discussed in this thesis focus on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture and the slip distribution along the fault area, and by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understanding the generation of the consequent tsunami and thus to mitigating the risk along the coasts. The typical way to gather information on the source process is to invert the available geophysical data. Tsunami data, moreover, are useful to constrain the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data are not able to constrain. In this thesis I have discussed the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I have presented the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1). In this study the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data constrained the slip distribution much better than separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault.
Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, highlighting the crucial role that rupture depth plays in controlling tsunamigenesis. Finally, we presented a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we presented a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source zone rigidity is important since it may play a significant role in tsunami generation; particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate a significant tsunami. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of jointly inverting different geophysical data to determine the rupture characteristics. The results shown here have important implications for the implementation of new tsunami warning systems – particularly in the near field – for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
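The inversions above reduce, in their linear form, to a system d = G m relating slip on subfaults to observations through Green's functions. A minimal damped least-squares sketch (with entirely synthetic Green's functions and data, not those of the thesis) illustrates the idea:

```python
import numpy as np

# Toy damped least-squares slip inversion: d = G m, where each column of G
# is the response of the observations (tsunami waveforms, GPS, ...) to unit
# slip on one subfault. All numbers here are synthetic.

rng = np.random.default_rng(0)
n_obs, n_sub = 40, 6
G = rng.normal(size=(n_obs, n_sub))                 # Green's function matrix
m_true = np.array([0.0, 1.5, 3.0, 2.0, 0.5, 0.0])   # "true" slip (m)
d = G @ m_true + 0.01 * rng.normal(size=n_obs)      # noisy observations

# Tikhonov damping stabilizes the inversion (eps controls regularization)
eps = 0.1
A = np.vstack([G, eps * np.eye(n_sub)])
b = np.concatenate([d, np.zeros(n_sub)])
m_est = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.round(m_est, 2))  # close to m_true for this low noise level
```

Joint inversion of several datasets amounts to stacking their G matrices and data vectors (with relative weights), which is why it constrains the slip better than any single dataset alone.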
Abstract:
One of the most important problems in inertial confinement fusion is how to mitigate the onset of the Rayleigh-Taylor instability, which arises in the ablation front during compression. This thesis studies in detail the possibility of using for this purpose the well-known mechanism of dynamic stabilization, already applied to other dynamical systems such as the inverted pendulum. In this context, a periodic acceleration superposed on the background gravity generates a vertical vibration of the ablation front itself. The effects of different driving modulations (Dirac deltas and square waves) are analyzed from a theoretical point of view, with a focus on the stabilization of ion-beam-driven ablation fronts, and a comparison is made in search of an optimum.
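The inverted-pendulum analogy can be made concrete. A small perturbation ξ of a Rayleigh-Taylor-unstable interface with wavenumber k grows as ξ̈ = g k ξ; replacing the constant gravity with a periodically modulated one turns this into a Mathieu-type equation, schematically (our notation, not necessarily the thesis's, with a and Ω the driving amplitude and frequency):

```latex
\ddot{\xi} - k \left[ g + a\,\Omega^2 \cos(\Omega t) \right] \xi = 0
```

For the classical Kapitza pendulum of length l, high-frequency averaging of the analogous equation yields the well-known stabilization criterion a²Ω² > 2gl; the thesis explores the corresponding conditions for ablation fronts under Dirac-delta and square-wave modulations rather than the sinusoidal driving shown here.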
Abstract:
Next-generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, for application domains ranging from entertainment to business. In this context, multicore platforms have proven to be the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of components produces unexpected variations in technological parameters, as well as wear-out characterized by soft and hard errors. Hardware techniques, which lend themselves to design-time application, have been studied with the objective of mitigating these effects, but they are not sufficient; adaptive software techniques are therefore necessary. In this thesis we focus on multicore task allocation strategies that minimize energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming (ILP) formulation which provides the optimal solution but cannot be applied on-line, since its algorithm is too time-consuming; we then propose a sub-optimal two-step technique which can be applied on-line. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next-generation industrial multicore platform. We also face the problem of performance and lifetime degradation. We first focus on embedded multicore platforms and propose an idleness distribution policy that increases expected core lifetimes by duty-cycling their activity; we then investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control core temperature at runtime, with the objective of meeting lifetime constraints without performance loss.
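The variability-aware allocation problem can be illustrated on a toy instance: cores differ in per-task energy and execution time (as process variability would cause), and we seek the minimum-energy assignment that meets a deadline. Brute-force enumeration below stands in for the thesis's ILP formulation; all numbers are invented:

```python
import itertools

# Per-core energy (mJ) and time (ms) for each task; the two cores differ
# because of (hypothetical) process variability.
energy = {"t0": [3.0, 2.0], "t1": [4.0, 5.0], "t2": [2.5, 2.0]}
time   = {"t0": [2.0, 3.0], "t1": [3.0, 2.0], "t2": [1.0, 2.5]}
DEADLINE = 5.0  # ms, per-core makespan constraint
tasks, n_cores = list(energy), 2

best = None
for alloc in itertools.product(range(n_cores), repeat=len(tasks)):
    core_time = [0.0] * n_cores
    total_e = 0.0
    for t, c in zip(tasks, alloc):
        core_time[c] += time[t][c]
        total_e += energy[t][c]
    if max(core_time) <= DEADLINE:            # performance constraint
        if best is None or total_e < best[0]:  # energy objective
            best = (total_e, alloc)

print(best)  # (8.5, (1, 0, 0)): t0 on core 1, t1 and t2 on core 0
```

Enumeration is exponential in the number of tasks, which is precisely why the thesis resorts to an ILP solver off-line and a cheaper two-step heuristic on-line.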
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical test cases, to assess the reliability and accuracy of the different model versions. The most effective versions were then applied to real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
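For reference, the diffusive (zero-inertia) approximation of the shallow water equations underlying both models keeps mass conservation and reduces the momentum balance to a trade-off between the free-surface gradient and friction. With Manning friction this is commonly written as (standard notation, not necessarily the thesis's; h is water depth, z bed elevation, H = h + z the free surface, n the Manning coefficient):

```latex
\frac{\partial h}{\partial t}
  + \frac{\partial q_x}{\partial x}
  + \frac{\partial q_y}{\partial y} = 0,
\qquad
q_x = -\,\frac{h^{5/3}}{n}\,
      \frac{\partial H / \partial x}{\lvert \nabla H \rvert^{1/2}},
\qquad
q_y = -\,\frac{h^{5/3}}{n}\,
      \frac{\partial H / \partial y}{\lvert \nabla H \rvert^{1/2}}
```

Dropping the inertial terms is what makes the scheme simple and robust for large-scale flood spreading, at the cost of the local errors in complex flow regions noted above.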
Abstract:
Increasing environmental and health concerns, combined with the possibility of exploiting waste as a valuable energy resource, have led to the exploration of alternative methods for final waste disposal. In this context, the energy conversion of Municipal Solid Waste (MSW) in Waste-to-Energy (WTE) power plants is increasing throughout Europe, both in terms of plant number and capacity, furthered by legislative directives. Due to the heterogeneous nature of waste, some differences with respect to a conventional fossil fuel power plant have to be considered in the energy conversion process. In fact, as a consequence of well-known corrosion problems, the thermodynamic efficiency of WTE power plants typically ranges between 25% and 30%. The new Waste Framework Directive 2008/98/EC promotes the production of energy from waste by introducing an energy efficiency criterion (the so-called “R1 formula”) to evaluate a plant's recovery status. The aim of the Directive is to drive WTE facilities to maximize energy recovery and the utilization of waste heat, in order to substitute for energy produced by conventional fossil-fuel-fired power plants. This calls for novel approaches to maximize the conversion of MSW into energy. In particular, the idea arises of an integrated configuration made up of a WTE plant and a Gas Turbine (GT), driven by the desire to eliminate or at least mitigate the limitations that bound the thermodynamic efficiency of the WTE conversion cycle. The aim of this Ph.D. thesis is to investigate, from a thermodynamic point of view, integrated WTE-GT systems that share the steam cycle, share the flue gas paths, or combine both. The analysis carried out defines the logic governing the match between the plants in terms of steam production and steam turbine power output as a function of the thermal power inputs.
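The R1 criterion mentioned above is defined in Annex II of Directive 2008/98/EC. A straightforward encoding is shown below; the plant figures used in the example are invented for illustration:

```python
def r1_efficiency(ep_heat, ep_elec, ef, ew, ei):
    """R1 energy-efficiency formula of Directive 2008/98/EC (Annex II):

        R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))

    Ep weights exported energy: heat x 1.1, electricity x 2.6.
    Ef: energy input from fuels contributing to steam production.
    Ew: energy contained in the treated waste (net calorific value).
    Ei: imported energy, excluding Ew and Ef.
    0.97 accounts for losses through bottom ash and radiation.
    All inputs in consistent energy units (e.g. GJ/yr).
    """
    ep = 1.1 * ep_heat + 2.6 * ep_elec
    return (ep - (ef + ei)) / (0.97 * (ew + ef))

# Hypothetical plant, checked against the 0.65 threshold for newer plants
r1 = r1_efficiency(ep_heat=100_000, ep_elec=150_000,
                   ef=10_000, ew=700_000, ei=5_000)
print(round(r1, 3))  # 0.704 -> qualifies as a recovery ("R1") operation
```

The 2.6 weight on electricity is what rewards the higher exergy of electric output, and it is one reason integrated WTE-GT configurations that raise electrical efficiency also raise the R1 score.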
Abstract:
Arid regions are dominated, to a much larger degree than humid regions, by major catastrophic events. Although most of Egypt lies within the great hot desert belt, it experiences, especially in the north, some torrential rainfall, which causes flash floods all over the Sinai Peninsula. Flash floods in hot deserts are characterized by high velocity and short duration with a sharp discharge peak. Large sediment loads may be carried by floods, threatening fields and settlements in the wadis and even the people living there. The extreme spottiness of rare heavy rainfall, well known to desert people everywhere, precludes efficient forecasting. Thus, since the available data still reflect pre-satellite methods, the chances of developing a warning system for floods in the desert seem remote. The relatively short flood-to-peak interval, a characteristic of desert floods, presents an additional impediment to the efficient use of warning systems. The present thesis contains an introduction and five chapters. Chapter one describes the physical setting of the study area, starting with the geological setting, including the outcrop lithology and the deposits; the alluvial deposits of Wadi Moreikh were analyzed using OSL dating to determine their depositional history and palaeoclimatic conditions. The chapter also covers the stratigraphy and the structural geology, including the main faults and folds. In addition, it describes the present climatic conditions, such as temperature, humidity, wind and evaporation, and presents the soil types and natural vegetation cover of the study area, derived using unsupervised classification of ETM+ images. Chapter two presents the morphometric analysis of the main basins and their drainage networks in the study area. It is divided into three parts. The first part covers the morphometric analysis of the drainage networks, which were extracted from two main sources: topographic maps and DEM images.
Basins and drainage networks are considered major factors influencing flash floods; most of the elements affecting the network were studied, such as stream order, bifurcation ratio, stream lengths, stream frequency, drainage density, and drainage patterns. The second part of this chapter presents the morphometric analysis of the basins, including area, dimensions, shape and surface, while the third part covers the morphometric analysis of the alluvial fans which form most of the El-Qaá plain. Chapter three addresses surface runoff through an analysis of rainfall and losses. The main subject of this chapter is rainfall, studied in detail as the main driver of runoff. All rainfall characteristics are therefore considered, such as rainfall types, distribution, intensity, duration, frequency, and the relationship between rainfall and runoff. The second part of the chapter concerns the estimation of water losses by evaporation and infiltration, which together are the main losses directly affecting the magnitude of runoff. Finally, chapter three examines the factors influencing desert runoff and the runoff generation mechanism. Chapter four is concerned with the assessment of flood hazard, for which it is important to estimate runoff and to map the affected areas. The chapter consists of four main parts. The first part presents runoff estimation, the different methods to estimate runoff, and its variables, such as runoff coefficient, lag time, time of concentration, runoff volume, and the frequency analysis of flash floods. The second part covers extreme event analysis. The third part presents the map of affected areas for every basin and the flash flood severity classes; here, the DEM was used to extract the drainage networks and to determine the main streams, which are normally more dangerous than others. Finally, the fourth part presents the risk zone map of the whole study area, which is of high interest for planning activities.
Chapter five, the last chapter, concerns flash flood hazard mitigation. It consists of three main parts. The first covers flood prediction and the methods that can be used to predict and forecast floods. The second part aims to determine the methods best suited to mitigating flood hazard in the arid zone, and especially in the study area. The third part outlines development perspectives for the study area, indicating the places in the El-Qaá plain suitable for economic activities.
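As one example of the kind of runoff estimation discussed in chapter four, the rational method gives a first estimate of peak discharge from a small catchment; the coefficient, intensity and area below are placeholders, not values from the thesis:

```python
# Rational-method peak discharge, Q = C * i * A, a common first estimate
# for small catchments; desert basins often have high runoff coefficients
# because of sparse vegetation and crusted or rocky surfaces.

def peak_discharge_m3s(c: float, i_mm_per_hr: float, area_km2: float) -> float:
    """Q = C * i * A with unit conversion to m3/s.

    c: dimensionless runoff coefficient
    i: rainfall intensity over the time of concentration (mm/h)
    area: basin area (km2)
    """
    i_m_per_s = i_mm_per_hr / 1000.0 / 3600.0  # mm/h -> m/s
    area_m2 = area_km2 * 1.0e6                 # km2  -> m2
    return c * i_m_per_s * area_m2

q = peak_discharge_m3s(c=0.6, i_mm_per_hr=20.0, area_km2=50.0)
print(round(q, 1), "m3/s")
```

The method's limits match the chapter's concerns: it assumes uniform rainfall over the basin for at least the time of concentration, which the extreme spottiness of desert storms rarely satisfies, so it serves only as a screening estimate.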
Abstract:
The study aims at providing a framework for conceptualizing patenting activities under conditions of intellectual property rights fragmentation. Such a framework has to deal with the interrelated problems of technological complexity in the modern patent landscape. In that respect, ex-post licensing agreements have been incorporated into the analysis. More precisely, by consolidating the right to use the patents required for commercialization of a product, private market solutions such as cross-licensing agreements and patent pools help firms to overcome the problems triggered by intellectual property rights fragmentation. At the same time, private bargaining between parties cannot be isolated from the legal framework. A result of this analysis is that policies ignoring market solutions and focusing only on static gains can mitigate the dynamic efficiency gains induced by the patent system. The evidence found in this thesis supports the view that legal reforms that aim to decrease the degree of patent protection, or to lift it altogether, can hamper the functioning of the current system.
Abstract:
MFA and LCA methodologies were applied to analyse the anthropogenic aluminium cycle in Italy, with a focus on the historical evolution of stocks and flows of the metal, embodied GHG emissions, and recycling potentials, in order to provide Italy with key inputs for prioritizing industrial policy toward low-carbon technologies and materials. Historical trend series were collected from 1947 to 2009 and balanced with data from the production, manufacturing and waste management of aluminium-containing products, using a ‘top-down’ approach to quantify the contemporary in-use stock of the metal and helping to identify ‘applications where aluminium is not yet being recycled to its full potential and to identify present and future recycling flows’. The MFA results were used as the basis for an LCA aimed at evaluating the evolution of the carbon footprint embodied in Italian aluminium, arising from primary and electrical energy, the smelting process and transportation. A discussion is also reported of how the main factors, according to the Kaya Identity equation, influenced the Italian GHG emission pattern over time, and of the levers available to mitigate it. The contemporary anthropogenic reservoir of aluminium was estimated at about 320 kg per capita, mainly embedded in the transportation and building and construction sectors. The cumulative in-use stock represents approximately 11 years of supply at current usage rates (about 20 Mt versus 1.7 Mt/year), and would imply a potential of about 160 Mt of CO2eq emission savings. A discussion of the criticality of aluminium waste recovery from the transportation and the containers and packaging sectors is also included, providing an example of how MFA and LCA may support decision-making at the sectorial or regional level. The research constitutes the first attempt at an integrated approach combining MFA and LCA applied to the aluminium cycle in Italy.
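The Kaya Identity referred to above decomposes emissions as CO2 = P × (GDP/P) × (E/GDP) × (CO2/E). A minimal sketch of this decomposition follows, with placeholder numbers rather than the Italian data analysed in the thesis:

```python
# Kaya Identity sketch: emissions as the product of population, affluence,
# energy intensity and carbon intensity. All figures are placeholders.

def kaya_co2(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """CO2 emissions as the product of the four Kaya factors.

    energy_intensity: energy used per unit of GDP
    carbon_intensity: CO2 emitted per unit of energy
    """
    return population * gdp_per_capita * energy_intensity * carbon_intensity

base = kaya_co2(60e6, 30_000, 5.0e-3, 0.25)          # hypothetical baseline
# A 20% cut in carbon intensity (e.g. more recycled aluminium, cleaner
# electricity) lowers emissions by 20%, all other factors held equal.
mitigated = kaya_co2(60e6, 30_000, 5.0e-3, 0.25 * 0.8)
print(mitigated / base)  # ~0.8
```

The identity's usefulness is exactly this separability: each factor is a distinct policy lever, which is how the thesis frames the drivers of the Italian GHG emission pattern.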
Abstract:
This Doctoral Thesis unfolds into a collection of three distinct papers that share an interest in institutional theory and technology transfer. Taking into account that organizations are increasingly exposed to a multiplicity of demands and pressures, we aim to analyze what renders this situation of institutional complexity more or less difficult to manage for organizations, and what makes organizations more or less successful in responding to it. The three studies offer a novel contribution both theoretically and empirically. In particular, the first paper, “The dimensions of organizational fields for understanding institutional complexity: A theoretical framework”, is a theoretical contribution that seeks to better understand the relationship between institutional complexity and fields by providing a framework. The second article, “Beyond institutional complexity: The case of different organizational successes in confronting multiple institutional logics”, is an empirical study which aims to explore the strategies that allow organizations facing multiple logics to respond to them more successfully. The third work, “How external support may mitigate the barriers to university-industry collaboration”, is oriented towards practitioners and presents a case study about technology transfer in Italy.
Abstract:
This research was designed to answer the question of which direction the restructuring of financial regulators should take: consolidation or fragmentation. The research began by examining the need for financial regulation and its related costs. It then described the types of regulatory structures that exist in the world, surveying the regulatory structures in 15 jurisdictions, comparing them and discussing their strengths and weaknesses. The possible regulatory structures were analyzed using three methodological tools: game theory, institutional design, and network effects. The incentives for regulatory action are examined in Chapter Four using game-theory concepts. This chapter predicts how two regulators with overlapping supervisory mandates will behave in two different states of the world (one where they stand to benefit from regulating and one where they stand to lose). The insights derived from the games described in this chapter are then used to analyze the different supervisory models that exist in the world. The problem of information flow is discussed in Chapter Five using tools from institutional design. The idea is based on the need for the right kind of information to reach the decision maker in the shortest time possible, in order to predict, mitigate or stop a financial crisis. Network effects and congestion in the context of financial regulation are discussed in Chapter Six, which applies the general literature on network effects in an attempt to conclude whether consolidating financial regulatory standards on a global level might also yield other positive network effects. Returning to the main research question, this research concluded that, in general, the fragmented model is preferable to the consolidated model in most cases, as it allows for greater diversity and information flow.
However, in cases in which close cooperation between two authorities is essential, the consolidated model should be used.
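The kind of two-regulator interaction analyzed in Chapter Four can be sketched as a 2×2 game with a brute-force Nash equilibrium check; the payoff numbers below are invented purely for illustration, not taken from the thesis:

```python
import itertools

# Toy 2x2 game between two regulators with overlapping mandates. Each
# chooses to act or abstain; duplicated regulation is costly, and leaving
# a risk entirely unaddressed is worst for both.

ACTIONS = ("act", "abstain")
# payoffs[(a1, a2)] = (payoff of regulator 1, payoff of regulator 2)
payoffs = {
    ("act", "act"):         (1, 1),   # duplicated, overlapping effort
    ("act", "abstain"):     (3, 2),   # one bears the cost, both gain
    ("abstain", "act"):     (2, 3),
    ("abstain", "abstain"): (0, 0),   # risk left unaddressed
}

def is_nash(profile):
    """True if neither regulator gains by unilaterally deviating."""
    for player in (0, 1):
        for dev in ACTIONS:
            trial = list(profile)
            trial[player] = dev
            if payoffs[tuple(trial)][player] > payoffs[profile][player]:
                return False
    return True

equilibria = [p for p in itertools.product(ACTIONS, repeat=2) if is_nash(p)]
print(equilibria)  # the two asymmetric profiles: one acts, one abstains
```

Under these illustrative payoffs the equilibria are the asymmetric profiles where exactly one regulator acts, which mirrors the coordination problem between overlapping supervisors that the fragmented model must manage.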