80 results for landslide hazard
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and considerable damage to property. The potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries was studied. Our attention was focused on a region in NW Nicaragua, one of the places most severely hit during the Mitch event. A landslide map was obtained at 1:10 000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work. In this map the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a 20 m × 20 m pixel size was also employed in the study area. A comparative analysis was carried out between the terrain failures caused by Hurricane Mitch and a selection of 4 terrain factors, extracted from the DEM, which contributed to terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools and represented in a terrain failure susceptibility map. In order to estimate the areas that could be affected by the path or deposition of the mobilized materials, we considered the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for the ArcGIS software, we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. The study of terrain failure and runout susceptibility enabled us to obtain a spatial prediction for landslides, which could contribute to landslide risk mitigation.
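The runout step described above (tracing flow lines down the direction of maximum slope from failure-prone cells until they merge with the drainage network) can be illustrated with a minimal D8 steepest-descent trace on a gridded DEM. This is a sketch only: the study itself used the TauDEM extension for ArcGIS, and the toy DEM, the source cells and the stopping rule below are illustrative assumptions, not the study's data or algorithm.

import math
import numpy as np

# The eight D8 neighbour offsets (row, column).
D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def trace_flow_line(dem, start, cell=20.0, max_steps=10000):
    """Follow the steepest downslope neighbour from 'start' until a pit,
    a flat area or the grid edge is reached (cell size in metres)."""
    path = [start]
    r, c = start
    for _ in range(max_steps):
        best, best_slope = None, 0.0
        for dr, dc in D8:
            rr, cc = r + dr, c + dc
            if not (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]):
                continue
            dist = cell * math.hypot(dr, dc)          # diagonal-corrected distance
            slope = (dem[r, c] - dem[rr, cc]) / dist  # positive means downslope
            if slope > best_slope:
                best, best_slope = (rr, cc), slope
        if best is None:                              # pit or flat: runout stops
            break
        path.append(best)
        r, c = best
    return path

# Toy south-sloping DEM and two placeholder failure-prone cells.
dem = np.linspace(100.0, 0.0, 50)[:, None] + np.random.default_rng(1).random((50, 50))
source_cells = [(5, 10), (8, 30)]
flow_lines = [trace_flow_line(dem, s) for s in source_cells]

Cells crossed by such flow lines would then inherit the susceptibility class of their source area, as in the runout susceptibility map described above.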
Abstract:
This book is one of the 8 IAEG XII Congress volumes and deals with landslide processes, including: field data and monitoring techniques, prediction and forecasting of landslide occurrence, regional landslide inventories and dating studies, modeling of slope instabilities and secondary hazards (e.g. impulse waves and landslide-induced tsunamis, landslide dam failures and breaching), hazard and risk assessment, earthquake- and rainfall-induced landslides, instabilities of volcanic edifices, remedial works and mitigation measures, development of innovative stabilization techniques and applicability to specific engineering geological conditions, and use of geophysical techniques for landslide characterization and investigation of triggering mechanisms. Focus is given to innovative techniques, well-documented case studies in different environments, critical components of engineering geological and geotechnical investigations, hydrological and hydrogeological investigations, remote sensing and geophysical techniques, modeling of triggering, collapse, runout and landslide reactivation, geotechnical design and construction procedures in landslide zones, interaction of landslides with structures and infrastructures, and the possibility of domino effects. The Engineering Geology for Society and Territory volumes of the IAEG XII Congress, held in Torino from September 15 to 19, 2014, analyze the dynamic role of engineering geology in our changing world and build on the four main themes of the congress: environment, processes, issues, and approaches.
Abstract:
In the context of business and science research collaboration, I analyze how the characteristics of partnership agreements result from an optimal contract between partners. The final outcome depends on the structure governing the partnership and on the informational problems surrounding the efforts involved. The positive effect that the effort of each party has on the success of the other party makes collaboration a preferred solution. Divergence in research goals may, however, create conflicts between partners. This paper shows how two different structures of partnership governance (a centralized and a decentralized one) may optimally use the type of project to motivate the supply of non-contractible efforts. The decentralized structure, however, always chooses a project closer to its own preferences. Incentives may also come from monetary transfers, either from partners sharing each other's benefits or from public funds. I derive conditions under which public intervention…
Abstract:
This paper examines competition between generic and brand-name drugs in the regulated Spanish pharmaceutical market. A nested logit demand model is specified for the three most consumed therapeutic subgroups in Spain: statins (anticholesterol), selective serotonin reuptake inhibitors (antidepressants) and proton pump inhibitors (antiulcers). The model is estimated with instrumental variables from a panel of monthly prescription data from 1999 to 2005. The dataset distinguishes between three different levels of patients’ copayments within the prescriptions and the results show that the greater the level of insurance that the patient has (and therefore the lower the patient’s copayment), the lower the proportion of generic prescriptions made by physicians. It seems that the low level of copayment has delayed the penetration of generics into the Spanish market. Additionally, the estimation of the demand model suggests that the substitution rules and promotional efforts associated with the reference pricing system have increased generic market share, and that being among the first generic entrants has an additional positive effect.
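For reference, a nested logit demand model of this kind is commonly taken to data through the Berry (1994) share inversion, which yields a linear estimating equation; the notation below is the standard one from that literature and is an assumption, since the abstract does not spell out the exact specification:

\ln s_{jt} - \ln s_{0t} = x_{jt}'\beta - \alpha p_{jt} + \sigma \ln s_{j|g,t} + \xi_{jt}

where s_{jt} is the market share of drug j in month t, s_{0t} the outside-good share, s_{j|g,t} the share of j within its nest, p_{jt} its price and \xi_{jt} unobserved product quality. The correlation of p_{jt} and \ln s_{j|g,t} with \xi_{jt} is what motivates the instrumental-variables estimation mentioned in the abstract.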
Abstract:
We examine the conditions under which competitive equilibria can be obtained as the limit, when the number of strategic traders gets large, of Nash equilibria in economies with asymmetric information on agents' effort and possibly imperfect observability of agents' trades. Convergence always occurs when either effort is publicly observed (no matter what information is available to intermediaries on agents' trades); or effort is private information but agents' trades are perfectly observed; or no information at all is available on agents' trades. On the other hand, when each intermediary can observe its trades with an agent, but not the agent's trades with other intermediaries, the (Nash) equilibria with strategic intermediaries do not converge to any of the competitive equilibria, for an open set of economies. The source of the difficulties for convergence is the combination of asymmetric information and the restrictions on the observability of trades, which prevent the formation of exclusive contractual relationships and generate barriers to entry in the markets for contracts.
Abstract:
This paper studies equilibria for economies characterized by moral hazard (hidden action), in which the set of contracts marketed in equilibrium is determined by the interaction of financial intermediaries. The crucial aspect of the environment that we study is that intermediaries are restricted to trade non-exclusive contracts: the agents' contractual relationships with competing intermediaries cannot be monitored (or are not contractible upon). We fully characterize equilibrium allocations and contracts. In this set-up equilibrium allocations are clearly incentive-constrained inefficient. A robust property of equilibria with non-exclusivity is that the contracts issued in equilibrium do not implement the optimal action. Moreover, we prove that, whenever equilibrium contracts do implement the optimal action, intermediaries make positive profits and equilibrium allocations are third-best inefficient (where the definition of third-best efficiency accounts for constraints which capture the non-exclusivity of contracts).
Abstract:
In this paper, I analyze the ownership dynamics of N strategic risk-averse corporate insiders facing a moral hazard problem. A solution for the equilibrium share price and the dynamics of the aggregate insider stake is obtained in two cases: when agents can credibly commit to an optimal ownership policy and when they cannot commit (time-consistent case). In the latter case, the aggregate stake gradually adjusts towards the competitive allocation. The speed of adjustment increases with N when outside investors are risk-averse, and does not depend on it when investors are risk-neutral. Predictions of the model are consistent with recent empirical findings.
Abstract:
The landslide of Rosiana is considered the largest slope movement amongst those known in historical times in Gran Canaria, Canary Islands. It has been activated at least 4 times in the last century; in the movement of 1956, when about 3×10^6 m3 of materials were involved, 250 people had to be evacuated and many buildings were destroyed. This geological hazard has led to specific studies of the phenomenon which, once characterised, can be used as a guide for the scientific and technical work to be carried out in this or similar areas. This paper aims to increase the knowledge of the unstable mass of Rosiana by using geophysical techniques based on the seismic refraction method. The geophysical measurements have been interpreted with the aid of the available geomorphologic data, thus obtaining a first approximation to the geometry of the slope movements.
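In the simplest two-layer interpretation on which such refraction surveys rest, the depth z to the refractor follows from the travel-time plot: with upper-layer velocity v_1, refractor velocity v_2 > v_1 and crossover distance x_{cr} (the offset at which the refracted arrival overtakes the direct wave), the standard textbook relation, offered here as a reading aid rather than a result quoted from the paper, is

z = \frac{x_{cr}}{2} \sqrt{\frac{v_2 - v_1}{v_2 + v_1}}

so that reading v_1, v_2 and x_{cr} off each seismic spread gives a first estimate of the thickness of the mobilized mass above the stable substratum.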
Abstract:
This work deals with the elaboration of flood hazard maps. These maps reflect the areas prone to floods based on the effects of Hurricane Mitch in the Municipality of Jucuarán, El Salvador. Stream channels located in the coastal range in the SE of El Salvador flow into the Pacific Ocean and generate alluvial fans. Communities often inhabit these fans and can be affected by floods. The geomorphology of these stream basins is characterized by small areas, steep slopes, well developed regolith and extensive deforestation. These features play a key role in the generation of flash floods. This zone lacks comprehensive rainfall data and gauging stations. The most detailed topographic maps are on a scale of 1:25 000. Given that this scale was not sufficiently detailed, we used aerial photographs enlarged to a scale of 1:8000. The effects of Hurricane Mitch mapped on these photographs were regarded as the reference event. Flood maps have a dual purpose: (1) community emergency plans, and (2) regional land use planning carried out by local authorities. The geomorphological method is based on mapping the geomorphological evidence (alluvial fans, preferential stream channels, erosion and sedimentation, man-made terraces). Following the interpretation of the photographs, this information was validated in the field and complemented by eyewitness reports such as the height of water and flow typology. In addition, community workshops were organized to obtain information about the evolution and the impact of the phenomena. The superimposition of this information enabled us to obtain a comprehensive geomorphological map. Another aim of the study was the calculation of the peak discharge using the Manning and the paleohydraulic methods and estimates based on geomorphological criteria. The results were compared with those obtained using the rational method. Significant differences in the order of magnitude of the calculated discharges were noted. The rational method underestimated the results owing to short and discontinuous periods of rainfall data, with the result that probabilistic equations cannot be applied. The Manning method yields a wide range of results because of its dependence on the roughness coefficient. The paleohydraulic method yielded higher values than the rational and Manning methods. However, it should be pointed out that bigger boulders could have been moved had they existed. These discharge values are lower than those obtained by the geomorphological estimates, i.e. much closer to reality. The flood hazard maps were derived from the comprehensive geomorphological map. Three categories of hazard were established (very high, high and moderate) using flood energy, water height and flow velocity deduced from geomorphological evidence and eyewitness reports.
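The discharge comparison described above rests on two standard formulas: Manning's equation, Q = (1/n) A R^(2/3) S^(1/2) in SI units, and the rational method, Q = C i A / 3.6 with i in mm/h and A in km2. A minimal sketch, in which every input value is purely illustrative rather than taken from the study:

def manning_q(area_m2, hyd_radius_m, slope, n):
    """Manning's equation (SI): Q = (1/n) * A * R**(2/3) * S**0.5, in m3/s."""
    return (1.0 / n) * area_m2 * hyd_radius_m ** (2.0 / 3.0) * slope ** 0.5

def rational_q(c, intensity_mm_h, area_km2):
    """Rational method (SI): Q = C * i * A / 3.6, in m3/s."""
    return c * intensity_mm_h * area_km2 / 3.6

# The roughness coefficient n drives the wide spread of Manning estimates
# noted in the abstract; all numbers here are placeholders.
for n in (0.03, 0.05, 0.08):
    print(n, manning_q(area_m2=40.0, hyd_radius_m=1.2, slope=0.04, n=n))
print(rational_q(c=0.6, intensity_mm_h=80.0, area_km2=12.0))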
Abstract:
Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where recent years have been extremely disastrous; it is thus possible to compare the hazard assessment based on data prior to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed to be Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows us to obtain posterior distributions of other derived parameters such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
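Under the Poisson-GPD model just described, the annual rate of exceeding a wave height x above the threshold u is \lambda [1 + \xi (x - u)/\sigma]^{-1/\xi}, and the return period is its reciprocal. A minimal point-estimate sketch follows; the paper's Bayesian treatment instead propagates the joint posterior of (\lambda, \xi, \sigma) through this formula, and all numbers below are illustrative:

def return_period(x, u, rate, shape, scale):
    """Return period (years) of level x under a Poisson(rate per year)
    occurrence process with GPD(shape, scale) excesses over threshold u.
    Assumes shape != 0; the shape -> 0 limit uses exp(-(x - u)/scale)."""
    z = 1.0 + shape * (x - u) / scale
    if z <= 0.0:
        return float("inf")      # x lies beyond the upper bound (shape < 0)
    surv = z ** (-1.0 / shape)   # P(excess > x - u)
    return 1.0 / (rate * surv)

# Placeholder parameters, not the paper's posterior estimates.
print(return_period(x=7.0, u=3.0, rate=4.5, shape=-0.1, scale=0.8))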
Abstract:
Daily precipitation is recorded as the total amount of water collected by a rain-gauge in 24 h. Events are modelled as a Poisson process and the 24 h precipitation by a Generalized Pareto Distribution (GPD) of excesses. Hazard assessment is complete when estimates of the Poisson rate and the distribution parameters, together with a measure of their uncertainty, are obtained. The shape parameter of the GPD determines the support of the variable: the Weibull domain of attraction (DA) corresponds to finite-support variables, as should be the case for natural phenomena. However, the Fréchet DA has been reported for daily precipitation, which implies an infinite support and a heavy-tailed distribution. We use the fact that a log-scale is better suited to the type of variable analyzed to overcome this inconsistency, thus showing that using the appropriate natural scale can be extremely important for proper hazard assessment. The approach is illustrated with precipitation data from the Eastern coast of the Iberian Peninsula affected by severe convective precipitation. The estimation is carried out using Bayesian techniques.
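The scale argument can be sketched as follows: fit a GPD to excesses of the raw series and to excesses of its logarithm, and compare the sign of the estimated shape parameter (negative shape means the Weibull domain of attraction and hence a finite upper bound). The synthetic data, threshold choice and maximum-likelihood fit below are illustrative assumptions; the paper's estimation is Bayesian:

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
rain = rng.lognormal(mean=1.0, sigma=0.9, size=5000)  # stand-in for daily totals (mm)

def gpd_shape(series, q=0.95):
    u = np.quantile(series, q)                # empirical threshold for excesses
    exc = series[series > u] - u
    shape, _, _ = genpareto.fit(exc, floc=0)  # ML fit; the paper uses Bayes
    return shape

print("raw-scale shape:", gpd_shape(rain))           # often estimated > 0 (heavy tail)
print("log-scale shape:", gpd_shape(np.log(rain)))   # typically <= 0 (bounded)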
Abstract:
Daily precipitation is recorded as the total amount of water collected by a rain-gauge in 24 h. Events are modelled as a Poisson process and the 24 h precipitation by a Generalised Pareto Distribution (GPD) of excesses. Hazard assessment is complete when estimates of the Poisson rate and the distribution parameters, together with a measure of their uncertainty, are obtained. The shape parameter of the GPD determines the support of the variable: the Weibull domain of attraction (DA) corresponds to finite-support variables, as should be the case for natural phenomena. However, the Fréchet DA has been reported for daily precipitation, which implies an infinite support and a heavy-tailed distribution. Bayesian techniques are used to estimate the parameters. The approach is illustrated with precipitation data from the Eastern coast of the Iberian Peninsula affected by severe convective precipitation. The estimated GPD is mainly in the Fréchet DA, something incompatible with the common-sense assumption that precipitation is a bounded phenomenon. The bounded character of precipitation is then taken as an a priori hypothesis. Consistency of this hypothesis with the data is checked in two cases: using the raw data (in mm) and using log-transformed data. As expected, a Bayesian model check clearly rejects the model in the raw-data case. However, log-transformed data seem to be consistent with the model. This fact may be due to the adequacy of the log-scale to represent positive measurements for which differences are better described in relative rather than absolute terms.
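A common way to formalize the model check mentioned here is a posterior predictive p-value: for a discrepancy statistic T (for instance, the sample maximum),

p_B = \Pr\big(T(y^{rep}) \ge T(y^{obs}) \mid y^{obs}\big) = \int \Pr\big(T(y^{rep}) \ge T(y^{obs}) \mid \theta\big)\, \pi(\theta \mid y^{obs})\, d\theta,

with values very close to 0 or 1 signalling inconsistency between model and data. Whether this is the exact statistic used in the paper is an assumption; the abstract states only that a Bayesian model check was performed on the raw and log-transformed data.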
Abstract:
This paper analyzes repeated procurement of services as a four-stage game divided into two periods. In each period there is (1) a contest stage à la Tullock in which the principal selects an agent and (2) a service stage in which the selected agent provides a service. Since this service effort is non-verifiable, the principal faces a moral hazard problem at the service stages. This work considers how the principal should design the period-two contest to mitigate the moral hazard problem in the period-one service stage and to maximize total service and contest efforts. It is shown that the principal must take account of the agent's past service effort in the period-two contest success function. The results indicate that the optimal way to introduce this 'bias' is to choose a certain degree of complementarity between past service and current contest efforts. This result shows that contests with 'additive bias' ('multiplicative bias') are optimal in incentive problems when effort cost is low (high). Furthermore, it is shown that the severity of the moral hazard problem increases with the cost of service effort (compared to the cost of contest effort) and the number of agents. Finally, the results are extended to more general contest success functions.
JEL classification: C72; D82. Keywords: biased contests; moral hazard; repeated game; incentives.
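For concreteness, the baseline Tullock contest success function gives agent i the winning probability

p_i(x) = \frac{x_i^r}{\sum_j x_j^r}

in current contest efforts x_j. The 'bias' discussed in the abstract credits past service effort y_i; schematically, an additive bias replaces x_i with x_i + \beta y_i, while a multiplicative bias replaces it with y_i^{\beta} x_i. These functional forms are the standard ones from the contests literature and are offered as a reading aid, not as the paper's exact specification.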