924 results for Seismic Hazard
Abstract:
A crosswell data set contains a range of angles limited only by the geometry of the source and receiver configuration, the separation of the boreholes and the depth to the target. However, the wide-angle reflections present in crosswell imaging result in amplitude-versus-angle (AVA) features not usually observed in surface data. These features include reflections from angles that are near critical and beyond critical for many of the interfaces; some of these reflections are visible only over a small range of angles, presumably near their critical angle. High-resolution crosswell seismic surveys were conducted over a Silurian (Niagaran) reef at two fields in northern Michigan, Springdale and Coldspring. The Springdale wells extended to much greater depths than the reef, and imaging was conducted from above and from beneath the reef. Combining the images obtained from above with those from beneath provides additional information: first, the two sets of images sample different ranges of angles, especially for reflectors at shallow depths, and second, together they provide additional constraints on solutions of the Zoeppritz equations. Inversion of seismic data for impedance has become a standard part of the workflow for quantitative reservoir characterization. Inversion of crosswell data using either deterministic or geostatistical methods can, however, give poor results because of the phase change beyond the critical angle; the simultaneous pre-stack inversion of partial angle stacks is therefore best restricted to angles less than critical. Deterministic inversion is designed to yield only a single, best-fit model of elastic properties, while geostatistical inversion produces multiple models (realizations) of elastic properties, lithology and reservoir properties. Geostatistical inversion produces results with far more detail than deterministic inversion, and the difference in detail between the two becomes increasingly pronounced for thinner reservoirs, particularly those below the vertical resolution of the seismic data. For any interface imaged from both above and beneath, the observed AVA character must arise from the same contrast in elastic properties in the two sets of images, albeit in reverse order. An inversion approach that handles both data sets simultaneously, at pre-critical angles, is demonstrated in this work. The main exploration problem for carbonate reefs is determining the porosity distribution. Images of elastic properties, obtained from deterministic and geostatistical simultaneous inversion of a high-resolution crosswell seismic survey, were used to obtain the internal structure and reservoir properties (porosity) of a Niagaran reef in Michigan. The images obtained are the best of any Niagaran pinnacle reef to date.
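To illustrate the near- and post-critical behaviour described above, the following is a minimal sketch in the acoustic approximation, with invented layer properties (not values from the Springdale or Coldspring surveys), of how the P-P reflection coefficient grows toward the critical angle and acquires a phase rotation beyond it. The full elastic Zoeppritz equations, which the study constrains, add converted shear waves to this picture.

```python
import numpy as np

def acoustic_pp_reflection(theta1_deg, vp1, rho1, vp2, rho2):
    """Plane-wave P-P reflection coefficient at a fluid-fluid interface.

    Snell's law gives the transmitted angle; beyond the critical angle its
    cosine becomes imaginary, |R| goes to 1 and the phase rotates, which is
    the post-critical behaviour seen in wide-angle crosswell reflections.
    (The full elastic Zoeppritz solution also includes converted S-waves.)
    """
    theta1 = np.radians(theta1_deg)
    sin_theta2 = (vp2 / vp1) * np.sin(theta1)
    cos_theta2 = np.sqrt(1.0 - sin_theta2**2 + 0j)   # complex past critical
    num = rho2 * vp2 * np.cos(theta1) - rho1 * vp1 * cos_theta2
    den = rho2 * vp2 * np.cos(theta1) + rho1 * vp1 * cos_theta2
    return num / den

# Hypothetical layer contrast (illustrative only, not the surveyed reef)
vp1, rho1 = 4500.0, 2650.0   # upper layer: velocity (m/s), density (kg/m^3)
vp2, rho2 = 6000.0, 2900.0   # lower layer
print(f"critical angle ~ {np.degrees(np.arcsin(vp1 / vp2)):.1f} deg")

for ang in (0, 20, 40, 48, 60):
    r = acoustic_pp_reflection(ang, vp1, rho1, vp2, rho2)
    print(f"{ang:2d} deg: |R| = {abs(r):.3f}, phase = {np.degrees(np.angle(r)):6.1f} deg")
```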
Abstract:
Geologic hazards affect the lives of millions of people worldwide every year. El Salvador is a country that is regularly affected by natural disasters, including earthquakes, volcanic eruptions and tropical storms. Additionally, rainfall-induced landslides and debris flows are a major threat to the livelihood of thousands. The San Vicente Volcano in central El Salvador has a recurring and destructive pattern of landslides and debris flows on its northern slopes. In recent memory there have been at least seven major destructive debris flows on San Vicente volcano. Despite this problem, there has been no known attempt to study the inherent stability of these volcanic slopes and to determine the thresholds of rainfall that might lead to slope instability. This thesis explores this issue and outlines a suggested method for predicting the likelihood of slope instability during intense rainfall events. The material properties obtained from a field campaign and laboratory testing were used for a 2-D slope stability analysis of a recent landslide on San Vicente volcano. This analysis confirmed that the surface materials of the volcano are highly permeable and have very low shear strength, and it provided insight into the behavior of the groundwater table during a rainstorm. The biggest factors affecting slope stability were found to be slope geometry, rainfall totals and initial groundwater table location. Using the results from this analysis, a stability chart was created that took into account these main factors and provided an estimate of the stability of a slope in various rainfall scenarios. This chart could be used by local authorities when an extreme rainfall event is forecast to help make decisions regarding possible evacuation. Recommendations are given to improve the methodology for future application in other areas as well as in central El Salvador.
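As a hedged illustration of the kind of limit-equilibrium reasoning applied here (the thesis itself uses a 2-D stability model with measured material properties), the sketch below evaluates the textbook infinite-slope factor of safety with slope-parallel seepage. All parameter values are invented for illustration, not those obtained from the field campaign.

```python
import numpy as np

def infinite_slope_fs(c_kpa, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """Factor of safety of an infinite slope with slope-parallel seepage.

    c_kpa   : effective cohesion (kPa)
    phi_deg : effective friction angle (deg)
    gamma   : unit weight of soil (kN/m^3)
    z       : depth of the potential failure surface (m)
    beta_deg: slope angle (deg)
    m       : saturated fraction of z (0 = dry, 1 = water table at surface)
    """
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    shear_stress = gamma * z * np.sin(beta) * np.cos(beta)
    effective_normal = (gamma * z - m * gamma_w * z) * np.cos(beta) ** 2
    return (c_kpa + effective_normal * np.tan(phi)) / shear_stress

# Hypothetical weak, permeable volcanic ash on a steep flank (illustrative only)
for m in (0.0, 0.5, 1.0):
    fs = infinite_slope_fs(c_kpa=5.0, phi_deg=30.0, gamma=14.0,
                           z=2.0, beta_deg=35.0, m=m)
    print(f"saturation fraction {m:.1f}: factor of safety = {fs:.2f}")
```

With these placeholder values the factor of safety drops below 1.0 as the slope saturates, which is the qualitative mechanism the stability chart is meant to capture.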
Abstract:
How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal are necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method entails modeling the shape of earthquake codas as a power law function and extrapolating duration from the decay of the function. The second method draws relations between clipped duration (i.e., the length of time a signal is clipped) and the full duration. These methods allow for magnitudes to be determined within 0.2 to 0.4 units of magnitude. This error is within the range of analyst hand-picks and is within the acceptable limits of uncertainty when quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. These methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens. Mount St. Helens is a well-studied volcano with many instruments placed at varying distances from the vent. This fact makes the 2004-2008 eruption a good place to calibrate and refine methodologies that can be applied to volcanoes with limited networks.
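A minimal sketch of the first method described, under assumed values: fit a power-law decay to the usable part of a synthetic coda envelope and extrapolate the fit down to the noise floor to recover the full duration that a clipped or over-run record hides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic coda envelope decaying as A(t) = A0 * t**-p (assumed values)
t = np.arange(1.0, 40.0, 0.5)              # seconds after the event onset
A0, p, noise_floor = 500.0, 1.4, 2.0
amp = A0 * t**-p + rng.normal(0.0, 0.3, t.size)

# Suppose only the first 15 s of coda are usable before the next event arrives
usable = t <= 15.0
slope, intercept = np.polyfit(np.log(t[usable]), np.log(amp[usable]), 1)
p_fit = -slope                              # fitted decay exponent

# Extrapolate the fitted decay down to the noise floor to estimate duration
duration = np.exp((intercept - np.log(noise_floor)) / p_fit)
print(f"fitted decay exponent p = {p_fit:.2f}")
print(f"extrapolated duration  = {duration:.1f} s "
      f"(true crossing at {(A0 / noise_floor) ** (1.0 / p):.1f} s)")
```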
Abstract:
This dissertation examines the quality of hazard mitigation elements in a coastal, hazard-prone state. I answer two questions. First, in a state with a strong mandate for hazard mitigation elements in comprehensive plans, does plan quality differ among county governments? Second, if such variation exists, what drives this variation? My research focuses primarily on Florida’s 35 coastal counties, which are all at risk for hurricane and flood hazards, and all fall under Florida’s mandate to have a comprehensive plan that includes a hazard mitigation element. Research methods included document review to rate the hazard mitigation elements of all 35 coastal county plans and subsequent analysis against demographic and hazard history factors. Following this, I conducted an electronic, nationwide survey of planning professionals and academics, informed by interviews of planning leaders in Florida counties. I found that hazard mitigation element quality varied widely among the 35 Florida coastal counties, but the scores were close to a normal distribution. No plans were of exceptionally high quality. Overall, historical hazard effects did not correlate with hazard mitigation element quality, but some demographic variables that are associated with urban populations did. The variance in hazard mitigation element quality indicates that while state law may mandate, and even prescribe, hazard mitigation in local comprehensive plans, not all plans will result in equal, or even adequate, protection for people. Furthermore, the mixed correlations with demographic variables representing social and disaster vulnerability show that, at least at the county level, vulnerability to hazards does not have a strong effect on hazard mitigation element quality. From a theory perspective, my research is significant because it compares assumptions about vulnerability based on hazard history and demographics to plan quality. The only vulnerability-related variables that appeared to correlate, and even then only mildly, with hazard mitigation element quality were those typically representing more urban areas. In terms of the theory of Neo-Institutionalism and theories related to learning organizations, my research shows that planning departments appear to have set norms and rules of operating that preclude both significant public involvement and learning from prior hazard events.
Abstract:
The damage Hurricane Sandy caused had far-reaching repercussions up and down the East Coast of the United States. Vast coastal flooding accompanied the storm, inundating homes, businesses, and utility and emergency facilities. Since the storm, projects to mitigate similar future floods have been scrutinized. Such projects not only need to keep out floodwaters but also be designed to withstand the effect that climate change might have on rising sea levels and increased flood risk. In this study, we develop an economic model to assess the costs and benefits of a berm (sea wall) to mitigate the effects of flooding from a large storm. We account for the lifecycle costs of the project, which include those for the upfront construction of the berm, ongoing maintenance, land acquisition, and wetland and recreation zone construction. Benefits of the project include avoided fatalities, avoided residential and commercial damages, avoided utility and municipal damages, recreational and health benefits, avoided debris removal expenses, and avoided loss of function of key transportation and commercial infrastructure located in the area. Our estimate of the beneficial effects of the berm includes ecosystem services from wetlands and health benefits to the surrounding community from a park and nature system constructed along the berm. To account for the effects of climate change and verify that the project will maintain its effectiveness over the long term, we allow the risk of flooding to increase over time. Over our 50-year time horizon, we double the risk of 100- and 500-year flood events to account for the effects of sea level rise on coastal flooding. Based on the economic analysis, the project is highly cost beneficial over its 50-year timeframe. This analysis demonstrates that climate change adaptation investments can be cost beneficial even though they mitigate the impacts of low-probability, high-consequence events.
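The arithmetic behind such a lifecycle cost-benefit result can be sketched as follows: discount expected annual avoided damages over the 50-year horizon while the annual exceedance probabilities of the 100- and 500-year events double, and compare the total with up-front and recurring costs. Every dollar figure and the discount rate below are invented placeholders, not the study's estimates.

```python
import numpy as np

years = np.arange(1, 51)                 # 50-year horizon
discount = 0.03                          # assumed real discount rate

# Annual exceedance probabilities double linearly over the horizon,
# representing sea-level-rise-driven growth in coastal flood risk.
p100 = 0.01 * (1 + years / 50)           # 100-year event
p500 = 0.002 * (1 + years / 50)          # 500-year event

# Hypothetical avoided damages per event (fatalities monetized, property,
# utilities, debris removal, lost function of infrastructure), in $M
avoided_100, avoided_500 = 800.0, 2500.0
annual_cobenefits = 12.0                 # $M/yr: wetland services, recreation

expected_benefit = p100 * avoided_100 + p500 * avoided_500 + annual_cobenefits
pv_benefits = np.sum(expected_benefit / (1 + discount) ** years)

capital_cost = 400.0                     # $M up-front berm + land + wetlands
maintenance = 3.0                        # $M/yr ongoing upkeep
pv_costs = capital_cost + np.sum(maintenance / (1 + discount) ** years)

print(f"PV benefits = ${pv_benefits:,.0f}M, PV costs = ${pv_costs:,.0f}M, "
      f"benefit-cost ratio = {pv_benefits / pv_costs:.2f}")
```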
Abstract:
The time for conducting Preventive Maintenance (PM) on an asset is often determined using a predefined alarm limit based on trends of a hazard function. In this paper, the authors propose using both hazard and reliability functions to improve the accuracy of the prediction, particularly when the failure characteristics over the asset's whole life are modelled using different failure distributions for the different stages of its life. The proposed method is validated using simulations and case studies.
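A minimal sketch of the idea, assuming a two-stage Weibull life model and arbitrary alarm limits (not the paper's simulations or case-study data): compute the hazard and the reliability together and trigger PM when either crosses its limit, rather than relying on the hazard trend alone.

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Two-stage life model (illustrative values): near-constant hazard early in
# life, then wear-out with an increasing hazard after t_c operating hours.
t_c = 2000.0
beta1, eta1 = 1.0, 10_000.0       # stage 1: random failures
beta2, eta2 = 3.0, 4_000.0        # stage 2: wear-out

t = np.linspace(1.0, 8000.0, 8000)
hazard = weibull_hazard(t, beta1, eta1)
wear = t > t_c
hazard[wear] = weibull_hazard(t[wear] - t_c, beta2, eta2)

# Reliability follows from the cumulative hazard: R(t) = exp(-H(t))
reliability = np.exp(-np.cumsum(hazard) * (t[1] - t[0]))

# PM alarm: hazard limit alone versus hazard limit combined with reliability
hazard_limit, reliability_limit = 5e-4, 0.90
pm_hazard_only = t[np.argmax(hazard >= hazard_limit)]
pm_combined = t[np.argmax((hazard >= hazard_limit) | (reliability <= reliability_limit))]
print(f"PM time using the hazard limit only:         {pm_hazard_only:.0f} h")
print(f"PM time using hazard and reliability limits: {pm_combined:.0f} h")
```

With these placeholder limits the combined rule calls for PM much earlier, because the reliability threshold is reached long before the hazard trend alone would raise an alarm.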
Abstract:
A complete change of career forces a seismic shift in every aspect of your life. From day one, you have to face the loss of long-held beliefs, behaviours, the known world of self, and security. We came from professions that are themselves poles apart, yet many of the challenges we faced entering the profession were the same: juggling full-time work, part-time study and family commitments, taking a pay cut, and losing a social life. But over a short period of time we both transitioned to our new profession successfully. So what made our successful transition possible?
Abstract:
Principal Topic High technology consumer products such as notebooks, digital cameras and DVD players are not introduced into a vacuum. Consumer experience with related earlier generation technologies, such as PCs, film cameras and VCRs, and the installed base of these products strongly impacts the market diffusion of the new generation products. Yet technology substitution has received only sparse attention in the diffusion of innovation literature. Research on consumer durables has been dominated by studies of (first purchase) adoption (cf. Bass 1969) which do not explicitly consider the presence of an existing product/technology. More recently, considerable attention has also been given to replacement purchases (cf. Kamakura and Balasubramanian 1987). Only a handful of papers explicitly deal with the diffusion of technology/product substitutes (e.g. Norton and Bass, 1987; Bass and Bass, 2004). They propose diffusion-type aggregate-level sales models that are used to forecast the overall sales for successive generations. Lacking household data, these aggregate models are unable to give insights into the decisions by individual households: whether to adopt generation II, and if so, when and why. This paper makes two contributions. First, it is the first large-scale empirical study that collects household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in comparison with traditional analysis that evaluates technology substitution as an "adoption of innovation" type process, we propose that from a consumer's perspective, technology substitution combines elements of both adoption (adopting the new generation technology) and replacement (replacing the generation I product with generation II). Based on this proposition, we develop and test a number of hypotheses. Methodology/Key Propositions In some cases, successive generations are clear "substitutes" for the earlier generation, in that they have almost identical functionality. For example, successive generations of PCs (Pentium I to II to III), or a flat-screen TV substituting for a colour TV. More commonly, however, the new technology (generation II) is a "partial substitute" for existing technology (generation I). For example, digital cameras substitute for film-based cameras in the sense that they perform the same core function of taking photographs. They have some additional attributes of easier copying and sharing of images. However, the attribute of image quality is inferior. In cases of partial substitution, some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used as well as their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, but additional generation II purchases are a purely adoption-driven process. Extensive research on innovation adoption has consistently shown consumer innovativeness is the most important consumer characteristic that drives adoption timing (Goldsmith et al. 1995; Gielens and Steenkamp 2007). Hence, we expect consumer innovativeness also to influence both additional and substitute generation II purchases. Hypothesis 1a) More innovative households will make additional generation II purchases earlier. 1b) More innovative households will make substitute generation II purchases earlier.
1c) Consumer innovativeness will have a stronger impact on additional generation II purchases than on substitute generation II purchases. As outlined above, substitute generation II purchases act, in part, like a replacement purchase for the generation I product. Prior research (Bayus 1991; Grewal et al. 2004) identified product age as the most dominant factor influencing replacements. Hence, we hypothesise that: Hypothesis 2: Households with older generation I products will make substitute generation II purchases earlier. Our survey of 8,077 households investigates their adoption of two new generation products: notebooks as a technology change to PCs, and DVD players as a technology shift from VCRs. We employ Cox hazard modelling to study factors influencing the timing of a household's adoption of generation II products. We determine whether this is an additional or substitute purchase by asking whether the generation I product is still used. Separate hazard models are estimated for additional and substitute purchases. Consumer innovativeness is measured as domain innovativeness adapted from the scales of Goldsmith and Hofacker (1991) and Flynn et al. (1996). The age of the generation I product is calculated based on the most recent household purchase of that product. Control variables include age, size and income of household, and age and education of primary decision-maker. Results and Implications Our preliminary results confirm both our hypotheses. Consumer innovativeness has a strong influence on both additional purchases (exp = 1.11) and substitute purchases (exp = 1.09). Exp is interpreted as the increased probability of purchase for an increase of 1.0 on a 7-point innovativeness scale. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence for substitute purchases of VCR/DVD (exp = 2.92) and a strong influence for PCs/notebooks (exp = 1.30). Exp is interpreted as the increased probability of purchase for an increase of 10 years in the age of the generation I product. Yet, also as hypothesised, there was no influence on additional purchases. The results lead to two key implications. First, there is a clear distinction between additional and substitute purchases of generation II products, each with different drivers. Treating these as a single process will mask the true drivers of adoption. For substitute purchases, product age is a key driver. Hence, marketers of high-technology products can utilise data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
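A hedged sketch of this kind of Cox analysis on simulated data (not the 8,077-household survey), using the lifelines library: households that adopt within the observation window are events, the rest are right-censored, and exp(coef) is read as the hazard ratio per unit of each covariate, analogous to the exp values reported above. The covariate effects below are invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000

# Simulated households (placeholder data, not the survey sample)
innovativeness = rng.uniform(1, 7, n)          # 7-point domain innovativeness
gen1_age_years = rng.uniform(0, 10, n)         # age of the generation I product

# Simulate substitute-purchase timing: higher innovativeness and an older
# generation I product shorten the time to adopting generation II.
baseline = 8.0
risk = 0.10 * innovativeness + 0.11 * gen1_age_years
time_to_purchase = rng.exponential(baseline * np.exp(-risk))
observed = time_to_purchase < 5.0              # censor at a 5-year window
time_to_purchase = np.minimum(time_to_purchase, 5.0)

df = pd.DataFrame({
    "duration": time_to_purchase,
    "event": observed.astype(int),
    "innovativeness": innovativeness,
    "gen1_age": gen1_age_years,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # exp(coef) ~ hazard ratio per unit of each covariate
```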
Abstract:
Modern Engineering Asset Management (EAM) requires the accurate assessment of current and the prediction of future asset health condition. Suitable mathematical models that are capable of predicting Time-to-Failure (TTF) and the probability of failure in future time are essential. In traditional reliability models, the lifetime of assets is estimated using failure time data. However, in most real-life situations and industry applications, the lifetime of assets is influenced by different risk factors, which are called covariates. The fundamental notion in reliability theory is the failure time of a system and its covariates. These covariates change stochastically and may influence and/or indicate the failure time. Research shows that many statistical models have been developed to estimate the hazard of assets or individuals with covariates. An extensive amount of literature on hazard models with covariates (also termed covariate models), including theory and practical applications, has emerged. This paper is a state-of-the-art review of the existing literature on these covariate models in both the reliability and biomedical fields. One of the major purposes of this expository paper is to synthesise these models from both industrial reliability and biomedical fields and then contextually group them into non-parametric and semi-parametric models. Comments on their merits and limitations are also presented. Another main purpose of this paper is to comprehensively review and summarise the current research on the development of the covariate models so as to facilitate the application of more covariate modelling techniques into prognostics and asset health management.
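To make the non-parametric end of that grouping concrete, here is a small sketch (synthetic right-censored failure times, not data from any reviewed study) of the Nelson-Aalen estimator of the cumulative hazard, which assumes no failure distribution; semi-parametric covariate models such as Cox regression then place covariate effects on top of exactly this kind of unspecified baseline hazard.

```python
import numpy as np

def nelson_aalen(durations, events):
    """Non-parametric Nelson-Aalen estimate of the cumulative hazard.

    durations: observed times (failure or censoring)
    events   : 1 if the time is a failure, 0 if right-censored
    Returns the distinct failure times and the cumulative hazard at each.
    """
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    failure_times = np.unique(durations[events == 1])
    cum_hazard, H = [], 0.0
    for t in failure_times:
        at_risk = np.sum(durations >= t)          # assets still operating just before t
        failed = np.sum((durations == t) & (events == 1))
        H += failed / at_risk                     # hazard increment at t
        cum_hazard.append(H)
    return failure_times, np.array(cum_hazard)

# Synthetic right-censored failure data (hours), purely illustrative
durations = [120, 340, 560, 560, 610, 700, 700, 830, 900, 900]
events =    [1,   1,   1,   0,   1,   0,   1,   1,   0,   1]
times, H = nelson_aalen(durations, events)
for t, h in zip(times, H):
    print(f"t = {t:5.0f} h   cumulative hazard = {h:.3f}")
```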
Abstract:
Background: The proportion of older individuals in the driving population is predicted to increase in the next 50 years. This has important implications for driving safety, as abilities which are important for safe driving, such as vision (which accounts for the majority of the sensory input required for driving), processing ability and cognition, have been shown to decline with age. The current methods employed for screening older drivers upon re-licensure are also vision based. This study, which investigated social, behavioural and professional aspects involved with older drivers, aimed to determine: (i) if the current visual standards in place for testing upon re-licensure are effective in reducing the older driver fatality rate in Australia; (ii) if the recommended visual standards are actually implemented as part of the testing procedures by Australian optometrists; and (iii) if there are other non-standardised tests which may be better at predicting the on-road incident risk (including near misses and minor incidents) in older drivers than those tests recommended in the standards. Methods: For the first phase of the study, state-based age- and gender-stratified numbers of older driver fatalities for 2000-2003 were obtained from the Australian Transportation Safety Bureau database. Poisson regression analyses of fatality rates were considered by renewal frequency and jurisdiction (as separate models), adjusting for possible confounding variables of age, gender and year. For the second phase, all practising optometrists in Australia were surveyed on the vision tests they conduct in consultations relating to driving and their knowledge of vision requirements for older drivers. Finally, for the third phase of the study, to investigate determinants of on-road incident risk, a stratified random sample of 600 Brisbane residents aged 60 years and older were selected and invited to participate via an introductory letter explaining the project requirements. In order to capture the number and type of road incidents which occurred for each participant over 12 months (including near misses and minor incidents), an important component of the prospective research study was the development and validation of a driving diary. The diary was a tool in which incidents could be logged at the time they occurred (or very soon afterwards), thereby minimising the recall bias that arises from relying on participant memory over time. Associations of the visual tests, cognition and scores on the non-standard functional tests with retrospective and prospective incident occurrence were investigated. Results: In the first phase, drivers aged 60-69 years had a 33% lower fatality risk (Rate Ratio [RR] = 0.75, 95% CI 0.32-1.77) in states with vision testing upon re-licensure compared with states with no vision testing upon re-licensure; however, because the CI is wide and crosses 1.00, this result should be regarded with caution. However, overall fatality rates and fatality rates for those aged 70 years and older (RR = 1.17, CI 0.64-2.13) did not differ between states with and without license renewal procedures, indicating no apparent benefit in vision testing legislation.
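For illustration, a Poisson rate model of this general form can be sketched with statsmodels on invented jurisdiction-level counts (not the ATSB data); the log of the driver population enters as an exposure offset, and exp(coef) on the vision-testing indicator plays the role of the fatality rate ratio.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200

# Synthetic jurisdiction-year strata (placeholder values only)
df = pd.DataFrame({
    "vision_test": rng.integers(0, 2, n),            # re-licensure vision test?
    "age_group": rng.choice(["60-69", "70+"], n),
    "male": rng.integers(0, 2, n),
    "year": rng.choice([2000, 2001, 2002, 2003], n),
    "drivers": rng.integers(20_000, 200_000, n),     # exposure: licensed drivers
})
true_rate = 4e-5 * np.exp(-0.3 * df.vision_test
                          + 0.5 * (df.age_group == "70+")
                          + 0.2 * df.male)
df["fatalities"] = rng.poisson(true_rate * df.drivers)

model = smf.glm(
    "fatalities ~ vision_test + age_group + male + C(year)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["drivers"]),
).fit()

# exp(coef) on vision_test is the fatality rate ratio for states with testing
print(np.exp(model.params["vision_test"]))
print(model.summary())
```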
For the second phase of the study, nearly all optometrists measured visual acuity (VA) as part of a vision assessment for re-licensing; however, 20% of optometrists did not perform any visual field (VF) testing and only 20% routinely performed automated VF on older drivers, despite the standards for licensing advocating automated VF as part of the vision standard. This demonstrates the need for more effective communication between the policy makers and those responsible for carrying out the standards. It may also indicate that the overall higher driver fatality rate in jurisdictions with vision testing requirements results from the tests recommended by the standards being only partially conducted by optometrists. Hence a standardised protocol for the screening of older drivers for re-licensure across the nation must be established. The opinions of Australian optometrists with regard to the responsibility of reporting older drivers who fail to meet the licensing standards highlighted the conflict between maintaining patient confidentiality and upholding public safety. Mandatory reporting requirements for those drivers who fail to reach the standards necessary for driving would minimise potential conflict between the patient and their practitioner, and help maintain patient trust and goodwill. The final phase of the PhD program investigated the efficacy of vision, functional and cognitive tests to discriminate between at-risk and safe older drivers. Nearly 80% of the participants experienced an incident of some form over the prospective 12 months, with the total incident rate being 4.65/10 000 km. Sixty-three percent reported having a near miss and 28% had a minor incident. The results from the prospective diary study indicate that the current vision screening tests (VA and VF) used for re-licensure do not accurately predict older drivers who are at increased odds of having an on-road incident. However, the variation in visual measurements of the cohort was narrow, which also affected the results seen with the visual function questionnaires. Hence a larger cohort with greater variability should be considered for a future study. A slightly lower cognitive level (as measured with the Mini-Mental State Examination [MMSE]) did show an association with incident involvement, as did slower reaction time (RT); however, the Useful Field of View (UFOV) test provided the most compelling results of the study. Cut-off values of UFOV processing (>23.3 ms), divided attention (>113 ms), selective attention (>258 ms) and overall score (moderate/high/very high risk) were effective in determining older drivers at increased odds of having any on-road incident and of having minor incidents. Discussion: The results have shown that for the 60-69 year age group, there is a potential benefit in testing vision upon licence renewal. However, overall fatality rates and fatality rates for those aged 70 years and older indicated no benefit in vision testing legislation and suggest a need for inclusion of screening tests which better predict on-road incidents. Although VA is routinely performed by Australian optometrists on older drivers renewing their licence, VF is not. Therefore there is a need for a protocol to be developed and administered which would result in standardised methods, conducted throughout the nation, for the screening of older drivers upon re-licensure. Communication between the community, policy makers and those conducting the protocol should be maximised.
Implementing a standardised screening protocol which incorporates a level of mandatory reporting by the practitioner would also resolve the ethical dilemma of breaching patient confidentiality. The tests which should be included in this screening protocol, however, cannot solely be ones which have been implemented in the past. In this investigation, RT, MMSE and UFOV were shown to be better determinants of on-road incidents in older drivers than VA and VF; however, as previously mentioned, there was a lack of variability in visual status within the cohort. Nevertheless, it is the recommendation of this investigation that, subject to appropriate sensitivity and specificity being demonstrated in the future using a cohort with wider variation in vision, functional performance and cognition, these tests of cognition and information processing should be added to the current protocol for the screening of older drivers which may be conducted at licensing centres across the nation.
Abstract:
The over-representation of novice drivers in crashes is alarming. Driver training is one of the interventions aimed at mitigating the number of crashes that involve young drivers. Experienced drivers have better hazard perception ability than inexperienced drivers, and eye gaze patterns have been found to be an indicator of a driver's competency level. The aim of this paper is to develop an in-vehicle system which correlates information about the driver's gaze and vehicle dynamics, which is then used to assist driver trainers in assessing driving competency. The system allows visualization of the complete driving manoeuvre data on interactive maps. It uses an eye tracker and perspective projection algorithms to compute the depth of gaze and plots it on Google Maps. The interactive map also features the trajectory of the vehicle and turn indicator usage. The system allows efficient and user-friendly analysis of the driving task, and it can be used by driver trainers and trainees to understand objectively the risks encountered during driving manoeuvres. This paper presents a prototype that plots the driver's eye gaze depth and direction on an interactive map along with the vehicle dynamics information. The prototype will be used in future work to study the difference in gaze patterns between novice and experienced drivers prior to a given manoeuvre.
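As a rough, hypothetical illustration of how a gaze direction from an eye tracker might be turned into a depth of gaze before being geo-referenced onto the map (the prototype's actual perspective-projection algorithm and calibration are not reproduced here), the sketch below intersects a gaze ray with a flat road plane using a simple pinhole/vehicle geometry and invented values.

```python
import numpy as np

def gaze_depth_on_road(gaze_dir_cam, eye_height_m, cam_pitch_deg):
    """Intersect a gaze ray (camera frame: x right, y down, z forward)
    with a flat road plane to estimate the depth of gaze.

    eye_height_m : height of the eye/tracker origin above the road
    cam_pitch_deg: downward pitch of the tracker relative to horizontal
    Returns the gaze point in level vehicle coordinates and its distance.
    """
    pitch = np.radians(cam_pitch_deg)
    # Rotate the camera-frame ray into a level vehicle frame (pitch about x)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(pitch), np.sin(pitch)],
                    [0.0, -np.sin(pitch), np.cos(pitch)]])
    ray = rot @ np.asarray(gaze_dir_cam, dtype=float)
    ray /= np.linalg.norm(ray)
    if ray[1] <= 1e-6:                 # gaze at or above the horizon
        return None, np.inf
    scale = eye_height_m / ray[1]      # stretch the ray until it meets the road
    point = scale * ray
    return point, float(np.linalg.norm(point))

# Hypothetical sample: driver glancing slightly down and to the right
point, depth = gaze_depth_on_road([0.15, 0.08, 0.98],
                                  eye_height_m=1.2, cam_pitch_deg=2.0)
print(f"gaze meets the road at {point.round(2)} m, depth ~ {depth:.1f} m")
```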