976 results for "Explicit hazard model"


Relevance:

30.00%

Publisher:

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique, and model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those of previously published models. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
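
To make the fitting procedure concrete, the sketch below shows the standard hazard-based survival likelihood used in probabilistic decompression modelling: the probability of DCS on an exposure is 1 - exp(-R), where R is the integrated hazard over the dive profile, and parameters are found by maximising the likelihood over dives with and without observed DCS. The single-compartment hazard, parameter names and data layout are illustrative assumptions, not any of the seventeen variants fitted in the dissertation.

```python
# Minimal sketch: maximum-likelihood fit of a hazard-based survival model for DCS.
# The toy single-compartment hazard and data layout are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

def integrated_risk(exposure, params):
    """Integrate a toy hazard r(t) = max(gain * (p_tissue - p_ambient), 0) over the dive."""
    gain, k = params
    t, p_amb = exposure                      # time grid [min], ambient pressure [atm]
    p_tis = np.empty_like(p_amb)
    p_tis[0] = p_amb[0]
    for i in range(1, len(t)):               # exponential gas uptake/washout compartment
        dt = t[i] - t[i - 1]
        p_tis[i] = p_amb[i] + (p_tis[i - 1] - p_amb[i]) * np.exp(-k * dt)
    hazard = np.maximum(gain * (p_tis - p_amb), 0.0)
    return np.trapz(hazard, t)

def neg_log_likelihood(params, exposures, dcs_observed):
    """Binary-outcome survival likelihood: P(DCS) = 1 - exp(-integrated risk)."""
    nll = 0.0
    for exposure, hit in zip(exposures, dcs_observed):
        p = np.clip(1.0 - np.exp(-integrated_risk(exposure, params)), 1e-12, 1 - 1e-12)
        nll -= np.log(p) if hit else np.log(1.0 - p)
    return nll

# exposures: list of (time, ambient_pressure) arrays; dcs_observed: matching 0/1 outcomes.
# fit = minimize(neg_log_likelihood, x0=[0.05, 0.1], args=(exposures, dcs_observed),
#                method="Nelder-Mead")
```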

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine if predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, in many of our models, risk accumulation begins well before symptoms are reported, we hypothesized that the inclusion of delays might improve the correlation between model predictions and observed data. Model selection techniques identified two of the delay models as having the best overall performance, but both direct comparison with the best-performing no-delay model and model selection against our best previously identified no-delay pharmacokinetic model indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
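
As an illustration of what a discrete inflow delay can look like in a single compartment, a minimal sketch is given below; the forward-Euler integration, the parameter names and the interpolation of the lagged ambient pressure are assumptions for illustration, not the ten models developed here.

```python
# Sketch of a single gas-kinetics compartment whose inflow is driven by the ambient
# pressure that prevailed tau minutes earlier (a fitted discrete delay). Illustrative only.
import numpy as np

def compartment_with_delay(t, p_amb, k, tau):
    """Integrate dP/dt = k * (p_amb(t - tau) - P) with a simple forward Euler step."""
    p = np.empty_like(p_amb)
    p[0] = p_amb[0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        lagged = np.interp(t[i] - tau, t, p_amb, left=p_amb[0])  # delayed environmental input
        p[i] = p[i - 1] + dt * k * (lagged - p[i - 1])
    return p
```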

Our final investigation explored parameter bounding techniques to identify parameter regions for which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of these regions using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization for future investigations.
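
A minimal sketch of the root-bounding idea follows: define a scalar margin that is positive wherever the model assigns non-zero probability to every exposure known to have produced DCS, and bracket the boundary of that region with a one-dimensional root finder. The margin function and tolerance below are hypothetical placeholders (reusing integrated_risk from the sketch above), not the instantaneous-risk metric of the dissertation.

```python
# Sketch: bracket the boundary of a parameter region in which statistical model failure
# (zero predicted DCS probability for an exposure known to produce symptoms) cannot occur.
from scipy.optimize import brentq

def margin(theta, known_hit_exposures, tol=1e-6):
    """Positive inside the safe region: smallest integrated risk over exposures that are
    known to have produced DCS, minus a small tolerance."""
    return min(integrated_risk(e, [theta, 0.1]) for e in known_hit_exposures) - tol

# brentq needs a bracket [lo, hi] on which the margin changes sign; the root is the boundary.
# theta_boundary = brentq(margin, lo, hi, args=(known_hit_exposures,))
```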

Relevance:

30.00%

Publisher:

Abstract:

Tropical cyclones are a continuing threat to life and property. Willoughby (2012) found that a Pareto (power-law) cumulative distribution fitted to the most damaging 10% of US hurricane seasons describes their impacts well. Here, we find that damage follows a Pareto distribution because the assets at hazard follow a Zipf distribution, which can be thought of as a Pareto distribution with exponent 1. The Z-CAT model is an idealized hurricane catastrophe model that represents a coastline where populated places with Zipf-distributed assets are randomly scattered and damaged by virtual hurricanes with sizes and intensities generated through a Monte-Carlo process. The results produce realistic Pareto exponents. The ability of the Z-CAT model to simulate different climate scenarios allowed testing of sensitivities to Maximum Potential Intensity, landfall rates and building structure vulnerability. The Z-CAT model results demonstrate that a statistically significant difference in damage is found only when changes in the parameters create a doubling of damage.
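
The flavour of the Z-CAT experiment can be reproduced with a very small Monte-Carlo sketch: scatter Zipf-distributed assets along an idealized coastline, damage them with randomly sized and located virtual storms, and fit a Pareto exponent to the tail of the simulated seasonal damage. Every constant and distribution below is an illustrative placeholder, not the published calibration.

```python
# Toy Z-CAT-style experiment: Zipf-distributed coastal assets hit by random virtual storms,
# with a Pareto (power-law) exponent fitted to the most damaging seasons. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_places, coast_len, n_seasons = 500, 1000.0, 5000

assets = 1.0 / np.arange(1, n_places + 1)            # Zipf rank-size asset values
positions = rng.uniform(0.0, coast_len, n_places)    # random places along the coastline

season_damage = np.zeros(n_seasons)
for s in range(n_seasons):
    for _ in range(rng.poisson(2.0)):                # landfalls per season
        centre = rng.uniform(0.0, coast_len)         # landfall location
        radius = rng.uniform(20.0, 100.0)            # half-width of the damage footprint
        intensity = rng.pareto(3.0) + 1.0            # storm intensity factor
        hit = np.abs(positions - centre) < radius
        season_damage[s] += 0.01 * intensity * assets[hit].sum()

# Hill estimator of the Pareto tail exponent over the most damaging 10% of seasons.
xmin = np.quantile(season_damage, 0.90)
tail = season_damage[season_damage > xmin]
alpha = len(tail) / np.log(tail / xmin).sum()
print(f"fitted tail exponent ≈ {alpha:.2f}")
```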

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents an account and analysis of published mainland Chinese media coverage surrounding three major events of public protest during the Hu-Wen era (2003-2013). The research makes a qualitative analysis of printed material drawn from a range of news outlets, differentiated by their specific political and commercial affiliations. The goal of the research is to better understand the role of mainstream media in social conflict resolution, a hitherto under-studied area, and to identify gradations within the ostensibly monolithic mainland Chinese media on issues of political sensitivity. China’s modern media formation displays certain characteristics of Anglophone media at its hyper-commercialised, populist core. However, the Chinese state retains an explicit, though often ambiguous, remit to engage with news production. Because of this, Chinese newspapers are often assumed to be one-dimensional propaganda ‘tools’ and, accordingly, easily dismissed from analyses of public protest. This research finds that, in an area where political actors have rescinded their monopoly on communicative power, a result of both policy decisions and the rise of Internet-based media platforms, established purveyors of news have acquired greater latitude to report on hitherto sensitive episodes of conflict but do so under the burden of having to correctly guide public opinion. The thesis examines the discursive resources that are deployed in this task, as well as reporting patterns which are suggestive of a new propaganda approach to handling social conflict within public media. Besides the explicitly political nature of coverage of protest events, the study sheds light on gradations within China’s complex, hybrid media landscape both in terms of institutional purpose and qualitative performance.

Relevance:

30.00%

Publisher:

Abstract:

The section of CN railway between Vancouver and Kamloops runs along the base of many hazardous slopes, including the White Canyon, which is located just outside the town of Lytton, BC. The slope has a history of frequent rockfall activity, which presents a hazard to the railway below. Rockfall inventories can be used to understand the frequency-magnitude relationship of events on hazardous slopes; however, it can be difficult to consistently and accurately identify rockfall source zones and volumes on large slopes with frequent activity, leaving many inventories incomplete. We have studied this slope as a part of the Canadian Railway Ground Hazard Research Program and have collected remote sensing data, including terrestrial laser scanning (TLS), photographs, and photogrammetry data, since 2012, and used change detection to identify rockfalls on the slope. The objective of this thesis is to use a subset of these data to examine how rockfalls identified from TLS data can be used to characterize the frequency-magnitude relationship of rockfalls on the slope. This includes incorporating both new and existing methods to develop a semi-automated workflow to extract rockfall events from the TLS data. We show that these methods can be used to identify events as small as 0.01 m³ and that the duration between scans can have an effect on the frequency-magnitude relationship of the rockfalls. We also show that by incorporating photogrammetry data into our analysis, we can create a 3D geological model of the slope and use it to classify rockfalls by lithology, to further understand the rockfall failure patterns. When relating the rockfall activity to triggering factors, we found that the amount of precipitation occurring over the winter has an effect on the overall rockfall frequency for the remainder of the year. These results can provide the railways with a more complete inventory of events than records created through track inspection or rockfall monitoring systems installed on the slope. In addition, we can use the database to understand the spatial and temporal distribution of events. The results can also be used as an input to rockfall modelling programs.
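
For context, the frequency-magnitude relationship referred to above is usually summarised as a power law between rockfall volume and the cumulative annual number of events at or above that volume. The short sketch below fits such a relationship to an inventory of volumes; the synthetic volumes, the detection cutoff and the observation period are placeholders, not the White Canyon data.

```python
# Sketch: power-law frequency-magnitude fit for a rockfall volume inventory.
# The synthetic volumes stand in for events extracted from TLS change detection.
import numpy as np

rng = np.random.default_rng(1)
volumes = (rng.pareto(0.7, 2000) + 1.0) * 0.01   # volumes [m^3], 0.01 m^3 detection cutoff
years = 4.0                                      # assumed observation period [yr]

# Cumulative frequency-magnitude curve: number of events per year with volume >= V.
v = np.sort(volumes)
cum_freq = np.arange(len(v), 0, -1) / years

# Fit log10(N >= V) = a + b*log10(V); the slope b is the power-law exponent.
b, a = np.polyfit(np.log10(v), np.log10(cum_freq), 1)
print(f"frequency-magnitude exponent b ≈ {b:.2f}")
```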

Relevance:

30.00%

Publisher:

Abstract:

An investigation into karst hazard in southern Ontario has been undertaken with the intention of leading to the development of predictive karst models for this region. These are not currently feasible because of a lack of sufficient karst data, though this is not entirely due to a lack of karst features. Geophysical data were collected at Lake on the Mountain, Ontario, as part of this karst investigation, in order to test the long-standing hypothesis that Lake on the Mountain was formed by a sinkhole collapse. Sub-bottom acoustic profiling data were collected in order to image the lake-bottom sediments and bedrock. Vertical bedrock features interpreted as solutionally enlarged fractures were taken as evidence for karst processes on the lake bottom. Additionally, the bedrock topography shows a narrower and more elongated basin than was previously identified, lying parallel to a mapped fault system in the area. This suggests that Lake on the Mountain formed over a fault zone, which also supports the sinkhole hypothesis, as it would provide groundwater pathways for karst dissolution to occur. Previous sediment cores suggest that Lake on the Mountain would have formed at some point during the Wisconsinan glaciation, with glacial meltwater and glacial loading as potential contributing factors to sinkhole development. A probabilistic karst model for the state of Kentucky, USA, has been generated using the Weights of Evidence method. This model is presented as an example of the predictive capabilities of this kind of data-driven modelling technique and to show how such models could be applied to karst in Ontario. The model was able to classify 70% of the validation dataset correctly while minimizing false positive identifications. This is moderately successful and could be improved. Finally, improvements to the current karst model of southern Ontario are suggested, with the goal of increasing investigation into karst in Ontario and streamlining the reporting system for sinkholes, caves, and other karst features so as to improve the current Ontario karst database.
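
For readers unfamiliar with the Weights of Evidence method, its core step is a simple Bayesian contrast computed per evidence layer: the positive and negative weights compare how often the evidence is present in cells with and without a known karst feature, and their difference (the contrast) measures the strength of the spatial association. The sketch below uses synthetic cell counts for a single hypothetical layer.

```python
# Sketch of the core Weights-of-Evidence calculation for one binary evidence layer
# (e.g. "within 500 m of a mapped fault") against known karst feature locations.
# The contingency counts are synthetic placeholders.
import numpy as np

# Grid-cell counts: evidence present/absent crossed with karst feature present/absent.
n_B_D, n_B_noD = 40, 960          # evidence present: with / without a karst feature
n_noB_D, n_noB_noD = 10, 8990     # evidence absent:  with / without a karst feature

p_B_D = n_B_D / (n_B_D + n_noB_D)              # P(B | D)
p_B_noD = n_B_noD / (n_B_noD + n_noB_noD)      # P(B | ~D)
w_plus = np.log(p_B_D / p_B_noD)               # weight where the evidence is present
w_minus = np.log((1 - p_B_D) / (1 - p_B_noD))  # weight where the evidence is absent
contrast = w_plus - w_minus                    # strength of spatial association
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, C = {contrast:.2f}")
```

Under the usual conditional-independence assumption, the posterior log-odds of a karst feature in a cell are obtained by adding the appropriate weight from each evidence layer to the prior log-odds.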

Relevance:

30.00%

Publisher:

Abstract:

The Pico de Navas landslide was a large-magnitude rotational movement affecting 50×10⁶ m³ of hard to soft rocks. The objectives of this study were: (1) to characterize the landslide in terms of geology, geomorphological features and geotechnical parameters; and (2) to obtain an adequate geomechanical model to comprehensively explain its rupture, considering topographic, hydro-geological and geomechanical conditions. The rupture surface crossed, from top to bottom: (a) more than 200 m of limestone and clay units of the Upper Cretaceous, affected by faults; and (b) the Albian unit of Utrillas facies, composed of silty sand with clay (kaolinite), of the Lower Cretaceous. This sand played an important role in the basal failure of the slide due to the influence of fine particles (silt and clay), which comprised on average more than 70% of the sand, and the high kaolinite content (>40%) in some beds. Its geotechnical parameters are: unit weight (δ) = 19-23 kN/m³; friction angle (φ) = 13°-38°; and cohesion (c) = 10-48 kN/m². Its microstructure consists of accumulations of kaolinite crystals stuck to terrigenous grains, forming clayey peds. We hypothesize that the presence of these aggregates was the internal cause of fluidification of this layer once wet. Besides the faulted structure of the massif, other conditioning factors of the movement were: the large load of the upper limestone layers; high water table levels; high pore water pressure; and the loss of strength due to wet conditions. The 3D simulation of the stability conditions concurs with our hypothesis. The landslide occurred in the Recent or Middle Holocene, certainly before at least 500 BC and possibly during a wet climate period. Today, it appears to be inactive. This study helps to understand the frequent slope instabilities all along the Iberian Range where the Utrillas facies is present.
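
The study's stability analysis was a full 3D simulation; purely to illustrate how the reported Mohr-Coulomb parameters and wet conditions interact, the sketch below evaluates a one-dimensional infinite-slope factor of safety. The slope geometry, failure depth and water level are assumed values, and the calculation is not the geomechanical model of the paper.

```python
# Infinite-slope Mohr-Coulomb factor-of-safety sketch (not the study's 3D analysis),
# using mid-range Utrillas sand parameters from the abstract; geometry is assumed.
import numpy as np

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """c [kN/m^2], phi [deg], unit weight gamma [kN/m^3], failure depth z [m],
    slope angle beta [deg], m = saturated fraction of the failure depth."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    tau = gamma * z * np.sin(beta) * np.cos(beta)                # driving shear stress
    sigma_n_eff = (gamma - m * gamma_w) * z * np.cos(beta) ** 2  # effective normal stress
    return (c + sigma_n_eff * np.tan(phi)) / tau

# Dry versus nearly saturated basal sand:
print(factor_of_safety(c=30, phi_deg=25, gamma=21, z=40, beta_deg=20, m=0.0))
print(factor_of_safety(c=30, phi_deg=25, gamma=21, z=40, beta_deg=20, m=0.9))
```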

Relevance:

30.00%

Publisher:

Abstract:

The category of rational SO(2)-equivariant spectra admits an algebraic model. That is, there is an abelian category A(SO(2)) whose derived category is equivalent to the homotopy category of rational SO(2)-equivariant spectra. An important question is: does this algebraic model capture the smash product of spectra? The category A(SO(2)) is known as Greenlees' standard model; it is an abelian category that has no projective objects and is constructed from modules over a non-Noetherian ring. As a consequence, the standard techniques for constructing a monoidal model structure cannot be applied. In this paper a monoidal model structure on A(SO(2)) is constructed, and the derived tensor product on the homotopy category is shown to be compatible with the smash product of spectra. The method used is related to techniques developed by the author in earlier joint work with Roitzheim; that work constructed a monoidal model structure on Franke's exotic model for the K_(p)-local stable homotopy category. A monoidal Quillen equivalence to a simpler monoidal model category that has explicit generating sets is also given. Having monoidal model structures on the two categories removes a serious obstruction to constructing a series of monoidal Quillen equivalences between the algebraic model and rational SO(2)-equivariant spectra.
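
In symbols, and with generic notation chosen here for illustration rather than taken from the paper, the situation is:

```latex
\[
D\big(\mathcal{A}(SO(2))\big) \;\simeq\; \operatorname{Ho}\big(\mathrm{Sp}^{SO(2)}_{\mathbb{Q}}\big),
\qquad\text{with}\qquad
M \otimes^{\mathbf{L}} N \;\longleftrightarrow\; X \wedge Y ,
\]
```

where the displayed equivalence is the algebraic model of the first sentence and the correspondence on the right is the compatibility of the derived tensor product with the smash product established in the paper.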

Relevance:

30.00%

Publisher:

Abstract:

The erosion processes resulting from the flow of fluids (gas-solid or liquid-solid) are encountered in nature and in many industrial processes. The common feature of these erosion processes is the interaction of the fluid (particle) with its boundary, resulting in the loss of material from the surface. This type of erosion is detrimental to the equipment used in pneumatic conveying systems. The puncture of pneumatic conveyor bends in industry causes several problems, some of which are: (1) escape of the conveyed product, causing health and dust hazards; (2) repairing and cleaning up after punctures necessitates shutting down conveyors, which affects the operation of the plant, thus reducing profitability. The most common process failure in pneumatic conveying systems occurs when pipe sections at the bends wear away and puncture. The reason for this is that particles of varying speed, shape, size and material properties strike the bend wall with greater intensity than in straight sections of the pipe. Currently available models for predicting the lifetime of bends are inaccurate (over-predicting by 80%). The provision of an accurate predictive method would lead to improvements in the structure of the planned maintenance programmes of processes, thus reducing unplanned shutdowns and ultimately the downtime costs associated with these unplanned shutdowns. This is the main motivation behind the current research. The paper reports on two aspects of the first phase of the study undertaken for the current project. These are: (1) development and implementation, and (2) testing of the modelling environment. The model framework encompasses Computational Fluid Dynamics (CFD) related engineering tools, based on Eulerian (gas) and Lagrangian (particle) approaches to represent the two distinct conveyed phases, to predict the lifetime of conveyor bends. The method attempts to account for the effect of erosion on the pipe wall via particle impacts, taking into account the angle of attack, impact velocity, shape/size and material properties of the wall and conveyed material, within a CFD framework. Only a handful of researchers use CFD as the basis for predicting particle motion; see for example [1-4]. It is hoped that this will lead to more realistic predictions of the wear profile. Results for two three-dimensional test cases using the commercially available CFD code PHOENICS are presented. These are reported in relation to the impact intensity and sensitivity to the inlet particle distributions.
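
To indicate where such a wear model sits inside the Eulerian-Lagrangian framework, the sketch below evaluates a generic per-impact erosion correlation of the form E = K·v^n·f(α) for particle-wall collisions extracted from the Lagrangian tracks. The constants K and n and the angle function are generic placeholders, not the correlation used in this work.

```python
# Generic per-impact erosion correlation E = K * v**n * f(alpha): the kind of wall-wear
# source term evaluated for each Lagrangian particle-wall collision in the CFD framework.
# K, n and the angle function are placeholders, not the model used in the paper.
import numpy as np

def erosion_per_impact(v, alpha_deg, K=2.0e-9, n=2.4):
    """Wall material removed per unit particle mass for one impact:
    v = impact speed [m/s], alpha = impact angle measured from the wall [deg]."""
    a = np.radians(alpha_deg)
    f_alpha = np.sin(2.0 * a)        # placeholder angle dependence, peaking near 45 degrees
    return K * v**n * np.clip(f_alpha, 0.0, None)

# Accumulate predicted wear over impacts sampled from the computed particle trajectories.
impact_speeds = np.array([18.0, 22.0, 25.0])    # [m/s]
impact_angles = np.array([15.0, 30.0, 60.0])    # [deg]
print(erosion_per_impact(impact_speeds, impact_angles).sum())
```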

Relevance:

30.00%

Publisher:

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
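
Because the network structure is fixed and the rule choices are fully observed, the 'counting' step above can be written down directly. The sketch below uses a simplified chain-structured network (each rule choice conditioned only on the previous one) learned by counting over a population of promising rule strings, and then samples new strings from it; it is an illustrative stand-in, not the BOA of [3] nor the algorithm proposed here.

```python
# "Learning amounts to counting" for a fixed, fully observed chain-structured network over
# rule choices: estimate P(rule_i | rule_{i-1}) from promising rule strings, then sample
# new rule strings node by node. A simplified illustrative stand-in for BOA [3].
import numpy as np

def learn_chain(rule_strings, n_rules, alpha=1.0):
    """rule_strings: (population, steps) array of rule indices. Returns the first-step
    marginal and per-step conditional tables, with Laplace smoothing alpha."""
    pop, steps = rule_strings.shape
    first = np.bincount(rule_strings[:, 0], minlength=n_rules) + alpha
    first = first / first.sum()
    cond = np.full((steps - 1, n_rules, n_rules), alpha)
    for s in range(1, steps):
        for prev, cur in zip(rule_strings[:, s - 1], rule_strings[:, s]):
            cond[s - 1, prev, cur] += 1.0
    cond /= cond.sum(axis=2, keepdims=True)
    return first, cond

def sample_chain(first, cond, rng):
    """Generate one new rule string from the learned distributions."""
    steps = cond.shape[0] + 1
    string = np.empty(steps, dtype=int)
    string[0] = rng.choice(len(first), p=first)
    for s in range(1, steps):
        string[s] = rng.choice(cond.shape[2], p=cond[s - 1, string[s - 1]])
    return string

rng = np.random.default_rng(0)
promising = rng.integers(0, 4, size=(30, 10))    # placeholder set of promising rule strings
first, cond = learn_chain(promising, n_rules=4)
print(sample_chain(first, cond, rng))
```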
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the roulette wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.

Relevance:

30.00%

Publisher:

Abstract:

We present a bidomain fire-diffuse-fire model that facilitates mathematical analysis of propagating waves of elevated intracellular calcium (Ca) in living cells. Modelling Ca release as a threshold process allows the explicit construction of travelling wave solutions to probe the dependence of Ca wave speed on physiologically important parameters such as the threshold for Ca release from the endoplasmic reticulum (ER) to the cytosol, the rate of Ca resequestration from the cytosol to the ER, and the total [Ca] (cytosolic plus ER). Interestingly, linear stability analysis of the bidomain fire-diffuse-fire model predicts the onset of dynamic wave instabilities leading to the emergence of Ca waves that propagate in a back-and-forth manner. Numerical simulations are used to confirm the presence of these so-called "tango waves" and the dependence of Ca wave speed on the total [Ca]. The original publication is available at www.springerlink.com (Journal of Mathematical Biology)
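
For orientation, the classical single-domain fire-diffuse-fire model on which the bidomain formulation builds can be written as below; this is the generic textbook form, with parameters matching those discussed above (release threshold, resequestration rate), and not the specific bidomain equations of the paper.

```latex
\[
\frac{\partial c}{\partial t}
  = D\,\frac{\partial^2 c}{\partial x^2}
  - \frac{c}{\tau_d}
  + \frac{\sigma}{\tau_R}\sum_n \delta(x - x_n)\,
    \Theta(t - t_n)\,\Theta(t_n + \tau_R - t),
\qquad
t_n = \inf\{\,t : c(x_n, t) \ge c_{\mathrm{th}}\,\},
\]
```

where $D$ is the cytosolic Ca diffusion coefficient, $\tau_d$ the resequestration time into the ER, $c_{\mathrm{th}}$ the release threshold, and each release site $x_n$ liberates an amount $\sigma$ of Ca over a duration $\tau_R$ once the threshold is first crossed.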

Relevance:

30.00%

Publisher:

Abstract:

We propose a scheme in which the masses of the heavier leptons obey seesaw-type relations. The light lepton masses, except those of the electron and the electron neutrino, are generated by one-loop radiative corrections. We work in a version of the 3-3-1 electroweak model that predicts singlets (charged and neutral) of heavy leptons beyond the known ones. An extra U(1)_Ω symmetry is introduced in order to prevent the light leptons from acquiring masses at tree level. The electron mass induces an explicit breaking of the U(1)_Ω symmetry. We also discuss the mixing matrix among the four neutrinos. The new energy scale required is not higher than a few TeV.
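
As a reminder of the generic (type I) seesaw structure referred to above, and not of the specific 3-3-1 mass matrices of this work: a Dirac mass $m_D$ mixing a light lepton with a heavy singlet of mass $M \gg m_D$ gives

```latex
\[
\mathcal{M} =
\begin{pmatrix}
0 & m_D \\
m_D & M
\end{pmatrix},
\qquad
m_{\text{light}} \simeq \frac{m_D^2}{M},
\qquad
m_{\text{heavy}} \simeq M .
\]
```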

Relevance:

30.00%

Publisher:

Abstract:

This dissertation provides a novel theory of securitization based on intermediaries minimizing the moral hazard that insiders can misuse assets held on-balance sheet. The model predicts how intermediaries finance different assets. Under deposit funding, the moral hazard is greatest for low-risk assets that yield sizable returns in bad states of nature; under securitization, it is greatest for high-risk assets that require high guarantees and large reserves. Intermediaries thus securitize low-risk assets. In an extension, I identify a novel channel through which government bailouts exacerbate the moral hazard and reduce total investment irrespective of the funding mode. This adverse effect is stronger under deposit funding, implying that intermediaries finance more risky assets off-balance sheet. The dissertation discusses the implications of different forms of guarantees. With explicit guarantees, banks securitize assets with either low information-intensity or low risk. By contrast, with implicit guarantees, banks only securitize assets with high information-intensity and low risk. Two extensions to the benchmark static and dynamic models are discussed. First, an extension to the static model studies the optimality of tranching versus securitization with guarantees. Tranching eliminates agency costs but worsens adverse selection, while securitization with guarantees does the opposite. When the quality of underlying assets in a certain security market is sufficiently heterogeneous, and when the highest quality assets are perceived to be sufficiently safe, securitization with guarantees dominates tranching. Second, in an extension to the dynamic setting, the moral hazard of misusing assets held on-balance sheet naturally gives rise to the moral hazard of weak ex-post monitoring in securitization. The use of guarantees reduces the dependence of banks' ex-post payoffs on monitoring efforts, thereby weakening monitoring incentives. The incentive to monitor under securitization with implicit guarantees is the weakest among all funding modes, as implicit guarantees allow banks to renege on their monitoring promises without being declared bankrupt and punished.

Relevance:

30.00%

Publisher:

Abstract:

We consider a spectrally-negative Markov additive process as a model of a risk process in a random environment. Following recent interest in alternative ruin concepts, we assume that ruin occurs when an independent Poissonian observer sees the process as negative, where the observation rate may depend on the state of the environment. Using an approximation argument and spectral theory, we establish an explicit formula for the resulting survival probabilities in this general setting. We also discuss an efficient evaluation of the involved quantities and provide a numerical illustration.
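
Although the paper's result is an explicit formula for a Markov additive process, the ruin concept itself is easy to illustrate by simulation in the single-environment (Cramér-Lundberg) special case: ruin is declared only if an independent Poissonian observer finds the surplus negative. All rates and the exponential claim law below are placeholders, not the paper's setting.

```python
# Illustrative single-environment special case: estimate the probability that a Poissonian
# observer ever sees a Cramer-Lundberg surplus process below zero before a finite horizon.
# All parameters and the exponential claim distribution are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def observed_ruin(x0=10.0, premium=1.5, claim_rate=1.0, claim_mean=1.0,
                  obs_rate=0.5, horizon=100.0):
    """One sample path of x0 + premium*t - sum of claims, checked only at Poisson
    observation epochs of rate obs_rate; returns True if an observation finds it negative."""
    t, x = 0.0, x0
    while t < horizon:
        dt_claim = rng.exponential(1.0 / claim_rate)   # time to next claim
        dt_obs = rng.exponential(1.0 / obs_rate)       # time to next observation
        dt = min(dt_claim, dt_obs)
        t += dt
        x += premium * dt                              # premiums accrue continuously
        if dt_claim <= dt_obs:
            x -= rng.exponential(claim_mean)           # a claim arrives
        elif x < 0.0:
            return True                                # the observer sees a negative surplus
    return False

ruin_prob = np.mean([observed_ruin() for _ in range(5000)])
print(f"estimated observed-ruin probability ≈ {ruin_prob:.3f}")
```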

Relevance:

30.00%

Publisher:

Abstract:

Master's in Forest Engineering and Natural Resources - Instituto Superior de Agronomia - UL