976 results for "Explicit hazard model"


Relevance: 30.00%

Abstract:

Objective: Our aim was to determine if insomnia severity, dysfunctional beliefs about sleep, and depression predicted sleep-related safety behaviors. Method: Standard sleep-related measures (such as the Insomnia Severity Index; the Dysfunctional Beliefs About Sleep scale; the Depression, Anxiety, and Stress Scale; and the Sleep-Related Behaviors Questionnaire) were administered. Additionally, 14 days of sleep diary (Pittsburgh Sleep Diary) data and actual use of sleep-related behaviors were collected. Results: Regression analysis revealed that dysfunctional beliefs about sleep predicted sleep-related safety behaviors, whereas insomnia severity did not. Depression accounted for the greatest amount of unique variance in the prediction of safety behaviors, followed by dysfunctional beliefs. Exploratory analysis revealed that participants with higher levels of depression used more sleep-related behaviors and reported greater dysfunctional beliefs about their sleep. Conclusion: The findings underscore the significant influence that dysfunctional beliefs have on individuals' behaviors. Moreover, the results suggest that depression may need to be considered as an explicit component of cognitive-behavioral models of insomnia. (c) 2006 Elsevier Inc. All rights reserved.
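
A minimal sketch of the kind of regression analysis described above, assuming hypothetical column names and a hypothetical data file (the study's actual data and software are not shown here):

```python
# Sketch (not the authors' code): predict sleep-related safety behaviors
# (SRBQ) from insomnia severity (ISI), dysfunctional beliefs (DBAS), and
# depression (DASS). File and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sleep_study.csv")  # hypothetical dataset

predictors = ["ISI", "DBAS", "DASS_depression"]
full = smf.ols("SRBQ ~ " + " + ".join(predictors), data=df).fit()
print(full.summary())

# Unique variance of each predictor: the drop in R-squared when that
# predictor is removed from the full model (squared semipartial correlation).
for term in predictors:
    rest = [t for t in predictors if t != term]
    reduced = smf.ols("SRBQ ~ " + " + ".join(rest), data=df).fit()
    print(term, "unique R2:", round(full.rsquared - reduced.rsquared, 4))
```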

Relevance: 30.00%

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
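
The "weighted average" claim can be made concrete with the resolution matrix of a Tikhonov-regularized linear(ized) inverse problem; the following sketch uses an arbitrary synthetic Jacobian rather than anything from the paper:

```python
# For d = J m + noise solved with Tikhonov regularization, the estimate is
# m_est = (J^T J + lam^2 I)^(-1) J^T d, so with noise-free data
# m_est = R m_true, where R = (J^T J + lam^2 I)^(-1) J^T J is the
# resolution matrix. J and lam below are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 20, 50                 # underdetermined: fewer data than parameters
J = rng.normal(size=(n_obs, n_par))   # sensitivity (Jacobian) matrix
lam = 1.0                             # regularization weight

R = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_par), J.T @ J)

# Each row of R holds the averaging weights for one estimated parameter:
# broad rows mean the estimate blends the true field over a wide area,
# i.e. loss of resolution.
print("row 0 weights:", np.round(R[0, :5], 3), "row sum:", round(R[0].sum(), 3))
```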

Relevance: 30.00%

Abstract:

We propose a simple model that captures the salient properties of distribution networks, and study the possible occurrence of blackouts, i.e., sudden failures of large portions of such networks. The model is defined on a random graph of finite connectivity. The nodes of the graph represent hubs of the network, while the edges represent the links of the distribution network. Both the nodes and the edges carry dynamical two-state variables representing the functioning or dysfunctional state of the node or link in question. We describe a dynamical process in which the breakdown of a link or node is triggered when the level of maintenance it receives falls below a given threshold. If levels of maintenance are themselves dependent on the functioning of the net, this form of dynamics can lead to catastrophic breakdown once maintenance levels locally fall below a critical threshold due to fluctuations. We formulate conditions under which such systems can be analyzed in terms of thermodynamic equilibrium techniques, and under these conditions derive a phase diagram characterizing the collective behavior of the system, given its model parameters. The phase diagram is confirmed qualitatively and quantitatively by simulations on explicit realizations of the graph, thus confirming the validity of our approach. © 2007 The American Physical Society.
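
A toy sketch of this kind of maintenance-threshold cascade on a random graph, simplified to node states only and with illustrative parameter values rather than the paper's:

```python
# Nodes fail when the fraction of functioning neighbours supplying them
# with "maintenance" drops below a threshold; failures can then cascade.
import random
import networkx as nx

G = nx.erdos_renyi_graph(n=1000, p=3 / 1000, seed=1)  # mean degree ~ 3
state = {v: 1 for v in G}             # 1 = functioning, 0 = broken down
for v in random.sample(list(G), 20):  # small initial perturbation
    state[v] = 0

theta = 0.5  # maintenance threshold: fraction of working neighbours needed
changed = True
while changed:
    changed = False
    for v in G:
        nbrs = list(G[v])
        if state[v] == 1 and nbrs:
            support = sum(state[u] for u in nbrs) / len(nbrs)
            if support < theta:       # maintenance fell below threshold
                state[v] = 0
                changed = True

print("surviving fraction:", sum(state.values()) / G.number_of_nodes())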

Relevance: 30.00%

Abstract:

This thesis presents a new approach to designing large organizational databases. The approach emphasizes the need for a holistic view of the design process. The development of the proposed approach was based on a comprehensive examination of the issues relevant to the design and utilization of databases, including conceptual modelling, organization theory, and semantic theory. The conceptual modelling approach presented in this thesis is developed over three design stages, or model perspectives. In the semantic perspective, concept definitions are developed based on established semantic principles. Such definitions rely on meaning - provided by intension and extension - to determine intrinsic conceptual definitions. A tool, called meaning-based classification (MBC), is devised to classify concepts based on meaning. Concept classes are then integrated using concept definitions and a set of semantic relations which rely on concept content and form. In the application perspective, relationships are semantically defined according to the application environment. Relationship definitions include explicit relationship properties and constraints. The organization perspective introduces a new set of relations specifically developed to keep conceptual abstractions in conformity with the nature of the information abstractions implied by user requirements throughout the organization. Such relations are based on the stratification of work hierarchies, defined elsewhere in the thesis. Finally, an example application of the proposed approach is presented to illustrate the applicability and practicality of the modelling approach.
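
A speculative sketch of how concept definitions built on intension and extension might be represented; the names and the subsumption test are our illustration, not the thesis's actual MBC tool:

```python
# Each concept carries an intension (its defining properties, i.e. meaning)
# and an extension (its known instances); concepts whose intensions subsume
# one another can be grouped into classes.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    intension: frozenset                           # defining properties
    extension: set = field(default_factory=set)    # known instances

    def subsumes(self, other: "Concept") -> bool:
        # A more general concept has a smaller set of defining properties.
        return self.intension <= other.intension

employee = Concept("Employee", frozenset({"person", "employed"}))
manager = Concept("Manager", frozenset({"person", "employed", "supervises"}))
print(employee.subsumes(manager))  # True: Manager is subsumed by Employee
```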

Relevance: 30.00%

Abstract:

Safety enforcement practitioners within Europe, as well as marketers, designers and manufacturers of consumer products, need to determine compliance with the legal test of "reasonable safety" for consumer goods, to reduce the "risks" of injury to a minimum. To enable freedom of movement of products, a method for safety appraisal is required that can serve as an "expert" system of hazard analysis, usable by non-experts in safety testing, and applicable to consumer goods consistently throughout Europe. Safety-testing approaches and the concepts of risk assessment and hazard analysis are reviewed in developing a model for appraising consumer product safety which seeks to integrate the human-factors contributions of risk assessment, hazard perception, and information processing. The model develops a system of hazard identification, hazard analysis and risk assessment which can be applied to a wide range of consumer products through a series of systematic checklists and matrices, and applies alternative numerical and graphical methods for calculating a final product-safety risk-assessment score. It is then applied in its pilot form by selected "volunteer" Trading Standards Departments to a sample of consumer products. A series of questionnaires is used to select the participating Trading Standards Departments, to explore the contribution of potential subjective influences, and to establish views regarding the usability and reliability of the model and any preferences for the risk-assessment scoring system used. The outcomes of the two-stage hazard analysis and risk assessment process are examined to determine the consistency of the hazard-analysis results and of the final decisions regarding the safety of the sample products, and to determine any correlation between the decisions made using the model and the alternative scoring methods of risk assessment. The research also identifies a number of opportunities for future work, and indicates a number of areas where further work has already begun.
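
The thesis's actual checklists, scales and weightings are not reproduced here, but a generic severity-by-likelihood scoring matrix of the sort such models use can be sketched as follows (all values and verdict bands are illustrative only):

```python
# Generic numerical risk-assessment scoring: score = severity x likelihood,
# mapped to an acceptability verdict. Scales and thresholds are made up.
SEVERITY = {"minor": 1, "moderate": 2, "serious": 3, "fatal": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

def risk_score(severity: str, likelihood: str) -> int:
    """Higher score means a less acceptable product risk."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def verdict(score: int) -> str:
    if score >= 9:
        return "unacceptable - product not reasonably safe"
    if score >= 4:
        return "tolerable only with risk reduction"
    return "broadly acceptable"

s = risk_score("serious", "possible")
print(s, "->", verdict(s))  # 6 -> tolerable only with risk reduction
```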

Relevance: 30.00%

Abstract:

Molecular dynamics (MD) has been used to identify the relative distribution of dysprosium in the phosphate glass DyAl0.30P3.05O9.62. The MD model has been compared directly with experimental data obtained from neutron diffraction to enable a detailed comparison beyond the total structure factor level. The MD simulation gives Dy···Dy correlations at 3.80(5) and 6.40(5) angstrom with relative coordination numbers of 0.8(1) and 7.3(5), thus providing evidence of minority rare-earth clustering within these glasses. The nearest-neighbour Dy-O peak occurs at 2.30 angstrom, with each Dy atom having on average 5.8 nearest-neighbour oxygen atoms. The MD simulation is consistent with the phosphate network model based on interlinked PO4 tetrahedra, in which the addition of the network modifier Dy3+ depolymerizes the phosphate network through the breakage of P-(O)-P bonds whilst leaving the tetrahedral units intact. The role of aluminium within the network has been taken into explicit account, and Al is found to be predominantly (78%) tetrahedrally coordinated. In fact, all four Al bonds are found to be to P (via an oxygen atom), with negligible amounts of Al-O-Dy bonds present. This provides an important insight into the role of Al additives in improving the mechanical properties of these glasses.
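
Coordination numbers such as the Dy-O value quoted above are conventionally obtained by integrating the partial radial distribution function up to its first minimum; here is a sketch with a synthetic g(r) standing in for the paper's data:

```python
# Coordination number from a partial RDF:
#   n = 4 * pi * rho * integral_0^rmin g(r) r^2 dr
import numpy as np

rho_O = 0.04                  # illustrative oxygen number density (atoms/A^3)
r = np.linspace(0.01, 5.0, 500)
g = 6.0 * np.exp(-((r - 2.30) / 0.15) ** 2)   # fake first peak at 2.30 A

r_min = 3.0                   # cutoff at the first minimum of g(r)
mask = r <= r_min
n_coord = 4 * np.pi * rho_O * np.trapz(g[mask] * r[mask] ** 2, r[mask])
print(f"Dy-O coordination number ~ {n_coord:.1f}")
```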

Relevance: 30.00%

Abstract:

Batch-mode reverse osmosis (batch-RO) operation is considered a promising desalination method due to its low energy requirement compared to other RO system arrangements. To improve and predict batch-RO performance, studies on concentration polarization (CP) are carried out. The Kimura-Sourirajan mass-transfer model is applied and validated by experimentation with two different spiral-wound RO elements. Explicit analytical Sherwood correlations are derived based on the experimental results. For batch-RO operation, a new genetic-algorithm method is developed to estimate the Sherwood correlation parameters, taking into account the effects of variation in the operating parameters. Analytical procedures are presented, and mass-transfer coefficient models are then developed for the different operation processes, i.e., batch and continuous RO. The CP-related energy loss in batch-RO operation is quantified based on the resulting relationship between feed flow rates and mass-transfer coefficients. It is found that CP increases energy consumption in batch-RO by about 25% compared to the ideal case in which CP is absent. For the continuous RO process, the derived Sherwood correlation predicted CP accurately. In addition, we determined the optimum feed flow rate of our batch-RO system.
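
A sketch of the standard film-theory relationships involved, with SciPy's differential evolution (an evolutionary optimizer) standing in for the authors' genetic algorithm and synthetic data in place of their measurements; the correlation form and all numerical values are assumptions:

```python
# Assumed Sherwood correlation Sh = a * Re^b * Sc^(1/3); mass-transfer
# coefficient k = Sh * D / dh; film-model CP factor CP = exp(Jw / k).
import numpy as np
from scipy.optimize import differential_evolution

D, dh, Sc = 1.5e-9, 1.0e-3, 600.0   # diffusivity, hydraulic diameter, Schmidt no.
Jw = 1.0e-5                          # permeate flux (m/s)

def cp_model(params, Re):
    a, b = params
    Sh = a * Re**b * Sc ** (1 / 3)
    k = Sh * D / dh                  # mass-transfer coefficient (m/s)
    return np.exp(Jw / k)            # concentration-polarization factor

# Synthetic "measurements" generated with a=0.065, b=0.875 plus noise.
rng = np.random.default_rng(3)
Re = rng.uniform(100, 1000, 40)
cp_obs = cp_model((0.065, 0.875), Re) * rng.normal(1, 0.02, 40)

res = differential_evolution(
    lambda p: np.sum((cp_model(p, Re) - cp_obs) ** 2),
    bounds=[(0.01, 0.5), (0.3, 1.2)], seed=3)
print("fitted a, b:", np.round(res.x, 4))
```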

Relevance: 30.00%

Abstract:

Structural monitoring and dynamic identification of man-made structures and natural hazard objects are considered. A mathematical model of testing an object with a set of weak stationary dynamic actions is offered. The response of the structure to this set of signals is processed to extract important information about the object's condition in the high-frequency band. A decision-making procedure for the active monitoring system is discussed as well. As an example, monitoring results for a pillar-type monument are given.
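
A generic sketch of the response-processing step this alludes to: estimating the power spectral density of a measured response to weak stationary excitation and locating its resonance peaks, whose drift over time can indicate changes in condition (signal and frequencies are synthetic):

```python
# Welch PSD estimate of a structural response; the resonant mode is
# invisible in the time domain but stands out in the spectrum.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(7)
# Response: mode at 42 Hz buried in broadband stationary excitation noise.
x = 0.5 * np.sin(2 * np.pi * 42.0 * t) + rng.normal(0, 1, t.size)

f, Pxx = welch(x, fs=fs, nperseg=4096)
print("dominant frequency: %.1f Hz" % f[np.argmax(Pxx)])
```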

Relevance: 30.00%

Abstract:

A hidden Markov state model has been applied to a classical molecular dynamics simulation of a small peptide in explicit water. The methodology makes it possible to increase the time resolution of the model and to describe the dynamics with a precision of 0.3 ps (compared to 6 ps for the standard methodology). It also permits investigation of the mechanisms of transitions between the conformational states of the peptide. A detailed description of one such transition for the studied molecule is presented. © 2012 Elsevier B.V. All rights reserved.
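
A sketch of fitting a hidden Markov state model to a low-dimensional trajectory observable, with hmmlearn and a synthetic two-state signal standing in for the authors' software and MD data:

```python
# Fit a 2-state Gaussian HMM to a fake 1-D "reaction coordinate" with two
# metastable states and rare jumps, then recover the hidden-state path.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(5)
n = 20000
s = np.zeros(n, dtype=int)
for i in range(1, n):
    s[i] = s[i - 1] if rng.random() > 0.002 else 1 - s[i - 1]
X = (s * 2.0 + rng.normal(0, 0.4, n)).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=50)
model.fit(X)
states = model.predict(X)            # most likely hidden-state path
print("transition matrix:\n", model.transmat_.round(4))
```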

Relevance: 30.00%

Abstract:

AMS Subj. Classification: 83C15, 83C35

Relevance: 30.00%

Abstract:

A quantization scheme is suggested for a spatially inhomogeneous 1+1 Bianchi I model. The scheme consists of quantizing the equations of motion and yields operator (so-called quasi-Heisenberg) equations describing the explicit evolution of the system. A particular gauge suitable for quantization is proposed. The Wheeler-DeWitt equation is considered in the vicinity of the zero scale factor and is used to construct the space in which the quasi-Heisenberg operators act. Spatial discretization is suggested as a UV regularization procedure for the equations of motion.

Relevance: 30.00%

Abstract:

Our paper concerns the paradoxical phenomenon that, in the equilibrium solutions of the version of the Neumann model that represents consumption explicitly, the prices of the subsistence goods that determine the wage can in some cases be zero, so that the equilibrium value of the real wage is zero as well. This phenomenon always occurs in decomposable economies in which alternative equilibrium solutions with differing growth and profit rates exist. It can be discussed in a much more transparent form in a simpler variant of the model built on Leontief technology, which we exploit here. We show that solutions whose growth factor is below the maximal one are economically meaningless and therefore of no interest. In doing so we demonstrate, on the one hand, that Neumann's excellent intuition served him well when he insisted on a unique solution to his model and, on the other, that no assumption about the decomposability of the economy is needed for this. The topic is closely related to Ricardo's analysis of the general rate of profit, cast in modern form by Sraffa, and to the well-known wage-profit and accumulation-consumption trade-off frontiers of neoclassical growth theory, which indicates its interest for both theory and the history of economic thought. / === / In the Marx-Neumann version of the Neumann model introduced by Morishima, the use of commodities is split between production and consumption, and wages are determined as the cost of necessary consumption. In such a version it may occur that the equilibrium prices of all goods necessary for consumption are zero, so that the equilibrium wage rate becomes zero too. In fact, such a paradoxical case will always arise when the economy is decomposable and the equilibrium is not unique in terms of growth and interest rate: a zero equilibrium wage rate will appear in all equilibrium solutions where the growth and interest rate are less than maximal. This is another proof of Neumann's genius and intuition, for he arrived at the uniqueness of equilibrium via an assumption that implied that the economy was indecomposable, a condition relaxed later by Kemeny, Morgenstern and Thompson. The situation occurs also in similar models based on Leontief technology, and such versions of the Marx-Neumann model make the roots of the problem more apparent. Analysis of them also yields an interesting corollary to Ricardo's corn rate of profit. The real cause of the awkwardness is bad specification of the model: luxury commodities are introduced without there being any final demand for them, so that their production becomes a waste of resources. Bad model specification shows up as a consumption coefficient incompatible with the given technology in the more general model with joint production and technological choice, for the paradoxical situation implies that the level of consumption could be raised and/or the intensity of labour diminished without lowering the equilibrium rate of growth and interest. This entails wasteful use of resources and indicates again that the equilibrium conditions are improperly specified. It is shown that the conditions for equilibrium can and should be redefined for the Marx-Neumann model without assuming an indecomposable economy, in a way that ensures the existence of an equilibrium unique in terms of the growth and interest rate, coupled with a positive value for the wage rate, so confirming Neumann's intuition. The proposed solution relates closely to findings of Bromek in a paper correcting Morishima's generalization of the wage/profit and consumption/investment frontiers.
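
For orientation, the standard equilibrium conditions of the von Neumann model, in the notation usual in the textbook literature (not necessarily the paper's), can be stated as follows; in the Marx-Neumann version the input matrix is augmented by the wage-goods consumption required to operate each process:

```latex
% x: row intensity vector, p: column price vector, A: input matrix,
% B: output matrix, alpha: growth factor, beta: interest factor.
\[
  xB \ge \alpha\, xA, \qquad Bp \le \beta\, Ap, \qquad x \ge 0, \quad p \ge 0,
\]
\[
  x(B - \alpha A)p = 0, \qquad x(B - \beta A)p = 0, \qquad xBp > 0,
\]
% with alpha = beta in equilibrium. The condition xBp > 0 is the
% Kemeny-Morgenstern-Thompson requirement that replaced Neumann's
% original assumption A + B > 0 (which implies indecomposability).
```

The free-goods rule x(B - alpha A)p = 0 prices every overproduced good at zero; in a decomposable economy this is what allows all wage goods to carry zero prices in the non-maximal solutions, dragging the equilibrium wage rate to zero.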

Relevance: 30.00%

Abstract:

This research is based on the premise that teams can be designed to optimize their performance, and that appropriate team coordination is a significant factor in a team's outcome performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Therefore, organizations need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, its coordination mechanisms, and the structural characteristics of the job. The TCM can be used to determine the team design characteristics that are most likely to lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team-member agents use decision making and explicit and implicit mechanisms to coordinate the job. The model validation included comparison of the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team, as sketched below. The results of the ANOVA analysis were used to recommend the combination of levels of the experimental factors that optimizes the completion time for a team that runs sailboat races. This research's main contribution to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models. In a stochastic job structure, the tasks required to complete the job change during the team's execution of the job. This research proposes three new types of dependencies between tasks required to model a job as a stochastic structure: conditional-sequential, single-conditional-sequential, and merge dependencies.
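
As an illustration of the experimental design named above, a 2^(6-1) fractional factorial can be built by crossing five two-level factors fully and aliasing the sixth with their product; the factor meanings are whatever team-design variables the experimenter chooses:

```python
# Build a 2^(6-1) half-fraction: a full 2^5 design in factors A..E with
# the sixth factor aliased as F = A*B*C*D*E (defining relation I = ABCDEF,
# resolution VI), giving 32 of the 64 full-factorial runs.
import itertools
import numpy as np

full = np.array(list(itertools.product([-1, 1], repeat=5)))  # 32 runs, A..E
F = full.prod(axis=1, keepdims=True)                         # F = ABCDE
design = np.hstack([full, F])                                # 32 x 6 matrix
print(design.shape)   # (32, 6)
print(design[:4])
```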

Relevance: 30.00%

Abstract:

Understanding who evacuates and who does not has been one of the cornerstones of research on the pre-impact phase of both natural and technological hazards. Its history is rich in descriptive illustrations focusing on lists of characteristics of those who flee to safety. Early models of evacuation focused almost exclusively on the relationship between whether warnings were heard and ultimately believed and evacuation behavior. How people came to believe these warnings, and even how they interpreted them, was not incorporated. In fact, the individual seemed almost removed from the picture, with analysis focusing exclusively on external measures. This study built and tested a more comprehensive model of evacuation that centers on the decision-making process rather than on decision outcomes. The model focuses on three important factors that alter and shape the evacuation decision-making landscape: individual-level indicators, which exist independently of the hazard itself and act as cultural lenses through which information is heard, processed and interpreted; hazard-specific variables that relate directly to the hazard threat; and risk perception. The ultimate goal is to determine what factors influence the evacuation decision-making process. Using data collected for Hurricane Georges in 1998, logistic regression models were used to evaluate how well these three factors explain how individuals come to their decisions either to flee to safety during a hurricane or to remain in their homes. The results of the logistic regression were significant, emphasizing that the three broad types of factors tested in the model influence the decision-making process. Conclusions drawn from the data analysis focus on how decision-making frames differ between those who can be designated "evacuators" and others in evacuation zones.
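
A sketch of the kind of logistic regression this design implies, with hypothetical variable names standing in for the study's three blocks of factors:

```python
# Evacuation (0/1) modelled from individual characteristics, hazard-
# specific variables, and perceived risk. File and column names are
# placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("georges_survey.csv")   # hypothetical survey data

model = smf.logit(
    "evacuated ~ age + female + prior_experience"   # individual-level lenses
    " + mobile_home + evacuation_order"             # hazard-specific variables
    " + perceived_risk",                            # risk perception
    data=df).fit()
print(model.summary())
# Odds ratios are easier to interpret than raw coefficients:
print(np.exp(model.params).round(2))
```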

Relevance: 30.00%

Abstract:

Siberian boreal forests are expected to expand northwards in the course of global warming, but the processes of the treeline ecotone transition, as well as its timing and related climate feedbacks, are still not understood. Here, we present the 'Larix Vegetation Simulator' (LAVESI), an individual-based, spatially explicit model that can simulate Larix gmelinii (RUPR.) RUPR. stand dynamics, in an attempt to improve our understanding of past and future treeline movements under changing climates. The relevant processes (growth, seed production and dispersal, establishment and mortality) are incorporated and adjusted to observation data gained mainly from the literature. The results of a local sensitivity analysis support the robustness of the model's parameterization, giving relatively small sensitivity values. We tested the model by simulating tree stands under the modern climate across the whole Taymyr Peninsula, north-central Siberia (c. 64-80° N; 92-119° E). We find tree densities similar to observed forests in the northern to mid-treeline areas, but densities are overestimated in the southern parts of the simulated region. Finally, from a temperature-forcing experiment we detect that the responses of tree stands lag the hypothetical warming by several decades, up to the end of the 21st century. With our simulation experiments we demonstrate that the newly developed model captures the dynamics of the Siberian latitudinal treeline.
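
A deliberately tiny annual loop in the spirit of an individual-based treeline model (not LAVESI's actual code), showing how growth, seeding, recruitment and mortality, each modulated by a temperature forcing, can produce lagged stand responses:

```python
# Each year: trees grow, mature trees disperse seeds, a few seeds recruit,
# and random mortality thins the stand. All rates are illustrative.
import random

INIT_HEIGHTS = [1.0] * 50          # initial stand: tree heights in metres

def simulate(years, temp_anomaly=0.0, seed=2):
    rng = random.Random(seed)
    stand = list(INIT_HEIGHTS)
    for _ in range(years):
        growth = 0.1 * (1 + 0.3 * temp_anomaly)      # warmer -> faster growth
        stand = [h + growth for h in stand]
        seeds = sum(1 for h in stand if h > 2.0)     # mature trees produce seeds
        recruits = [0.1] * sum(
            rng.random() < 0.05 * (1 + temp_anomaly) # warmer -> more recruitment
            for _ in range(seeds))
        stand = [h for h in stand if rng.random() > 0.02] + recruits
    return stand

print("trees after 100 yr, +0 C:", len(simulate(100, 0.0)))
print("trees after 100 yr, +2 C:", len(simulate(100, 2.0)))
```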