976 results for Explicit hazard model


Relevance:

30.00%

Publisher:

Abstract:

In this paper we analyze the time of ruin in a risk process with the interclaim times being Erlang(n) distributed and a constant dividend barrier. We obtain an integro-differential equation for the Laplace transform of the time of ruin. Explicit solutions for the moments of the time of ruin are presented when the individual claim amounts have a distribution with rational Laplace transform. Finally, some numerical results and a comparison with the classical risk model, with interclaim times following an exponential distribution, are given.
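As a rough illustration of the quantity studied here, the sketch below simulates the time of ruin in a surplus process with Erlang(n) interclaim times and a constant dividend barrier. All parameter values are made up, and the exponential claim law merely stands in for the class of distributions with rational Laplace transform; this is a minimal sketch, not the paper's analytical method.

```python
import numpy as np

def ruin_time(u, b, c, n, lam, claim_mean, rng):
    """Simulate one path of the surplus process until ruin.

    Interclaim times are Erlang(n) with rate `lam` per stage; claims are
    exponential with mean `claim_mean` (one distribution with rational
    Laplace transform). Premiums accrue at rate c, but the surplus is
    capped at the dividend barrier b (the overflow is paid as dividends).
    With a barrier, ruin occurs with probability one, so the loop ends.
    """
    t, surplus = 0.0, min(u, b)
    while True:
        w = rng.gamma(n, 1.0 / lam)              # Erlang(n) interclaim time
        t += w
        surplus = min(surplus + c * w, b)        # premium income, capped at barrier
        surplus -= rng.exponential(claim_mean)   # claim payment
        if surplus < 0:
            return t

rng = np.random.default_rng(0)
times = [ruin_time(u=2.0, b=4.0, c=1.5, n=2, lam=2.0, claim_mean=1.0, rng=rng)
         for _ in range(20_000)]
print("estimated mean time of ruin:", np.mean(times))
```

Moments estimated this way can serve as a numerical cross-check of the explicit solutions the paper derives.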

Relevance:

30.00%

Publisher:

Abstract:

Self-potential (SP) data are of interest to vadose zone hydrology because of their direct sensitivity to water flow and ionic transport. There is unfortunately little consensus in the literature about how best to model SP data under partially saturated conditions, and different approaches (often supported by a single laboratory data set) have been proposed. We argue that this lack of agreement can largely be traced to electrode effects that have not been properly taken into account. We considered a series of drainage and imbibition experiments in which we found that previously proposed approaches to remove electrode effects were unlikely to provide adequate corrections. Instead, we explicitly modeled the electrode effects together with the classical SP contributions using a flow and transport model. The simulated data agreed overall with the observed SP signals and allowed us to decompose the different signal contributions and analyze them separately. After reviewing other published experimental data, we suggest that most of them include electrode effects that have not been properly taken into account. Our results suggest that previously presented SP theory works well when considering the modeling uncertainties presently associated with electrode effects. Additional work is warranted not only to develop suitable electrodes for laboratory experiments but also to ensure that the electrode effects that appear inevitable in longer-term experiments are predictable, so that they can be incorporated into the modeling framework.
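A toy decomposition in the spirit of the abstract's argument might look as follows; the functional forms for the flow-driven signal and the electrode drift are pure assumptions, chosen only to show why a constant-offset correction cannot remove a time-dependent electrode effect.

```python
import numpy as np

# Hypothetical illustration: the voltage recorded at an electrode pair is
# treated as the sum of a flow-driven (streaming-potential) contribution
# and an electrode-specific drift, rather than drift being removed by a
# generic correction. All functional forms and numbers are assumptions.
t = np.linspace(0.0, 48.0, 481)                 # hours of a drainage test
v_flow = 2.0 * np.exp(-t / 12.0)                # assumed SP signal, mV
drift = 0.05 * t + 0.3 * (1 - np.exp(-t / 6))   # assumed electrode drift, mV
v_obs = v_flow + drift + np.random.default_rng(1).normal(0, 0.02, t.size)

# Subtracting a constant offset (a common "correction") leaves the whole
# time-dependent drift in the record, so v_flow is not recovered.
residual_const_offset = v_obs - v_obs[0]
```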

Relevance:

30.00%

Publisher:

Abstract:

In 1981, the Government of Alberta improved the monitoring of South Peak on Turtle Mountain, on the southern boundary of the 1903 Frank Slide. The monitoring program aims to understand the deformation rates of the large, deep fissures on South Peak and to predict a second rock avalanche on the mountain. The program consists of a complement of static monitoring points and remotely monitored stations, which are measured periodically. Climatic, microseismic, and deformation data are collected automatically at daily intervals and archived. At the end of the 1980s, funding for the development of the monitoring program ceased and some installations deteriorated. Between May 2004 and September 2006, readings from the monitoring points still in working order were compiled and interpreted. In addition, earlier readings were reinterpreted in light of recent knowledge of short-term movement patterns and climatic influences. These observations were compared with recent aerial observations, a digital elevation model derived from light detection and ranging (LiDAR), and field photographs, in order to estimate more precisely the rates, extent, and distribution of movements over the past 25 years.

Relevance:

30.00%

Publisher:

Abstract:

Standard practice in wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where recent years have been extremely disastrous, making it possible to compare the hazard assessment based on data prior to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. The occurrence of events in time is assumed to be Poisson distributed. The wave height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (the Poisson rate and the shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean Sea and experience with these phenomena. The posterior distribution of the parameters allows us to obtain posterior distributions of derived quantities such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
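For reference, the return period implied by the standard Poisson-GPD hazard model has a simple closed form (for shape ≠ 0). The sketch below evaluates it for made-up parameter values, not for the estimates obtained from the Spanish data set discussed in the abstract.

```python
def return_period(h, threshold, rate, shape, scale):
    """Return period (years) of wave height h under the Poisson-GPD model:
    events exceed `threshold` at Poisson `rate` per year and excesses
    follow a GPD with the given shape and scale (shape != 0 assumed).
    All numbers below are illustrative placeholders."""
    y = (h - threshold) / scale
    tail = (1.0 + shape * y) ** (-1.0 / shape)  # P(H > h | exceedance)
    return 1.0 / (rate * tail)

# e.g. a 7 m wave with a 3 m threshold, 4.2 exceedances/year, xi = -0.1:
print(return_period(h=7.0, threshold=3.0, rate=4.2, shape=-0.1, scale=0.9))
```

Propagating the posterior samples of (rate, shape, scale) through this function is what yields the posterior distribution of the return period.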

Relevance:

30.00%

Publisher:

Abstract:

Daily precipitation is recorded as the total amount of water collected by a rain gauge in 24 h. Events are modelled as a Poisson process and the 24 h precipitation by a Generalised Pareto Distribution (GPD) of excesses. Hazard assessment is complete when estimates of the Poisson rate and the distribution parameters, together with a measure of their uncertainty, are obtained. The shape parameter of the GPD determines the support of the variable: the Weibull domain of attraction (DA) corresponds to variables with finite support, as should be the case for natural phenomena. However, the Fréchet DA has been reported for daily precipitation, which implies an infinite support and a heavy-tailed distribution. Bayesian techniques are used to estimate the parameters. The approach is illustrated with precipitation data from the Eastern coast of the Iberian Peninsula, an area affected by severe convective precipitation. The estimated GPD is mainly in the Fréchet DA, which is incompatible with the common-sense assumption that precipitation is a bounded phenomenon. The bounded character of precipitation is then taken as an a priori hypothesis. Consistency of this hypothesis with the data is checked in two cases: using the raw data (in mm) and using log-transformed data. As expected, Bayesian model checking clearly rejects the model in the raw-data case. However, the log-transformed data seem to be consistent with the model. This may be due to the adequacy of the log scale for representing positive measurements, for which relative differences are more meaningful than absolute ones.
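The standard extreme-value correspondence between the GPD shape parameter ξ and the support of the variable, on which the abstract's argument rests, is:

```latex
% Excesses y = x - u over the threshold u follow the GPD
F(y) = 1 - \Bigl(1 + \frac{\xi\, y}{\sigma}\Bigr)^{-1/\xi}, \qquad \sigma > 0,
\qquad
y \in
\begin{cases}
\bigl(0,\, -\sigma/\xi\bigr), & \xi < 0 \quad \text{(Weibull DA: bounded support)},\\[2pt]
(0,\, \infty), & \xi > 0 \quad \text{(Fr\'echet DA: heavy tail)}.
\end{cases}
```

An estimate with ξ > 0 therefore contradicts bounded precipitation, while ξ < 0 enforces the upper bound u − σ/ξ; the limit ξ = 0 gives the exponential tail of the Gumbel DA.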

Relevance:

30.00%

Publisher:

Abstract:

Life cycle analysis (LCA) approaches require adaptation to reflect the increasing delocalization of production to emerging countries. This work addresses this challenge by establishing a country-level, spatially explicit life cycle inventory (LCI). This study comprises three separate dimensions. The first dimension is spatial: processes and emissions are allocated to the country in which they take place and modeled to take into account local factors. Production is located in the emerging economies of China and India, while consumption occurs in Germany, an Organisation for Economic Co-operation and Development (OECD) country. The second dimension is the product level: we consider two distinct textile garments, a cotton T-shirt and a polyester jacket, in order to highlight potential differences in the production and use phases. The third dimension is the inventory composition: we track CO2, SO2, NOx, and particulates, four major atmospheric pollutants, as well as energy use. This third dimension enriches the analysis of the spatial differentiation (first dimension) and distinct products (second dimension). We describe the textile production and use processes and define a functional unit for a garment. We then model important processes using a hierarchy of preferential data sources. We place special emphasis on the modeling of the principal local energy processes: electricity and transport in emerging countries. The spatially explicit inventory is disaggregated by the country in which the emissions occur and analyzed according to the dimensions of the study: location, product, and pollutant. The inventory shows striking differences between the two products as well as between the different pollutants considered. For the T-shirt, over 70% of the energy use and CO2 emissions occur in the consuming country, whereas for the jacket, more than 70% occur in the producing country. This reversal of proportions is due to differences in the use phase of the garments. For SO2, in contrast, over two thirds of the emissions occur in the country of production for both the T-shirt and the jacket. The difference in emission patterns between CO2 and SO2 is due to local electricity processes, justifying our emphasis on local energy infrastructure. The complexity of considering differences in location, product, and pollutant is rewarded by a much richer understanding of a global production-consumption chain. The inclusion of two different products in the LCI highlights the importance of the definition of a product's functional unit in the analysis and in the implications of the results. Several use-phase scenarios demonstrate the importance of consumer behavior over equipment efficiency. The spatial emission patterns of the different pollutants allow us to understand the role of various elements of the energy infrastructure. The emission patterns furthermore inform the debate on the Environmental Kuznets Curve, which applies only to pollutants that can be easily filtered and does not take into account the effects of production displacement. We also discuss the appropriateness and limitations of applying the LCA methodology in a global context, especially in developing countries. Our spatial LCI method yields important insights into the quantity and pattern of emissions attributable to different product life cycle stages, depending on the local technology, and emphasizes the importance of consumer behavior. From a life cycle perspective, consumer education promoting air-drying and cool washing is more important than efficient appliances. Spatial LCI with country-specific data is a promising method, necessary for meeting the challenges of globalized production-consumption chains. We recommend inventory reporting of final energy forms, such as electricity, and modular LCA databases, which would allow easy modification of the underlying energy infrastructure.
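A minimal sketch of what such a country-disaggregated inventory looks like computationally; the records below are placeholders, not values from the study.

```python
from collections import defaultdict

# Each inventory record carries the country where the emission occurs,
# the life cycle stage, the pollutant and the amount per functional unit.
records = [
    ("China",   "production", "CO2", 1.2),    # kg, placeholder
    ("China",   "production", "SO2", 0.011),
    ("Germany", "use",        "CO2", 3.0),
    ("Germany", "use",        "SO2", 0.002),
]

totals = defaultdict(float)
for country, stage, pollutant, amount in records:
    totals[(pollutant, country)] += amount

# Share of each pollutant's life cycle total by country of emission.
for (pollutant, country), amount in sorted(totals.items()):
    whole = sum(v for (p, _), v in totals.items() if p == pollutant)
    print(f"{pollutant} in {country}: {amount / whole:.0%} of life cycle total")
```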

Relevance:

30.00%

Publisher:

Abstract:

The goal of this dissertation is to find and provide the basis for a managerial tool that allows a firm to easily express its business logic. The methodological basis for this work is design science, in which the researcher builds an artifact to solve a specific problem. In this case the aim is to provide an ontology that makes it possible to render a firm's business model explicit. In other words, the proposed artifact helps a firm to formally describe its value proposition, its customers, the relationships with them, the necessary intra- and inter-firm infrastructure, and its profit model. Such an ontology is relevant because until now there has been no model that expresses a company's global business logic from a purely business point of view. Previous models essentially take an organizational or process perspective, or cover only parts of a firm's business logic. The four main pillars of the ontology, which are inspired by management science and by enterprise and process modeling, are product, customer interface, infrastructure and finance. The ontology is validated by case studies and by a panel of experts and managers. The dissertation also provides a software prototype for capturing a company's business model in an information system. The last part of the thesis demonstrates the value of the ontology for business strategy and Information Systems (IS) alignment. Structure of this thesis: The dissertation is structured in nine parts. Chapter 1 presents the motivations of this research, the research methodology with which the goals shall be achieved, and why this dissertation presents a contribution to research. Chapter 2 investigates the origins, the term, and the concept of business models. It defines what is meant by business models in this dissertation and how they are situated in the context of the firm. In addition, this chapter outlines the possible uses of the business model concept. Chapter 3 gives an overview of the research done in the field of business models and enterprise ontologies. Chapter 4 introduces the major contribution of this dissertation: the business model ontology. In this part of the thesis the elements, attributes and relationships of the ontology are explained and described in detail. Chapter 5 presents a case study of the Montreux Jazz Festival, whose business model was captured by applying the structure and concepts of the ontology; it gives an impression of what a business model description based on the ontology looks like. Chapter 6 shows an instantiation of the ontology in a prototype tool: the Business Model Modelling Language BM2L. This is an XML-based description language that allows the business model of a firm to be captured and described, and it has large potential for further applications. Chapter 7 concerns the evaluation of the business model ontology. The evaluation builds on a literature review, a set of interviews with practitioners, and case studies. Chapter 8 gives an outlook on possible future research on, and applications of, the business model ontology. The main areas of interest are the alignment of business and information technology (IT)/information systems (IS) and business model comparison. Finally, chapter 9 presents some conclusions.
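As a loose, hypothetical illustration only (the names below are not the ontology's actual elements, and the real ontology defines many more attributes and relationships, serialized in XML by BM2L), the four pillars could be rendered as a plain data structure:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessModel:
    # Hypothetical, highly simplified mapping to the four pillars:
    value_proposition: str                                      # product
    target_customers: list[str]                                 # customer interface
    infrastructure: list[str] = field(default_factory=list)     # intra-/inter-firm
    revenue_streams: list[str] = field(default_factory=list)    # finance

festival = BusinessModel(
    value_proposition="live jazz experience",
    target_customers=["visitors", "sponsors"],
    infrastructure=["venues", "artist network"],
    revenue_streams=["tickets", "sponsoring"],
)
```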

Relevance:

30.00%

Publisher:

Abstract:

The advent of new advances in mobile computing has changed the manner in which we do our daily work, even enabling us to perform collaborative activities. However, current groupware approaches do not offer an integrated and efficient solution that jointly tackles the flexibility and heterogeneity inherent to mobility as well as the awareness aspects intrinsic to collaborative environments. Issues related to the diversity of contexts of use are collected under the term plasticity. A great number of tools have emerged offering solutions to some of these issues, although always focused on individual scenarios. We are working on reusing and specializing some existing plasticity tools for groupware design. The aim is to offer the benefits of plasticity and awareness jointly, in an attempt to achieve real collaboration and a deeper understanding of multi-environment groupware scenarios. In particular, this paper presents a conceptual framework intended as a reference for the generation of plastic user interfaces for collaborative environments in a systematic and comprehensive way. Starting from a previous conceptual framework for individual environments, inspired by the model-based approach, we introduce specific components and considerations related to groupware.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating degree-of-dissociation versus pH curves for the latex functional groups at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to maintain the electroneutrality of the system is required. Here, two approaches are used, the choice depending on the ion selected to maintain electroneutrality: the counterion and coion procedures. We compare and discuss the differences between the two procedures. The simulations also provide a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.
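A heavily simplified sketch of the kind of Metropolis rule involved in such titration moves, assuming reduced (kT) energy units; the ion-insertion term that maintains electroneutrality in the counterion procedure is deliberately omitted, so this shows only the ideal, chemical part of the move.

```python
import math
import random

LN10 = math.log(10.0)

def accept_deprotonation(d_elec, pH, pKa):
    """Metropolis rule for an AH -> A- titration move (sketch).

    `d_elec` is the change in electrostatic energy (in kT) of switching
    a neutral site to a charged one; the ideal, chemical part of the
    move enters through ln(10) * (pH - pKa). In the counterion procedure
    the same move also inserts an explicit counterion so that the
    simulation cell stays electroneutral; that insertion term is
    omitted here for brevity.
    """
    arg = -d_elec + LN10 * (pH - pKa)
    if arg >= 0.0:
        return True
    return random.random() < math.exp(arg)
```

Averaging the fraction of charged sites over such moves at each pH is what produces the degree-of-dissociation curves compared with the experimental titrations.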

Relevance:

30.00%

Publisher:

Abstract:

Snow cover is an important control in mountain environments, and a shift of the snow-free period triggered by climate warming can strongly impact ecosystem dynamics. Changing snow patterns can have severe effects on alpine plant distribution and diversity. It thus becomes urgent to provide spatially explicit assessments of snow cover changes that can be incorporated into correlative or empirical species distribution models (SDMs). Here, we provide for the first time a comparison of two physically based snow distribution models (PREVAH and SnowModel) used to produce snow cover maps (SCMs) at a fine spatial resolution in a mountain landscape in Austria. The SCMs were evaluated with SPOT-HRVIR images, and the predictions of snow water equivalent from the two models with ground measurements. Finally, the SCMs of the two models were compared under a climate warming scenario for the end of the century. The predictive performances of PREVAH and SnowModel were similar when validated with the SPOT images. However, the tendency to overestimate snow cover was slightly lower with SnowModel during the accumulation period, whereas it was lower with PREVAH during the melting period. The rate of true positives during the melting period was on average two times higher with SnowModel, with a lower overestimation of snow water equivalent. Our results allow us to recommend the use of SnowModel in SDMs because it better captures persisting snow patches at the end of the snow season, which is important when modelling the response of species to long-lasting snow cover and evaluating whether they might survive under climate change.
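The map-comparison statistics mentioned above reduce to simple array operations on binary snow masks; the sketch below uses random placeholder grids rather than the PREVAH, SnowModel or SPOT data.

```python
import numpy as np

# Given binary snow cover maps (True = snow) from a model and from a
# satellite scene on the same grid, compute the true positive rate and
# the overestimation rate used to compare the two models.
rng = np.random.default_rng(7)
observed = rng.random((100, 100)) > 0.5    # placeholder "satellite" mask
modelled = rng.random((100, 100)) > 0.45   # placeholder "model" mask

true_positive_rate = (modelled & observed).sum() / observed.sum()
overestimation = (modelled & ~observed).sum() / (~observed).sum()
print(true_positive_rate, overestimation)
```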

Relevance:

30.00%

Publisher:

Abstract:

This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during the numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit-load failure criterion. Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant, and the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
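For intuition, the generalized mid-point family referred to above can be written down for a scalar evolution equation; the sketch below shows the implicit update, the Newton solve, and the exact Jacobian that plays the role of the consistent tangent. It is an illustration of the integration family only, not of the Gurson-Tvergaard stress update itself.

```python
def generalized_midpoint_step(f, dfdy, y_n, h, alpha, tol=1e-12, it_max=50):
    """One step of the generalized mid-point rule for y' = f(y):

        y_{n+1} = y_n + h * f((1 - alpha) * y_n + alpha * y_{n+1})

    alpha = 0 is Euler forward, alpha = 1 Euler backward, and
    alpha = 0.5 the true mid-point rule. For alpha > 0 the update is
    implicit and solved here by Newton iteration; in the stress-update
    context the analogous exact linearization yields the consistent
    tangent moduli that the thesis stresses are needed for Newton.
    """
    y = y_n                                       # initial guess
    for _ in range(it_max):
        y_mid = (1.0 - alpha) * y_n + alpha * y
        r = y - y_n - h * f(y_mid)                # residual
        if abs(r) < tol:
            return y
        drdy = 1.0 - h * alpha * dfdy(y_mid)      # exact (consistent) Jacobian
        y -= r / drdy
    raise RuntimeError("Newton iteration did not converge")

# Example: y' = -10 y, one step of size 0.2 with the true mid-point rule.
print(generalized_midpoint_step(lambda y: -10 * y, lambda y: -10.0,
                                y_n=1.0, h=0.2, alpha=0.5))
```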

Relevance:

30.00%

Publisher:

Abstract:

During the most recent years the Regió de Girona has clearly transformed its territorial model, superimposing an emerging, clearly hierarchic structure on a polycentric one. In this way, Girona and its urban area have gained a diversified centrality. This transformation, though, needs a clearly defined project that, adapted to the current dynamism, makes explicit and supports or corrects the resulting territorial model in order to avoid infrastructural shortages, territorial imbalances, wasted resources and negative impacts on the environment.

Relevance:

30.00%

Publisher:

Abstract:

The theme of this research is the development of marketing knowledge in the design of agricultural machinery. This knowledge is developed throughout the design process of agricultural machinery in order to identify corporate and customer needs and to develop strategies to satisfy them. The central problem of the research asks which marketing tools to apply in the pre-development process of farm machinery in order to increase the market value of the products and of the company and, consequently, generate competitive advantage for manufacturers of agricultural machinery. As methodology, a bibliographical review and a multi-case study of the development processes of agricultural machinery carried out by small, medium and large companies and by academia were conducted. As a result, a marketing reference model was elaborated for the pre-development stage of agricultural machinery, which outlines the activities, tasks, mechanisms and controls that can be used in the strategic planning and product planning of agricultural machinery manufacturers, contributing to making knowledge in the marketing field explicit.

Relevance:

30.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
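A toy rendering of the dataflow execution model described above, with FIFO queues as the only communication channel and a firing rule tested dynamically (exactly the run-time overhead that quasi-static scheduling aims to remove); the names and structure are illustrative, not RVC-CAL.

```python
from collections import deque

class Actor:
    """A dataflow node: fires when its firing rule is satisfied, i.e. when
    every input queue holds at least the required number of tokens."""

    def __init__(self, needs, action):
        self.needs = needs      # tokens required per input queue
        self.action = action    # consumes input tokens, returns output tokens

    def can_fire(self, inputs):
        return all(len(q) >= n for q, n in zip(inputs, self.needs))

    def fire(self, inputs, outputs):
        args = [[q.popleft() for _ in range(n)]
                for q, n in zip(inputs, self.needs)]
        for q, token in zip(outputs, self.action(*args)):
            q.append(token)

src_to_add, add_to_sink = deque([1, 2, 3, 4]), deque()
adder = Actor(needs=[2], action=lambda pair: [sum(pair)])

# Fully dynamic schedule: test the firing rule before every firing. A
# quasi-static scheduler would instead precompute that this actor can
# fire exactly len(src_to_add) // 2 times and emit a static sequence.
while adder.can_fire([src_to_add]):
    adder.fire([src_to_add], [add_to_sink])
print(list(add_to_sink))   # [3, 7]
```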