917 results for Models of organization
Abstract:
This thesis aims to investigate the extremal properties of certain risk models of interest in various applications from insurance, finance and statistics. The thesis develops along two principal lines. In the first part, we focus on two univariate risk models, i.e., deflated risk and reinsurance risk models. Therein we investigate their tail expansions under certain tail conditions on the common risks. Our main results are illustrated by typical examples and numerical simulations. Finally, the findings are formulated into some applications in insurance, for instance, approximations of Value-at-Risk and conditional tail expectations. The second part of this thesis is devoted to the following three bivariate models: The first model is concerned with bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability. These estimators are flexible due to a tuning parameter, and their asymptotic distributions are obtained under some second-order bivariate slowly varying conditions on the model. Then, we give some examples and present a small Monte Carlo simulation study, followed by an application to a real data set from insurance.
The objective of our second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are widely useful in statistical applications; they are generated mainly by normal mean-variance mixtures and scaled skew-normal mixtures, which distinguish the tail dependence structure as shown by our principal results. The third bivariate risk model is concerned with the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
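The flexible estimator class proposed in the thesis is not spelled out in this abstract; as a generic illustration of the quantity involved, the sketch below computes a naive rank-threshold estimate of the upper tail dependence coefficient on a simulated bivariate Student-t sample. The function name and the choice of the threshold level k are assumptions for illustration only, not the thesis's estimators.

```python
import numpy as np

def empirical_upper_tail_dependence(x, y, k):
    """Naive estimate of the upper tail dependence coefficient lambda_U:
    count of observations exceeding both the (n-k)-th order statistic of x
    and that of y, divided by k."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    x_thr = np.sort(x)[n - k - 1]   # (n-k)-th order statistic of x
    y_thr = np.sort(y)[n - k - 1]   # (n-k)-th order statistic of y
    return np.sum((x > x_thr) & (y > y_thr)) / k

# Usage: a bivariate Student-t sample (correlated normals over a common
# chi-square divisor) has genuine upper tail dependence, so the estimate
# should be clearly positive.
rng = np.random.default_rng(0)
n, nu, rho = 200_000, 3.0, 0.6
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
w = rng.chisquare(nu, size=n) / nu
x, y = z1 / np.sqrt(w), z2 / np.sqrt(w)
print(empirical_upper_tail_dependence(x, y, k=2_000))
```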
Abstract:
This paper explores the plurality of institutional environments in which standards for the service sector are expected to support the rise of a global knowledge-based economy. Despite the careful wording of the World Trade Organization (WTO), a whole range of international bodies still have the capacity to define technical specifications affecting how services are expected to be traded on a worldwide basis. The analysis relies on global political economy approaches to extend to the area of service standards the assumption that the process of globalization does not set states against markets, but is a joint expression of both, including new patterns and agents of structural change through formal and informal power and regulatory practices. It analyses, on a cross-institutional basis, patterns of authority in the institutional setting of service standards in the context of the International Organisation for Standardisation (ISO), the European Union, and the United States. In contrast to conventional views opposing the American system to the ISO/European framework, the paper questions the robustness of this opposition by showing that institutional developments of service standards are likely to face trade-offs and compromises across those systems and between two opposing models of standardisation.
Abstract:
Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency, and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators who evaluate methods for global gene expression analysis.
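The abstract's key methodological point, that models must be evaluated on data never used for training, can be illustrated with a minimal sketch. The data, classifier and metric below are placeholders (a synthetic expression matrix and a logistic regression), not the MAQC-II pipelines; they only show the separation between internal cross-validation and an untouched external test set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

# Surrogate "expression matrix": 200 samples x 1000 genes, one binary endpoint.
X, y = make_classification(n_samples=200, n_features=1000, n_informative=20,
                           random_state=0)

# Hold out an external test set that plays no role in model development.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = make_pipeline(StandardScaler(), LogisticRegression(C=0.1, max_iter=2000))

# Internal (cross-validated) performance estimate uses the training data only.
cv_acc = cross_val_score(model, X_train, y_train, cv=5).mean()

# Fit once on the full training set, then score the untouched external set.
model.fit(X_train, y_train)
mcc = matthews_corrcoef(y_test, model.predict(X_test))
print(f"internal CV accuracy: {cv_acc:.2f}, external MCC: {mcc:.2f}")
```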
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
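The abstract does not name the statistical learning algorithm; a workhorse in this adaptive-learning literature is recursive least squares, and the sketch below shows, under that assumption, how agents' belief coefficients about a perceived law of motion are revised as data arrive. The function `rls_update`, the toy data-generating process, and the gain sequence are illustrative only, not the paper's specification.

```python
import numpy as np

def rls_update(phi, R, x, y, gain):
    """One recursive-least-squares step: revise belief coefficients `phi` of
    the perceived law of motion y ~ x'phi after observing the pair (x, y)."""
    R_new = R + gain * (np.outer(x, x) - R)
    phi_new = phi + gain * np.linalg.solve(R_new, x) * (y - x @ phi)
    return phi_new, R_new

# Usage: agents learn the coefficients of a simple linear data-generating process.
rng = np.random.default_rng(1)
true_coeffs = np.array([0.5, -0.8])
phi = np.zeros(2)        # initial beliefs
R = np.eye(2)            # initial second-moment matrix of the regressors
for t in range(1, 5001):
    x = np.array([1.0, rng.standard_normal()])             # constant + observed shock
    y = x @ true_coeffs + 0.1 * rng.standard_normal()      # realized outcome
    phi, R = rls_update(phi, R, x, y, gain=1.0 / (t + 1))  # decreasing gain
print(phi)  # approaches true_coeffs as beliefs converge
```

With a constant gain instead of a decreasing one, beliefs never settle down completely, which is one standard channel through which learning adds the extra volatility and persistence the abstract emphasizes.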
Abstract:
The objective of this research was to analyze the organizational culture of a Brazilian public hospital. It is a descriptive study with a quantitative approach to the data, developed in a public hospital in São Paulo State, Brazil. The sample was composed of 52 nurses and 146 nursing technicians and auxiliaries. Data were collected from January to June 2011 using the Brazilian Instrument for Assessing Organizational Culture (IBACO). The analysis of the organizational values showed the existence of hierarchical rigidity and centralization of power within the institution, as well as individualism and competition, which hinder teamwork. The values concerning workers' well-being, satisfaction and motivation were not highly valued. In regard to organizational practices, the promotion of interpersonal relationships, continuing education, and rewarding practices were not valued either. It becomes apparent that traditional models of work organization support work practices and determine the organizational culture of the hospital.
Abstract:
Here I develop a model of a radiative-convective atmosphere with both radiative and convective schemes highly simplified. The atmospheric absorption of radiation at selective wavelengths makes use of constant mass absorption coefficients in finite-width spectral bands. The convective regime is introduced by using a prescribed lapse rate in the troposphere. The main novelty of the radiative-convective model developed here is that it is solved without using any angular approximation for the radiation field. The solution obtained in the purely radiative mode (i.e., with convection ignored) leads to multiple stable equilibria, very similar to some results recently found in simple models of planetary atmospheres. However, the introduction of convective processes removes the multiple stable equilibria. This shows the importance of taking convective processes into account even for qualitative analyses of planetary atmospheres.
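As a rough companion to this description, and not the paper's angular-exact solution, the sketch below combines the textbook Eddington-type grey radiative-equilibrium profile with a hard convective adjustment toward a prescribed tropospheric lapse rate; every parameter value is an assumption chosen only to give Earth-like numbers.

```python
import numpy as np

# Toy grey-atmosphere radiative equilibrium with a crude convective adjustment.
# Standard two-stream/Eddington results are used: sigma*T^4(tau) = (F/2)*(1 + 3*tau/2)
# within the atmosphere and sigma*T_s^4 = (F/2)*(2 + 3*tau_s/2) at the surface.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant [W m-2 K-4]
F_ABS = 240.0         # absorbed solar / outgoing longwave flux [W m-2]
TAU_SURF = 0.84       # total grey longwave optical depth (assumed)
H_ABS = 2000.0        # absorber scale height [m] (assumed)
LAPSE = 6.5e-3        # prescribed tropospheric lapse rate [K m-1]

z = np.linspace(0.0, 30e3, 301)             # height grid [m]
tau = TAU_SURF * np.exp(-z / H_ABS)         # optical depth above each level

# Pure radiative-equilibrium profile and surface temperature.
T_rad = ((F_ABS / (2.0 * SIGMA)) * (1.0 + 1.5 * tau)) ** 0.25
T_surf = ((F_ABS / (2.0 * SIGMA)) * (2.0 + 1.5 * TAU_SURF)) ** 0.25

# Crude convective adjustment: follow the prescribed lapse rate upward from the
# surface until it meets the radiative profile; keep radiative equilibrium above.
T_conv_line = T_surf - LAPSE * z
T_adjusted = np.maximum(T_rad, T_conv_line)

tropopause_idx = np.argmax(T_rad >= T_conv_line)
print(f"surface T = {T_surf:.1f} K, tropopause near z = {z[tropopause_idx] / 1e3:.1f} km")
```

A real radiative-convective model would iterate the adjustment so that the outgoing flux stays in balance; the hard overwrite above is only meant to show how the prescribed lapse rate replaces the radiative profile in the troposphere.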
Abstract:
1. Identifying those areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSM) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated. 2. We studied the Eurasian otter Lutra lutra, whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and assessed the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in the Ecological Niche Factor Analyses (ENFA), and presence-absence data, used in a Generalized Linear Model (GLM). In addition to cross-validation, we independently evaluated the models with data from a recolonization event, providing presences on a previously unoccupied river. 3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with data before the recolonization disagreed with these predictions, missing the recolonized river's suitability and badly describing the otter's niche. Our results highlighted three points of relevance to modelling practices: (1) absences may prevent the models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may lead to randomness in the predictions; and (3) the Area Under the Curve (AUC), a commonly used validation index, was not well suited to the evaluation of model quality, whereas the continuous Boyce Index (CBI), based on presence data only, better highlighted the models' fit to the recolonization observations. 4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of suitable areas for expansion. An iterative modelling process, using new occurrences from each step of the species' spread, may also help in progressively reducing errors. 5. Synthesis and applications. Conservation plans depend on reliable models of the species' suitable habitats. In non-equilibrium situations, such as those involving threatened or invasive species, models may be affected negatively by the inclusion of absence data when predicting areas of potential expansion. Presence-only methods will here provide a better basis for productive conservation management practices.
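The continuous Boyce index mentioned in point 3 can be computed from presence-only data as the Spearman correlation between habitat-suitability classes and their predicted-to-expected (P/E) ratios over moving windows. The sketch below follows that generic recipe; the window settings and the synthetic suitability values are assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.stats import spearmanr

def continuous_boyce_index(pres_suit, back_suit, n_windows=100, window_frac=0.1):
    """Continuous Boyce index: Spearman correlation between suitability classes
    (moving-window midpoints) and their predicted-to-expected ratios, i.e. the
    share of presences falling in a window divided by the share of background
    cells falling in it. Uses presence data only for the 'predicted' part."""
    pres_suit, back_suit = np.asarray(pres_suit), np.asarray(back_suit)
    lo, hi = back_suit.min(), back_suit.max()
    width = window_frac * (hi - lo)
    mids = np.linspace(lo + width / 2, hi - width / 2, n_windows)
    kept_mids, ratios = [], []
    for m in mids:
        in_p = np.mean(np.abs(pres_suit - m) <= width / 2)
        in_b = np.mean(np.abs(back_suit - m) <= width / 2)
        if in_b > 0:
            kept_mids.append(m)
            ratios.append(in_p / in_b)
    return spearmanr(kept_mids, ratios).correlation

# Usage: a model whose high-suitability cells really do hold the presences
# should score close to +1.
rng = np.random.default_rng(2)
background = rng.uniform(0, 1, 10_000)   # predicted suitability over the study area
presences = rng.beta(4, 1.5, 300)        # presences skewed toward high suitability
print(continuous_boyce_index(presences, background))
```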
Abstract:
We study whether the neutron skin thickness Δrnp of 208Pb originates from the bulk or from the surface of the nucleon density distributions, according to the mean-field models of nuclear structure, and find that it depends on the stiffness of the nuclear symmetry energy. The bulk contribution to Δrnp arises from an extended sharp radius of neutrons, whereas the surface contribution arises from different widths of the neutron and proton surfaces. Nuclear models where the symmetry energy is stiff, as typical of relativistic models, predict a bulk contribution in Δrnp of 208Pb about twice as large as the surface contribution. In contrast, models with a soft symmetry energy like common nonrelativistic models predict that Δrnp of 208Pb is divided similarly into bulk and surface parts. Indeed, if the symmetry energy is supersoft, the surface contribution becomes dominant. We note that the linear correlation of Δrnp of 208Pb with the density derivative of the nuclear symmetry energy arises from the bulk part of Δrnp. We also note that most models predict a mixed-type (between halo and skin) neutron distribution for 208Pb. Although the halo-type limit is actually found in the models with a supersoft symmetry energy, the skin-type limit is not supported by any mean-field model. Finally, we compute parity-violating electron scattering in the conditions of the 208Pb parity radius experiment (PREX) and obtain a pocket formula for the parity-violating asymmetry in terms of the parameters that characterize the shape of the 208Pb nucleon densities.
Abstract:
Understanding the structure of interphase chromosomes is essential to elucidate regulatory mechanisms of gene expression. During recent years, high-throughput DNA sequencing has expanded the power of chromosome conformation capture (3C) methods, which provide information about the reciprocal spatial proximity of chromosomal loci. Since 2012, it has been known that the entire chromatin of interphase chromosomes is organized into regions with strongly increased frequency of internal contacts. These regions, with an average size of ∼1 Mb, were named topological domains. More recent studies demonstrated the presence of unconstrained supercoiling in interphase chromosomes. Using Brownian dynamics simulations, we show here that by including supercoiling in models of topological domains one can reproduce, and thus provide possible explanations for, several experimentally observed characteristics of interphase chromosomes, such as their complex contact maps.
Liming in Agricultural Production Models With and Without the Adoption of Crop-Livestock Integration
Abstract:
Perennial forage crops used in crop-livestock integration (CLI) are able to accumulate large amounts of straw on the soil surface in a no-tillage system (NTS). In addition, they can potentially produce large amounts of soluble organic compounds that help improve the efficiency of liming in the subsurface, which favors root growth, thus reducing the risks of yield loss during dry spells and the harmful effects of “overliming”. The aim of this study was to test the effects of liming on two models of agricultural production, with and without crop-livestock integration, over two years. An experiment was therefore conducted in a very clayey Latossolo Vermelho (Oxisol) located in an agricultural area under the NTS in Bandeirantes, PR, Brazil. Liming was performed to increase base saturation (V) to 65, 75, and 90 %, while one plot per block was maintained without the application of lime (control). A randomized block experimental design was adopted, arranged in split plots with four plots per block and four replications. The soil properties evaluated were: pH in CaCl2, soil organic matter (SOM), Ca, Mg, K, Al, and P. The effects of liming were observed to a greater depth and over a long period through the mobilization of ions in the soil, leading to a reduction in SOM and Al concentration and an increase in pH and in the levels of Ca and Mg. In the first crop year, adoption of CLI led to an increase in the levels of K and Mg and a reduction in the level of SOM; however, in the second crop year, the rate of decline of SOM decreased compared to that observed in the first crop year, and the level of K increased whereas that of P decreased. The extent of the effects of liming, in terms of depth and the improvement in the root environment, was only partially reflected in the changes observed in the chemical properties studied.
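For context on the liming targets quoted above (raising base saturation V to 65, 75, and 90 %), the lime requirement under the widely used base-saturation method can be sketched as below; the soil values, the PRNT of the liming material and the assumed 0-20 cm incorporation layer are illustrative, not the experiment's data.

```python
def lime_requirement_t_ha(v_target, v_current, cec_cmolc_dm3, prnt_pct=100.0):
    """Lime requirement (t/ha) by the base-saturation method, assuming the
    conventional 0-20 cm incorporation layer: (V2 - V1) percentage points of
    base saturation on a soil with the given CEC (cmolc/dm3), corrected for
    the relative neutralizing power (PRNT, %) of the liming material."""
    if v_target <= v_current:
        return 0.0
    return (v_target - v_current) * cec_cmolc_dm3 / prnt_pct

# Example: a clayey soil with CEC = 12 cmolc/dm3 currently at V = 45 %,
# limed with a material of PRNT = 90 % to the three targets of the experiment
# (all numeric values here are illustrative).
for v2 in (65, 75, 90):
    print(v2, "%:", round(lime_requirement_t_ha(v2, 45, 12, prnt_pct=90), 2), "t/ha")
```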
Integrating species distribution models (SDMs) and phylogeography for two species of Alpine Primula.
Abstract:
The major intention of the present study was to investigate whether an approach combining niche-based palaeodistribution modeling and phylogeography would support or modify hypotheses about the Quaternary distributional history derived from phylogeographic methods alone. Our study system comprised two closely related species of Alpine Primula. We used species distribution models based on the extant distribution of the species and last glacial maximum (LGM) climate models to predict the distribution of the two species during the LGM. Phylogeographic data were generated using amplified fragment length polymorphisms (AFLPs). In Primula hirsuta, models of past distribution and phylogeographic data are partly congruent and support the hypothesis of widespread nunatak survival in the Central Alps. Species distribution models (SDMs) allowed us to differentiate between alpine regions that harbor potential nunatak areas and regions that have been colonized from other areas. SDMs revealed that diversity is a good indicator of nunataks, while rarity is a good indicator of peripheral relict populations that were not the source of the recolonization of the inner Alps. In P. daonensis, palaeodistribution models and phylogeographic data are incongruent. Besides the uncertainty inherent in this type of modeling approach (e.g., the relatively coarse 1-km grain size), disagreement between models and data may partly be caused by shifts of the ecological niche in both species. Nevertheless, we demonstrate that combining palaeodistribution modeling with phylogeographic approaches provides a more differentiated picture of the distributional history of species and partly supports (P. hirsuta) and partly modifies (P. daonensis and P. hirsuta) hypotheses of Quaternary distributional history. Some of the refugial areas indicated by palaeodistribution models could not have been identified with phylogeographic data.
Abstract:
The objective of this work was to develop backpropagation neural network models to estimate solar radiation from extraterrestrial radiation data, daily temperature range, precipitation, cloudiness and relative sunshine duration. Data from Córdoba, Argentina, were used for development and validation. The behaviour of the networks and the agreement between observed values and the estimates obtained for different input combinations were assessed. These estimates showed root mean square errors between 3.15 and 3.88 MJ m-2 d-1, the latter corresponding to the model that estimates radiation using only precipitation and daily temperature range. In all models, the results show good agreement with the seasonal pattern of solar radiation. These results indicate the adequate performance and pertinence of this methodology for estimating complex phenomena such as solar radiation.
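A minimal sketch of this kind of backpropagation model, trained on a synthetic stand-in for the Córdoba data (the Hargreaves-like generating relation, the network size and the data ranges are assumptions, not the paper's), with the root mean square error computed on a held-out subset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

# Synthetic daily inputs: extraterrestrial radiation Ra, temperature range dT
# and precipitation P; solar radiation Rs follows a Hargreaves-like relation
# plus noise (purely illustrative, not the Cordoba observations).
rng = np.random.default_rng(3)
n = 3000
Ra = rng.uniform(15, 45, n)        # MJ m-2 d-1
dT = rng.uniform(2, 20, n)         # K
P = rng.exponential(3, n)          # mm
Rs = 0.16 * Ra * np.sqrt(dT) * np.exp(-0.02 * P) + rng.normal(0, 2, n)

X = np.column_stack([Ra, dT, P])
X_tr, X_te, y_tr, y_te = train_test_split(X, Rs, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE = {rmse:.2f} MJ m-2 d-1")
```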
Abstract:
Although hydrocarbon-bearing fluids have been known from the alkaline igneous rocks of the Khibiny intrusion for many years, their origin remains enigmatic. A recently proposed model of post-magmatic hydrocarbon (HC) generation through Fischer-Tropsch (FT) type reactions suggests the hydration of Fe-bearing phases and the release of H2, which reacts with magmatically derived CO2 to form CH4 and higher HCs. However, new petrographic, microthermometric, laser Raman, bulk gas and isotope data are presented and discussed in the context of previously published work in order to reassess models of HC generation. The gas phase is dominated by CH4 with only minor proportions of higher hydrocarbons. No remnants of the proposed primary CO2-rich fluid are found in the complex. The majority of the fluid inclusions are of secondary nature and trapped in healed microfractures. This indicates a high fluid flux after magma crystallisation. Entrapment conditions for fluid inclusions are 450-550 °C at 2.8-4.5 kbar. These temperatures are too high for hydrocarbon gas generation through the FT reaction. Chemical analyses of the rims of Fe-rich phases suggest that they are not the result of alteration but instead represent changes in magma composition during crystallisation. Furthermore, there is no clear relationship between the presence of Fe-rich minerals and the abundance of fluid inclusion planes (FIPs) as reported elsewhere. δ13C values for methane range from -22.4‰ to -5.4‰, confirming a largely abiogenic origin for the gas. The presence of primary CH4-dominated fluid inclusions and melt inclusions, which contain a methane-rich gas phase, indicates a magmatic origin of the HCs. An increase in methane content, together with a decrease in δ13C values towards the intrusion margin, suggests that magmatically derived abiogenic hydrocarbons may have mixed with biogenic hydrocarbons derived from the surrounding country rocks.
Abstract:
It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning the optimisation of a welded structure have used the mass of the product as the basis for the cost comparison. However, it can easily be shown with a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived using linear regression analysis. The models were tested using appropriate statistical methods and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
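As an illustration of the regression-based welding-time modelling described above, the sketch below fits a linear model on invented assembly features (total weld length, number of welds and throat thickness); the predictors, coefficients and data are placeholders, not the Finnish dataset or the published models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the 71 assemblies: the response is total welding time.
rng = np.random.default_rng(4)
n = 71
length_m = rng.uniform(5, 120, n)        # total welded length per assembly [m]
n_welds = rng.integers(10, 150, n)       # number of separate welds
throat_mm = rng.uniform(3, 8, n)         # mean weld throat thickness [mm]
time_min = 0.8 * length_m * throat_mm + 1.5 * n_welds + rng.normal(0, 30, n)

X = np.column_stack([length_m, n_welds, throat_mm])
X_tr, X_te, y_tr, y_te = train_test_split(X, time_min, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
print("coefficients:", model.coef_, "intercept:", round(model.intercept_, 1))
print("R^2 on held-out assemblies:", round(r2_score(y_te, model.predict(X_te)), 2))

# A designer could plug an assembly's planned dimensions into the fitted model to
# obtain a welding-time (and hence cost) estimate at the conceptual design stage.
```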