Abstract:
The present research focused on the pathways through which the symptoms of posttraumatic stress disorder (PTSD) may negatively impact intimacy. Previous research has confirmed a link between self-reported PTSD symptoms and intimacy; however, a thorough examination of mediating paths, partner effects, and secondary traumatization has not yet been realized. With a sample of 297 heterosexual couples, intraindividual and dyadic models were developed to explain the relationships between PTSD symptoms and intimacy in the context of interdependence theory, attachment theory, and models of self-preservation (e.g., fight-or-flight). The current study replicated the findings of others and supported a process in which affective (alexithymia, negative affect, positive affect) and communication (demand-withdraw behaviour, self-concealment, and constructive communication) pathways mediate the intraindividual and dyadic relationships between PTSD symptoms and intimacy. Moreover, it also found that the PTSD symptoms of each partner were significantly related; however, this was only the case for those dyads in which the partners had disclosed nearly everything about their traumatic experiences. As such, secondary traumatization was supported. Finally, although the overall pattern of results suggests a total negative effect of PTSD symptoms on intimacy, a sex difference was evident such that the direct effect of the woman's PTSD symptoms was positively associated with both her and her partner's intimacy. It is possible that the Tend-and-Befriend model of threat response, wherein women are said to foster social bonds in the face of distress, may account for this sex difference. Overall, however, it is clear that PTSD symptoms were negatively associated with relationship quality, and attention to this impact in the development of diagnostic criteria and treatment protocols is necessary.
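For readers unfamiliar with the mediation language used above ("direct effect", "total effect"), a minimal single-mediator sketch may help; this is the textbook decomposition, not the study's full dyadic model. With X the PTSD symptoms, Y intimacy, and M a mediator such as negative affect:

\begin{aligned}
M &= aX + e_M, \\
Y &= c'X + bM + e_Y, \\
c &= c' + ab \quad \text{(total = direct + indirect)}.
\end{aligned}

Here c' is the direct effect and ab the indirect (mediated) effect; a positive direct effect c' can coexist with a negative total effect c when ab is sufficiently negative, which is the pattern reported for women above.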
Abstract:
Volume- (density-) independent pair potentials cannot describe metallic cohesion adequately, as the presence of the free-electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model in which the embedding function is taken to be proportional to the square root of the electron density; models of this type are known as Finnis-Sinclair many-body potentials. In this work we use a particular parametrization of the Finnis-Sinclair type potential, the "Sutton-Chen" model, and a later version, the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power-law decay of the electron density with distance assumed by the model as the main cause of this behaviour, and show that an exponentially decaying form of the charge density improves the results significantly. Results for the Sutton-Chen and our improved version of the Sutton-Chen models are compared for four fcc metals: Cu, Ag, Au and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement and Grüneisen parameter. For the sake of comparison we have also considered two other models in which the distance dependence of the charge density is an exponential multiplied by polynomials. None of these models exhibits the instability against thermal expansion (premature melting) shown by the Sutton-Chen model. We also present results obtained via pure pair-potential models, in order to identify the advantages and disadvantages of the methods used to obtain the parameters of these potentials.
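As a concrete illustration of the functional form discussed above, here is a minimal Python sketch of the Sutton-Chen total energy: a power-law pair repulsion plus the Finnis-Sinclair square-root embedding built on a power-law electron density. The Cu parameter values in the comments are the commonly quoted Sutton-Chen ones; the geometry is purely illustrative and no cutoff or periodic-boundary handling is included.

import numpy as np

def sutton_chen_energy(positions, a, epsilon, c, n, m):
    # E = epsilon * sum_i [ (1/2) sum_{j!=i} (a/r_ij)^n  -  c * sqrt(rho_i) ],
    # with the power-law "electron density"  rho_i = sum_{j!=i} (a/r_ij)^m.
    total = 0.0
    for i in range(len(positions)):
        r = np.linalg.norm(positions - positions[i], axis=1)
        r = r[r > 1e-12]                    # drop the zero self-distance
        pair = 0.5 * np.sum((a / r) ** n)   # repulsive pair term
        rho = np.sum((a / r) ** m)          # density seen by atom i
        total += pair - c * np.sqrt(rho)    # sqrt embedding (Finnis-Sinclair)
    return epsilon * total

# Commonly quoted Sutton-Chen parameters for Cu: n=9, m=6,
# epsilon = 1.2382e-2 eV, c = 39.432, a = 3.61 Angstrom (fcc lattice constant).
# Illustrative call on the four atoms of one fcc unit cell:
a0 = 3.61
cell = a0 * np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])
print(sutton_chen_energy(cell, a=a0, epsilon=1.2382e-2, c=39.432, n=9, m=6))

The improved model of this thesis replaces the power-law density (a/r)^m with an exponentially decaying form; the exact parametrization is not reproduced here.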
Object-Oriented Genetic Programming for the Automatic Inference of Graph Models for Complex Networks
Abstract:
Complex networks are systems of entities that are interconnected through meaningful relationships. The relations between entities form a structure whose statistical complexity is not produced by random chance. In the study of complex networks, many graph models have been proposed to capture the behaviours observed. However, constructing graph models manually is tedious and problematic, and many of the models proposed in the literature have been cited as having inaccuracies with respect to the complex networks they represent. Recently, an approach that automates the inference of graph models was proposed by Bailey [10]. The proposed methodology employs genetic programming (GP) to produce graph models that approximate various properties of an exemplary graph of a targeted complex network. However, a great deal is already known about complex networks in general, and often specific knowledge is held about the network being modelled. This knowledge, albeit incomplete, is important in constructing a graph model, yet it is difficult to incorporate using existing GP techniques. Thus, this thesis proposes a novel GP system that can incorporate incomplete expert knowledge to assist in the evolution of a graph model. Inspired by existing graph models, an abstract graph model was developed to serve as an embryo for inferring graph models of some complex networks. The GP system and abstract model were used to reproduce well-known graph models. The results indicated that the system was able to evolve models that produced networks with structural similarities to the networks generated by the respective target models.
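To make the embryo idea concrete, here is a hypothetical Python sketch (illustrative names only; this is not Bailey's system or the thesis's actual implementation): a fixed growth loop plays the role of the embryo, while the attachment-weight expression is the part a GP system would evolve. networkx is assumed only for graph bookkeeping.

import random
import networkx as nx

def grow_graph(n_nodes, attach_weight, seed=0):
    # Embryo: grow a graph one node at a time; each new node attaches to one
    # existing node chosen with probability proportional to attach_weight(G, v).
    # In a GP setting, attach_weight is the evolved expression.
    rng = random.Random(seed)
    G = nx.complete_graph(3)
    for new in range(3, n_nodes):
        nodes = list(G.nodes)
        weights = [attach_weight(G, v) for v in nodes]
        target = rng.choices(nodes, weights=weights)[0]
        G.add_node(new)
        G.add_edge(new, target)
    return G

# One point in the search space: preferential attachment,
# i.e. attach_weight(G, v) = degree(v) + 1.
G = grow_graph(100, lambda G, v: G.degree(v) + 1)
print(G.number_of_nodes(), G.number_of_edges())

Expert knowledge about the target network would constrain or seed the evolved expression rather than the growth loop itself.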
Abstract:
This paper proposes an explanation for why efficient reforms are not carried out when losers have the power to block their implementation, even though compensating them is feasible. We construct a signaling model with two-sided incomplete information in which a government faces the task of sequentially implementing two reforms by bargaining with interest groups. The organization of interest groups is endogenous. Compensations are distortionary and government types differ in the concern about distortions. We show that, when compensations are allowed to be informative about the government’s type, there is a bias against the payment of compensations and the implementation of reforms. This is because paying high compensations today provides incentives for some interest groups to organize and oppose subsequent reforms with the only purpose of receiving a transfer. By paying lower compensations, governments attempt to prevent such interest groups from organizing. However, this comes at the cost of reforms being blocked by interest groups with relatively high losses.
Abstract:
This paper examines the use of bundling by a firm that sells in two national markets and faces entry by parallel traders. The firm can bundle its main product, a tradable good, with a non-traded service, choosing between the strategies of pure bundling, mixed bundling and no bundling. The paper shows that in the low-price country the threat of grey trade elicits a move from mixed bundling, or no bundling, towards pure bundling; in the high-price country it encourages a move from pure bundling towards mixed bundling or no bundling. The set of parameter values for which the profit-maximizing strategy is not to supply the low-price country is smaller than in the absence of bundling. The welfare effects of deterrence of grey trade are not those found in conventional models of price arbitrage. Some consumers in the low-price country may gain from the threat of entry by parallel traders although they pay a higher price, because the firm responds to the threat of arbitrageurs by increasing the amount of services it puts in the bundle targeted at consumers in that country. Similarly, the threat of parallel trade may affect some consumers in the high-price country adversely.
Abstract:
The emergence of knowledge-based societies, advances in communication technology, and improved exchange of information worldwide allow better use of the knowledge produced for decision making in the health system. In developing countries, some studies have been conducted on the barriers that impede evidence-based decision making (EBDM), while similar studies in the developed world are genuinely rare. Iran is the country that has seen the strongest growth in scientific publications in recent years, but the question remains: what barriers prevent the use of this knowledge, as well as of global evidence? This study comprises three consecutive articles. The aim of the first article was to find a model for assessing the state of knowledge utilization under these circumstances in Iran, through a broad and systematic review of sources followed by a qualitative study based on Grounded Theory. In the second and third articles, the barriers to evidence-based decisions in Iran are then studied by interviewing managers and decision makers in the health sector as well as researchers working to produce scientific evidence for EBDM in Iran. After reviewing the existing models and carrying out a qualitative study, the first article appeared under the title "Designing a Knowledge Translation Model". This first article serves as the framework for the other two, which assess the "pull" and "push" barriers to EBDM in the country. In Iran, as a developing country, problems arise at every stage of the process of producing, sharing, and using evidence in health-system decision making. The barriers to evidence-based decision making are diverse and occur at different levels; multidimensional solutions are needed to strengthen the impact of scientific evidence on decision making. These solutions should bring about changes in the culture and environment of decision making so that evidence-based decisions are valued. The selection criteria for managers, their inappropriate appointment, their rapid turnover, and pay differences between the public and private sectors can weaken EBDM in two ways: by affecting decision makers' motivation and by destroying program continuity. Likewise, although researchers are not selected and replaced in the same way as managers, there are no criteria to encourage either group to support evidence-based decision making in the health sector and the changes that follow from it. The selection and promotion of policy makers should be based on their EBDM performance, and academics' efforts in this regard should count toward their personal promotion and the ranking of their institution. The attitudes and capacities of decision makers and researchers should be fostered by giving them sufficient power and authority at the different stages of the decision cycle. This study also revealed that managers lack adequate access to both national and international evidence.
Narrowing the gap between researchers and decision makers is a crucial step, to be achieved by fostering two-way communication. This point matters greatly, since knowledge use can only be strengthened through close collaboration between policy makers and the research sector. To this end, long-term programs must be designed; creating networks of researchers and decision makers for choosing research topics and ranking priorities, and building mutual trust between researchers and policy makers, appear to be effective measures.
Abstract:
This thesis, entitled "Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data", consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities in the component distributions, are investigated, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has been discussed already; while describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in mixture distributions is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without these computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are equally efficient and convenient compared to many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation and could be the subject of future work in this area.
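As an indication of the kind of two-component mixture result Chapter II concerns, a minimal sketch: for a discrete lifetime X with mixture pmf f(k) = p f_1(k) + (1-p) f_2(k) and component survival functions S_i(k) = Pr(X >= k), the mixture failure rate is a time-varying weighted average of the component failure rates r_i(k) = f_i(k)/S_i(k):

r(k) = \frac{p\,f_1(k) + (1-p)\,f_2(k)}{p\,S_1(k) + (1-p)\,S_2(k)}
     = w(k)\,r_1(k) + \bigl(1-w(k)\bigr)\,r_2(k),
\qquad
w(k) = \frac{p\,S_1(k)}{p\,S_1(k) + (1-p)\,S_2(k)}.

For a geometric component with pmf f(k) = q(1-q)^k the failure rate is the constant q, which is part of what makes geometric mixtures analytically tractable.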
Abstract:
In this thesis we have presented several inventory models of practical utility. Of these, inventory with retrial of unsatisfied demands and inventory with postponed work are quite recently introduced concepts, the latter being introduced here for the first time. Inventory with service time is relatively new, with only a handful of research works reported. The difficulty encountered in inventory with service, unlike the queueing process, is that even the simplest case needs a two-dimensional process for its description. Only in certain specific cases can we introduce generating functions to solve for the system-state distribution; however, numerical procedures can be developed for solving these problems.
Abstract:
Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of the inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
Abstract:
In the midst of health care reform, Colombia has succeeded in increasing health insurance coverage and the quality of health care. In spite of this, efficiency continues to be a matter of concern, and small-area variations in health care are one of the plausible causes of such inefficiencies. In order to understand this issue, we use individual data on all births from a Contributory-Regimen insurer in Colombia. We estimate two different specifications of a multilevel logistic regression model. Our results reveal that hospitals account for 20% of the variation in the probability of performing cesarean sections, while geographic area explains only one third of the variance attributable to hospitals. Furthermore, some variables from both the demand and supply sides are also found to be relevant to the probability of undergoing a cesarean section. This paper contributes to previous research by using a hierarchical model with hospitals defined as clusters, and by including clinical and supply-induced-demand variables.
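A sketch of the kind of specification involved (the paper's exact covariates are not reproduced here): a two-level random-intercept logistic model for birth i in hospital j, whose hospital-level variance share on the latent scale is the standard intraclass correlation for logistic models:

\operatorname{logit}\Pr(y_{ij}=1) = \mathbf{x}_{ij}'\boldsymbol\beta + u_j,
\qquad u_j \sim N(0,\sigma_u^2),
\qquad
\rho = \frac{\sigma_u^2}{\sigma_u^2 + \pi^2/3}.

The reported 20% hospital share corresponds to \rho \approx 0.20; introducing geographic area as a further level partitions this variance, which is how the one-third figure for area arises.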
Abstract:
The purpose of this monograph is to analyze the non-linear interactions, with emergent outcomes, maintained by the Kurdish community in Syria during the 2011-2014 period, through which forms of self-organization arose as a result of the complex structure to which that community belongs. It explains how, in the wake of the Syrian political crisis and the confrontations with the Islamic State, the role of the Kurds in Syria was transformed, influencing the country's political structures and those of nations in the region with Kurdish populations. This research therefore sets out to analyze this phenomenon through the complexity approach in International Relations and the concept of self-organization. On that basis, it examines the interactions arising in smaller structures that may have affected a larger system, establishing new forms of organization that cannot be explained by causal elements alone.
Abstract:
The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms to obtain, in an automatic way, the symbolic expressions of the residual generators, enhancing the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation, and identification are stated as constraint satisfaction problems in continuous domains and solved by means of interval-based consistency techniques. The qualitative fault isolation is enhanced by a reasoning in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial and empirical analysis of the differences between interval-based and statistical-based techniques is also presented. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
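To illustrate the consistency-test idea in the simplest possible terms, here is a hypothetical Python sketch (illustrative names; not the thesis's algorithms): a scalar static model y = theta*x with an interval parameter and bounded noise, where a fault is signalled only when the measurement cannot be explained by any admissible parameter value.

def output_interval(x, theta_lo, theta_hi):
    # Worst-case bounds of y = theta * x over theta in [theta_lo, theta_hi].
    y1, y2 = theta_lo * x, theta_hi * x
    return min(y1, y2), max(y1, y2)

def fault_detected(y_meas, x, theta_lo, theta_hi, noise_bound):
    # Interval consistency test: consistent (no fault) iff the measurement
    # lies inside the predicted interval inflated by the noise bound.
    y_lo, y_hi = output_interval(x, theta_lo, theta_hi)
    return not (y_lo - noise_bound <= y_meas <= y_hi + noise_bound)

# With theta in [1.8, 2.2], |noise| <= 0.1 and input x = 3.0,
# any measurement in [5.3, 6.7] is consistent; 7.5 triggers a fault.
print(fault_detected(6.0, 3.0, 1.8, 2.2, 0.1))   # False (consistent)
print(fault_detected(7.5, 3.0, 1.8, 2.2, 0.1))   # True  (fault)

The thesis generalizes this idea to dynamic systems and to sets of residual generators, solving the resulting constraint satisfaction problems with interval consistency techniques rather than by direct enumeration.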
Abstract:
The Rio Tinto river in SW Spain is a classic example of acid mine drainage and the focus of an increasing amount of research, including environmental geochemistry, extremophile microbiology and Mars-analogue studies. Its 5000-year mining legacy has resulted in a wide range of point inputs, including spoil heaps and tunnels draining underground workings. The variety of inputs and the importance of the river as a research site make it an ideal location for investigating sulphide oxidation mechanisms at the field scale. Mass balance calculations showed that pyrite oxidation accounts for over 93% of the dissolved sulphate derived from sulphide oxidation in the Rio Tinto point inputs. Oxygen isotopes in water and sulphate were analysed from a variety of drainage sources and displayed δ18O(SO4-H2O) values from 3.9 to 13.6‰, indicating that different oxidation pathways occurred at different sites within the catchment. The most commonly used approach to interpreting field oxygen isotope data applies water and oxygen fractionation factors derived from laboratory experiments. We demonstrate that this approach cannot explain high δ18O(SO4-H2O) values in a manner that is consistent with recent models of pyrite and sulphoxyanion oxidation. In the Rio Tinto, high δ18O(SO4-H2O) values (11.2-13.6‰) occur in concentrated (Fe = 172-829 mM), low-pH (0.88-1.4), ferrous-iron-dominated (68-91% of total Fe) waters and are most simply explained by a mechanism involving a dissolved sulphite intermediate, sulphite-water oxygen equilibrium exchange and finally sulphite oxidation to sulphate with O2. In contrast, drainage from large waste blocks of acid volcanic tuff with pyritiferous veins also had low pH (1.7), but had a low δ18O(SO4-H2O) value of 4.0‰ and high concentrations of ferric iron (Fe(III) = 185 mM, total Fe = 186 mM), suggesting a pathway where ferric iron is the primary oxidant, water is the primary source of oxygen in the sulphate, and sulphate is released directly from the pyrite surface. However, problems remain with the sulphite-water oxygen exchange model, and recommendations are therefore made for future experiments to refine our understanding of oxygen isotopes in pyrite oxidation.
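For readers outside isotope geochemistry, the notation above unpacks as follows (standard definitions; VSMOW is assumed here as the usual reference for oxygen):

\delta^{18}\mathrm{O} = \left(\frac{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_{\text{sample}}}{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_{\text{VSMOW}}} - 1\right) \times 1000\ \text{‰},
\qquad
\delta^{18}\mathrm{O}_{(\mathrm{SO_4\text{-}H_2O})} = \delta^{18}\mathrm{O}_{\mathrm{SO_4}} - \delta^{18}\mathrm{O}_{\mathrm{H_2O}}.

The difference measures how far the sulphate oxygen is enriched relative to the coexisting water, which is what distinguishes the oxidation pathways discussed above.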
Abstract:
Composites of wind speeds, equivalent potential temperature, mean sea level pressure, vertical velocity, and relative humidity have been produced for the 100 most intense extratropical cyclones in the Northern Hemisphere winter for the 40-yr ECMWF Re-Analysis (ERA-40) and the high resolution global environment model (HiGEM). Features of conceptual models of cyclone structure—the warm conveyor belt, cold conveyor belt, and dry intrusion—have been identified in the composites from ERA-40 and compared to HiGEM. Such features can be identified in the composite fields despite the smoothing that occurs in the compositing process. The surface features and the three-dimensional structure of the cyclones in HiGEM compare very well with those from ERA-40. The warm conveyor belt is identified in the temperature and wind fields as a mass of warm air undergoing moist isentropic uplift and is very similar in ERA-40 and HiGEM. The rate of ascent is lower in HiGEM, associated with a shallower slope of the moist isentropes in the warm sector. There are also differences in the relative humidity fields in the warm conveyor belt. In ERA-40, the high values of relative humidity are strongly associated with the moist isentropic uplift, whereas in HiGEM these are not so strongly associated. The cold conveyor belt is identified as rearward flowing air that undercuts the warm conveyor belt and produces a low-level jet, and is very similar in HiGEM and ERA-40. The dry intrusion is identified in the 500-hPa vertical velocity and relative humidity. The structure of the dry intrusion compares well between HiGEM and ERA-40 but the descent is weaker in HiGEM because of weaker along-isentrope flow behind the composite cyclone. HiGEM’s ability to represent the key features of extratropical cyclone structure can give confidence in future predictions from this model.
Abstract:
Thirty‐three snowpack models of varying complexity and purpose were evaluated across a wide range of hydrometeorological and forest canopy conditions at five Northern Hemisphere locations, for up to two winter snow seasons. Modeled estimates of snow water equivalent (SWE) or depth were compared to observations at forest and open sites at each location. Precipitation phase and duration of above‐freezing air temperatures are shown to be major influences on divergence and convergence of modeled estimates of the subcanopy snowpack. When models are considered collectively at all locations, comparisons with observations show that it is harder to model SWE at forested sites than open sites. There is no universal “best” model for all sites or locations, but comparison of the consistency of individual model performances relative to one another at different sites shows that there is less consistency at forest sites than open sites, and even less consistency between forest and open sites in the same year. A good performance by a model at a forest site is therefore unlikely to mean a good model performance by the same model at an open site (and vice versa). Calibration of models at forest sites provides lower errors than uncalibrated models at three out of four locations. However, benefits of calibration do not translate to subsequent years, and benefits gained by models calibrated for forest snow processes are not translated to open conditions.