971 results for Convergence model
Abstract:
This paper presents a social simulation in which we add an additional layer of mass media communication to the social network 'bounded confidence' model of Deffuant et al. (2000). A population of agents on a lattice, with continuous opinions and bounded confidence, adjust their opinions on the basis of binary social network interactions between neighbours or communication with a fixed opinion. There are two mechanisms for interaction: 'social interaction' occurs between neighbours on a lattice, and 'mass communication' adjusts opinions based on an agent interacting with a fixed opinion. Two new variables are added: polarisation, the degree to which the two mass media opinions differ, and broadcast ratio, the number of social interactions for each mass media communication. Four dynamical regimes are observed: fragmented, double extreme convergence, a state of persistent opinion exchange leading to single extreme convergence, and a disordered state. Double extreme convergence is found where agents are less willing to change opinion and mass media communications are common, or where there is moderate willingness to change opinion and a high frequency of mass media communications. Single extreme convergence is found where there is moderate willingness to change opinion and a lower frequency of mass media communication. A period of persistent opinion exchange precedes single extreme convergence; it is characterized by the formation of two opposing groups of opinion separated by a gradient of opinion exchange. Even at very low frequencies of mass media communication, this results in a move to central opinions followed by a global drift to one extreme as one of the opposing groups of opinion comes to dominate. A similar pattern of findings is observed for von Neumann and Moore neighbourhoods.
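The core bounded-confidence update that the model above builds on can be sketched as follows. This is a minimal illustration of a Deffuant-style rule with an optional fixed-opinion 'media' interaction; the lattice neighbourhood structure, the polarisation variable, and the broadcast ratio are omitted, and all names and parameter values here are illustrative, not taken from the paper:

```python
import random

def deffuant_step(opinions, threshold, mu, media_opinions=None, p_media=0.0):
    """One interaction of a bounded-confidence model (after Deffuant et al. 2000).
    With probability p_media the chosen agent meets a fixed media opinion
    instead of a peer. Opinions only move when they differ by < threshold."""
    i = random.randrange(len(opinions))
    if media_opinions and random.random() < p_media:
        partner = random.choice(media_opinions)          # fixed media opinion
        if abs(opinions[i] - partner) < threshold:
            opinions[i] += mu * (partner - opinions[i])  # agent moves; media does not
    else:
        j = random.randrange(len(opinions))
        if i != j and abs(opinions[i] - opinions[j]) < threshold:
            shift = mu * (opinions[j] - opinions[i])     # symmetric compromise
            opinions[i] += shift
            opinions[j] -= shift
    return opinions
```

With `mu` in (0, 0.5], each update is a convex move, so opinions initialized in [0, 1] stay in [0, 1]; clusters form or merge depending on `threshold` and on how often media interactions occur.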
Abstract:
Thermonuclear explosions may arise in binary star systems in which a carbon-oxygen (CO) white dwarf (WD) accretes helium-rich material from a companion star. If the accretion rate allows a sufficiently large mass of helium to accumulate prior to ignition of nuclear burning, the helium surface layer may detonate, giving rise to an astrophysical transient. Detonation of the accreted helium layer generates shock waves that propagate into the underlying CO WD. This might directly ignite a detonation of the CO WD at its surface (an edge-lit secondary detonation) or compress the core of the WD sufficiently to trigger a CO detonation near the centre. If either of these ignition mechanisms works, the two detonations (helium and CO) can then release sufficient energy to completely unbind the WD. These 'double-detonation' scenarios for thermonuclear explosion of WDs have previously been investigated as a potential channel for the production of Type Ia supernovae from WDs of ~1 M⊙. Here we extend our 2D studies of the double-detonation model to significantly less massive CO WDs, the explosion of which could produce fainter, more rapidly evolving transients. We investigate the feasibility of triggering a secondary core detonation by shock convergence in low-mass CO WDs and the observable consequences of such a detonation. Our results suggest that core detonation is probable, even for the lowest CO core masses that are likely to be realized in nature. To quantify the observable signatures of core detonation, we compute spectra and light curves for models in which either an edge-lit or compression-triggered CO detonation is assumed to occur. We compare these to synthetic observables for models in which no CO detonation was allowed to occur. If significant shock compression of the CO WD occurs prior to detonation, explosion of the CO WD can produce a sufficiently large mass of radioactive iron-group nuclei to significantly affect the light curves.
In particular, this can lead to relatively slow post-maximum decline. If the secondary detonation is edge-lit, however, the CO WD explosion primarily yields intermediate-mass elements that affect the observables more subtly. In this case, near-infrared observations and detailed spectroscopic analysis would be needed to determine whether a core detonation occurred. We comment on the implications of our results for understanding peculiar astrophysical transients including SN 2002bj, SN 2010X and SN 2005E. © 2012 The Authors, Monthly Notices of the Royal Astronomical Society © 2012 RAS.
Abstract:
Community structure depends on both deterministic and stochastic processes. However, patterns of community dissimilarity (e.g. difference in species composition) are difficult to interpret in terms of the relative roles of these processes. Local communities can be more dissimilar (divergence), less dissimilar (convergence), or as dissimilar as a hypothetical control based on either null or neutral models. However, several mechanisms may result in the same pattern, or act concurrently to generate a pattern, and much recent research has focused on unravelling these mechanisms and their relative contributions. Using a simulation approach, we addressed the effect of a complex but realistic spatial structure in the distribution of the niche axis, and we analysed patterns of species co-occurrence and beta diversity as measured by dissimilarity indices (e.g. the Jaccard index), using either expectations under a null model or neutral dynamics (i.e., based on switching off the niche effect). The strength of niche processes, dispersal, and environmental noise strongly interacted, so that niche-driven dynamics may result in local communities that either diverge or converge depending on the combination of these factors. Thus, a fundamental result is that, in real systems, interacting processes of community assembly can be disentangled only by measuring traits such as niche breadth and dispersal. The ability to detect the signal of the niche was also dependent on the spatial resolution of the sampling strategy, which must account for the multiple-scale spatial patterns in the niche axis. Notably, some of the patterns we observed correspond to patterns of community dissimilarity previously observed in the field, and suggest mechanistic explanations for them or the data required to resolve them.
Our framework offers a synthesis of the patterns of community dissimilarity produced by the interaction of deterministic and stochastic determinants of community assembly in a spatially explicit and complex context.
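The kind of divergence/convergence comparison described above can be illustrated with a minimal sketch: compute the Jaccard dissimilarity between two communities and contrast it with a null distribution. The null model shown here (redrawing each community at random from the regional pool at fixed richness) is a deliberately simple generic choice for illustration, not the authors' procedure:

```python
import random

def jaccard_dissimilarity(a, b):
    """1 - |A ∩ B| / |A ∪ B| for two collections of species."""
    a, b = set(a), set(b)
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def null_dissimilarities(comm_a, comm_b, species_pool, n_iter=999, seed=0):
    """Null expectation for the pair: redraw each community at random
    (same species richness) from the regional pool. An intentionally
    simple null model, for illustration only."""
    rng = random.Random(seed)
    pool = sorted(species_pool)
    return [
        jaccard_dissimilarity(rng.sample(pool, len(set(comm_a))),
                              rng.sample(pool, len(set(comm_b))))
        for _ in range(n_iter)
    ]
```

An observed dissimilarity far above the bulk of the null distribution would be read as divergence, far below it as convergence, and within it as consistency with the null.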
Abstract:
In spite of the controversy that they have generated, neutral models provide ecologists with powerful tools for creating dynamic predictions about beta-diversity in ecological communities. Ecologists can achieve an understanding of the assembly rules operating in nature by noting when and how these predictions are met or not met. This is particularly valuable for those groups of organisms that are challenging to study under natural conditions (e.g., bacteria and fungi). Here, we focused on arbuscular mycorrhizal fungal (AMF) communities and performed an extensive literature search that allowed us to synthesize the information in 19 data sets with the minimal requisites for creating a null hypothesis in terms of the community dissimilarity expected under neutral dynamics. To achieve this task, we calculated the first estimates of neutral parameters for several AMF communities from different ecosystems. Communities were shown either to be consistent with neutrality or to diverge or converge with respect to the levels of compositional dissimilarity expected under neutrality. These data support the hypothesis that divergence occurs in systems where the effect of limited dispersal is overwhelmed by anthropogenic disturbance or extreme biological and environmental heterogeneity, whereas communities converge when systems have the potential for niche divergence within a relatively homogeneous set of environmental conditions. Regarding the cases consistent with neutrality, the sampling designs employed may have covered relatively homogeneous environments in which the effects of dispersal limitation overwhelmed minor differences among AMF taxa that would lead to environmental filtering. Using neutral models, we showed for the first time for a soil microbial group the conditions under which different assembly processes may determine different patterns of beta-diversity.
Our synthesis is an important step showing how the application of general ecological theories to a model microbial taxon has the potential to shed light on the assembly and ecological dynamics of communities.
Abstract:
Discusses the amendments to the Polish Competition Act 2007 adopted in June 2014 which aim to enhance the effectiveness of antitrust enforcement, including the introduction of: (1) civil fines for individuals; (2) a "leniency plus" programme based on the US model; (3) a settlement procedure; and (4) extended inspection powers for the Competition Authority. Assesses the likely effectiveness of the reforms.
Abstract:
Mathematical models are useful tools for simulation, evaluation, optimal operation and control of solar cells and proton exchange membrane fuel cells (PEMFCs). To identify the model parameters of these two types of cells efficiently, a biogeography-based optimization algorithm with mutation strategies (BBO-M) is proposed. BBO-M uses the structure of the biogeography-based optimization (BBO) algorithm, and both a mutation operator motivated by the differential evolution (DE) algorithm and chaos theory are incorporated into the BBO structure to improve the global searching capability of the algorithm. Numerical experiments have been conducted on ten benchmark functions with 50 dimensions, and the results show that BBO-M can produce solutions of high quality and has a fast convergence rate. The proposed BBO-M is then applied to the model parameter estimation of the two types of cells. The experimental results clearly demonstrate the power of the proposed BBO-M in estimating model parameters of both solar cells and fuel cells.
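The flavour of such an algorithm can be sketched as a minimal BBO skeleton whose mutation step borrows a DE/rand/1 difference vector. This is a generic illustration under our own assumptions; the paper's chaotic-map component, elitism strategy and exact parameter settings are omitted:

```python
import random

def bbo_m_minimize(f, bounds, pop=20, iters=100, F=0.5, p_mut=0.1, seed=0):
    """Sketch of biogeography-based optimization with a DE-style mutation,
    loosely in the spirit of BBO-M (illustrative, not the paper's algorithm).
    f: objective to minimize, taking a list of floats; bounds: (lo, hi) per dim."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(iters):
        X.sort(key=f)                                 # best habitat first
        lam = [(i + 1) / pop for i in range(pop)]     # immigration rises with rank
        mu = [1 - l for l in lam]                     # emigration falls with rank
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:             # migrate a feature in,
                    j = rng.choices(range(pop), weights=mu)[0]
                    X[i][d] = X[j][d]                 # favouring good habitats
            if rng.random() < p_mut:                  # DE/rand/1-style mutation
                r1, r2, r3 = rng.sample(range(pop), 3)
                X[i] = [min(max(X[r1][d] + F * (X[r2][d] - X[r3][d]),
                              bounds[d][0]), bounds[d][1]) for d in range(dim)]
    return min(X, key=f)
```

For parameter identification, `f` would be the fitting error between measured and modelled cell characteristics, and `bounds` the physically plausible parameter ranges.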
Abstract:
This research examines media integration in China, choosing two Chinese newspaper groups as cases for comparative study. The study analyses the convergence strategies of these Chinese groups by reference to a role model of convergence developed from a literature review of studies of media convergence in the UK – in particular the Guardian (GNM), Telegraph Media Group (TMG), the Daily Mail and the Times. The UK cases serve to establish the characteristics, causes and consequences of different forms of convergence and to formulate a model of convergence. The model specifies the levels of newsroom convergence and the sub-units of analysis which will be used to collect empirical data from Chinese news organisations and compare their strategies, practices and results with the UK experience. The literature review shows that there is a need for more comparative studies of media convergence strategy in general, and particularly in relation to Chinese media; the study will therefore address a gap in the understanding of media convergence in China. The innovations of this study are threefold. First, it develops a new and comprehensive model of media convergence and a detailed understanding of the reasons why media companies pursue differing strategies in managing convergence across a wide range of units of analysis. Second, it compares the multimedia strategies of media groups under radically different political systems. Since there is no standard research method or systematic theoretical framework for the study of newsroom convergence, the study develops an integrated perspective, using triangulation of textual analysis, field observation and interviews to explain systematically what the newsroom structure was like in the past, how the copy flow changed, and why. Finally, this case study of media groups can provide an industrial model or framework for other media groups.
Abstract:
Dragonflies show flight performance superior to that of most other insect species and birds. They are equipped with two pairs of independently controlled wings, granting unmatched flying performance and robustness. This paper presents an adaptive scheme for controlling a nonlinear model inspired by a dragonfly-like robot. A hybrid adaptive (HA) law is proposed for adjusting the parameters based on the tracking error. At the current stage of the project, it is considered essential to develop computational simulation models based on the dynamics in order to test control strategies and algorithms, parts of the system (such as different wing configurations or the tail), as well as the complete system. The performance analysis demonstrates the superiority of the HA law over the direct adaptive (DA) method in terms of faster and improved tracking and parameter convergence.
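The abstract does not give the form of the hybrid adaptive law, but the general idea of adjusting a parameter from the tracking error can be sketched with a generic gradient (MIT-rule) adaptive step. This is our own minimal illustration, not the paper's HA law:

```python
def mit_rule_step(theta, r, y, y_ref, gamma=0.01):
    """One discrete step of a simple gradient (MIT-rule) adaptive law:
    move the adjustable parameter theta against the tracking error e = y - y_ref,
    scaled by the regressor r and adaptation gain gamma. Generic sketch only."""
    e = y - y_ref
    return theta - gamma * e * r
```

For instance, for a static plant y = k·u with unknown gain k and control u = θ·r, iterating this step drives k·θ toward 1 for a small enough γ, so the output tracks the reference.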
Abstract:
To what extent should public utilities regulation be expected to converge across countries? When it occurs, will it generate good outcomes? Building on the core proposition of the New Institutional Economics that similar regulations generate different outcomes depending on their fit with the underlying domestic institutions, we develop a simple model and explore its implications by examining the diffusion of local loop unbundling (LLU) regulations. We argue that one should expect some convergence in public utility regulation, but with a significant degree of local experimentation, and that this process will have very different regulatory impacts depending on the underlying domestic institutions.
Abstract:
This thesis tested a path model of the relationships of reasons for drinking and reasons for limiting drinking with consumption of alcohol and drinking problems. It was hypothesized that reasons for drinking would be composed of positively and negatively reinforcing reasons, and that reasons for limiting drinking would be composed of personal and social reasons. Problem drinking was operationalized as consisting of two factors, consumption and drinking problems, with a positive relationship between the two. It was predicted that positively and negatively reinforcing reasons for drinking would be associated with heavier consumption and, in turn, more drinking problems, through level of consumption. Negatively reinforcing reasons were also predicted to be associated with drinking problems directly, independent of level of consumption. It was hypothesized that reasons for limiting drinking would be associated with lower levels of consumption and would be related to fewer drinking problems, through level of consumption. Finally, among women, reasons for limiting drinking were expected to be associated with drinking problems directly, independent of level of consumption. The sample was taken from the second phase of the Niagara Young Adult Health Study, a community sample of young adult men and women. Measurement models of reasons for drinking, reasons for limiting drinking, and problem drinking were tested using confirmatory factor analysis. After adequate fit of each measurement model was obtained, the complete structural model, with all hypothesized paths, was tested for goodness of fit. Cross-group equality constraints were imposed on all models to test for gender differences. The results provided evidence supporting the hypothesized structure of reasons for drinking and problem drinking. A single-factor model of reasons for limiting drinking was used in the analyses because a two-factor model was inadequate. Support was obtained for the structural model.
For example, the results revealed independent influences of Positively Reinforcing Reasons for Drinking, Negatively Reinforcing Reasons for Drinking, and Reasons for Limiting Drinking on consumption. In addition, Negatively Reinforcing Reasons helped to account for Drinking Problems independent of the amount of alcohol consumed. Although an additional path from Reasons for Limiting Drinking to Drinking Problems was hypothesized for women, it was of marginal significance and did not improve the model's fit. As a result, no sex differences in the model were found. This may be a result of the convergence of drinking patterns for men and women. Furthermore, it is suggested that gender differences may only be found in clinical samples of problem drinkers, where the relative level of consumption for women and men is similar.
Abstract:
Responding to a series of articles in the sport management literature calling for more diversity in terms of areas of interest or methods, this study warns against the danger of excessively fragmenting this field of research. The works of Kuhn (1962) and Pfeffer (1993) are taken as the basis of an argument that connects convergence with scientific strength. However, being aware of the large number of counterarguments directed at this line of reasoning, a new model of convergence is proposed, which focuses on clusters of research contributions with similar areas of interest, methods, and concepts. The existence of these clusters is determined with the help of a bibliometric analysis of publications in three sport management journals. This examination determines that there are justified reasons to be concerned about the level of convergence in the field, pointing to a reduced ability to create large clusters of contributions in similar areas of interest.
Abstract:
The relative stability of aggregate labor's share constitutes one of the great macroeconomic ratios. However, relative stability at the aggregate level masks the unbalanced nature of industry labor's shares – the Kuznets stylized facts underlie those of Kaldor. We present a two-sector model of unbalanced economic development with induced innovation – one sector labor-only and the other using both capital and labor – that can rationalize these phenomena as well as several other empirical regularities of actual economies. Specifically, the model features (i) one sector ("goods" production) becoming increasingly capital-intensive over time; (ii) an increasing relative price and share in total output of the labor-only sector ("services"); and (iii) diverging sectoral labor's shares despite an aggregate labor's share that converges from above to a value between 0 and unity. Furthermore, the model (iv) supports either a neoclassical steady state or long-run endogenous growth, giving it the potential to account for a wide range of real-world development experiences.
Abstract:
Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the 3-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortex to their possible extra-tropical transition. Results have been compared with reanalyses (ERA40 and JRA25) and observed tropical storms during the period 1978-1999 for the Northern Hemisphere. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in the reanalyses, but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic both in space and time, but the model, and even more so the reanalyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to ENSO, with a smaller number of Atlantic storms during El Niño, in agreement with previous studies. The global divergence circulation responds to El Niño by setting up a large-scale convergence flow with its centre over the central Pacific and enhanced subsidence over the tropical Atlantic. At the same time there is an increase in the vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is a good correspondence between the model and ERA40, except that the divergence circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the Western Pacific and in the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific by the model is related to an overly strong response to the tropical Pacific SST anomalies. The overestimation in the North Indian Ocean is likely to be due to an overprediction of the intensity of monsoon depressions, which are then classified as intense tropical storms.
Nevertheless, overall results are encouraging and will further contribute to increased confidence in simulating intense tropical storms with high-resolution climate models.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts used in various non-linear system-identification algorithms for achieving good model generalisation are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means for identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
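Model selection by cross-validation for a linear-in-the-parameters model can be illustrated with a small sketch: fit polynomial basis models of increasing size by least squares and pick the degree with the lowest k-fold cross-validation error. All choices here (polynomial basis, plain k-fold splitting, normal-equation solver) are ours for illustration, not any specific algorithm from the review:

```python
import random

def design(xs, degree):
    """Linear-in-the-parameters design matrix with polynomial basis functions."""
    return [[x ** p for p in range(degree + 1)] for x in xs]

def solve(A, b):
    """Least squares via the normal equations A^T A w = A^T b,
    solved by Gauss-Jordan elimination with partial pivoting."""
    n = len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
         + [sum(A[r][i] * b[r] for r in range(len(A)))] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [a - fac * bc for a, bc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def cv_select_degree(xs, ys, max_degree=5, folds=5, seed=0):
    """Choose the polynomial degree minimising k-fold cross-validation error."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    chunks = [idx[i::folds] for i in range(folds)]
    best, best_err = 0, float("inf")
    for deg in range(max_degree + 1):
        err = 0.0
        for k in range(folds):
            held = set(chunks[k])
            tr = [i for i in idx if i not in held]
            w = solve(design([xs[i] for i in tr], deg), [ys[i] for i in tr])
            for i in held:
                pred = sum(w[p] * xs[i] ** p for p in range(deg + 1))
                err += (pred - ys[i]) ** 2
        if err < best_err:
            best, best_err = deg, err
    return best
```

On data generated from a noisy quadratic, degrees 0 and 1 underfit badly and are rejected, while the cross-validation error penalises the mild overfitting of higher degrees.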