909 results for models of communication
Abstract:
Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict alcohol use in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. The questionnaire gathered information on participants' alcohol use, attitudes and personal values. The results show that the attitudes model correctly classifies 76.3% of cases; likewise, the model for level of alcohol use correctly classifies 82% of cases. We conclude that a set of individual values influences drinking and attitudes to alcohol use, which provides a potentially powerful instrument for developing preventive intervention programs.
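The abstract does not name the classification technique behind the reported correct-classification rates. As a minimal sketch only, assuming logistic regression and hypothetical value-score variables (none of which come from the paper), such a model and its classification rate could be computed like this:

```python
# Minimal sketch (not the authors' code): a logistic-regression classifier
# predicting a binary attitude outcome from individual-value scores, reporting
# the share of correctly classified cases. All data here are synthetic and the
# predictors are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1500                                    # sample size taken from the abstract
X = rng.normal(size=(n, 3))                 # hypothetical individual-value scores
y = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"correctly classified: {model.score(X_test, y_test):.1%}")
```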
Abstract:
In an open system, each disequilibrium gives rise to a force. Each force causes a flow process, represented by a flow variable that is formally written as an equation called the flow equation, and, if each flow tends to equilibrate the system, these equations mathematically represent the tendency toward that equilibrium. In this paper, building on the concepts of conjugate forces and fluxes and the dissipation function developed by Onsager and Prigogine, the authors propose the following hypothesis: in Prigogine's theorem, the flow is replaced by its flow equation, or by a flow orbital, with the conjugate force treated as a gradient. This makes it possible to obtain a dissipation function for each flow equation and an orbital dissipation function.
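For orientation, the standard Onsager/Prigogine relations the abstract refers to (a textbook reference form, not equations taken from this paper) can be written as:

```latex
% Linear non-equilibrium thermodynamics: fluxes J_i driven by conjugate forces X_k,
% Onsager reciprocal relations, and the (non-negative) dissipation function \Phi.
\begin{align}
  J_i  &= \sum_k L_{ik}\, X_k, \qquad L_{ik} = L_{ki}
         \quad \text{(Onsager reciprocal relations)} \\
  \Phi &= \sum_i J_i X_i \;\ge\; 0
\end{align}
```

The paper's hypothesis, as summarized above, substitutes each flux in Prigogine's theorem by its flow equation (or a flow orbital), with the conjugate force treated as a gradient, so that a dissipation function is associated with each flow equation.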
Abstract:
Dual-phase-lagging (DPL) models constitute a family of non-Fourier models of heat conduction that allow for the presence of time lags in the heat flux and the temperature gradient. These lags may need to be considered when modeling microscale heat transfer, and DPL models have therefore found application in recent years in a wide range of theoretical and technical heat transfer problems. Consequently, analytical solutions and methods for computing numerical approximations have been proposed for particular DPL models in different settings. In this work, a compact difference scheme for second-order DPL models is developed, providing higher-order accuracy than a previously proposed method. The scheme is shown to be unconditionally stable and convergent, and its accuracy is illustrated with numerical examples.
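For reference, the dual-phase-lagging constitutive relation and its second-order Taylor expansion in the standard form used in the DPL literature (not copied from this particular paper) read:

```latex
% DPL constitutive relation: the heat flux q lags by \tau_q, the temperature
% gradient lags by \tau_T.
\begin{equation}
  \mathbf{q}(\mathbf{x}, t+\tau_q) = -k\,\nabla T(\mathbf{x}, t+\tau_T)
\end{equation}
% Second-order DPL model: both lags expanded to second order in Taylor series.
\begin{equation}
  \mathbf{q} + \tau_q \frac{\partial \mathbf{q}}{\partial t}
             + \frac{\tau_q^{2}}{2}\frac{\partial^{2} \mathbf{q}}{\partial t^{2}}
  = -k\left(\nabla T + \tau_T \frac{\partial}{\partial t}\nabla T
             + \frac{\tau_T^{2}}{2}\frac{\partial^{2}}{\partial t^{2}}\nabla T\right)
\end{equation}
```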
Abstract:
In this paper, the authors extend and generalize the system dynamics methodology that uses differential equations as equations of state, allowing first-order transformed functions to be applied not only to the primitive or original variables but also to more complex expressions derived from them, and extending the rules that govern the generation of transforms of order higher than zero (the variable, or primitive). It is also demonstrated that for every model of a complex reality there exists a model that is complex from both the syntactic and the semantic point of view. The theory is illustrated with a concrete model: the MARIOLA model.
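The abstract describes the idea only in general terms. As a loose, hypothetical illustration of the general pattern (state variables evolving through differential equations of state, with a first-order transform driven by a derived expression rather than a primitive variable), and emphatically not the MARIOLA model itself:

```python
# Hypothetical sketch: x is a primitive state variable with its own equation of
# state; z is a first-order transform applied to an expression derived from x.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, z = s
    derived = x ** 2 - x          # a more complex expression derived from x
    dx = -0.5 * x + 1.0           # equation of state for the primitive variable
    dz = derived - z              # first-order transform of the derived expression
    return [dx, dz]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], dense_output=True)
print(sol.y[:, -1])               # final values of x and z
```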
Abstract:
From the mid-1980s on, a new attitude towards self-determination appeared in Western European integration. With the Maastricht Treaty of 1992 and, later, with the Amsterdam Treaty of 1997, the member countries of the European Community manifested their determination to be active players in the new international order. Accepting and instituting the single market and monetary union, however, posed a challenge of compatibility between the traditional model of welfare European capitalism and the impositions coming from globalization under the neo-liberal model of Anglo-Saxon capitalism. This issue is examined here from two perspectives: the first reviews the implications that globalization has had for the European model of capitalism, and the second the complications for monetary management as Europe moves from a nationally regulated to a union-regulated financial structure.
Abstract:
I will start by discussing some aspects of Kagitcibasi’s Theory of Family Change: its current empirical status and, more importantly, its focus on universal human needs and the consequences of this focus. Family Change Theory’s focus on the universality of the basic human needs of autonomy and relatedness and its culture-level emphasis on cultural norms and family values as reflecting a culture’s capacity for fulfilling its members’ respective needs shows that the theory advocates balanced cultural norms of independence and interdependence. As a normative theory it therefore postulates the necessity of a synthetic family model of emotional interdependence as an alternative to extreme models of total independence and total interdependence. Generalizing from this I will sketch a theoretical model where a dynamic and dialectical process of the fit between individual and culture and between culture and universal human needs and related social practices is central. I will discuss this model using a recent cross-cultural project on implicit theories of self/world and primary/secondary control orientations as an example. Implications for migrating families and acculturating individuals are also discussed.
Abstract:
One to two percent of all children are born with a developmental disorder requiring pediatric hospital admission. For many such syndromes, the molecular pathogenesis remains poorly characterized. Parallel developmental disorders in other species could provide complementary models for human rare diseases by uncovering new candidate genes, improving the understanding of the molecular mechanisms and opening possibilities for therapeutic trials. We performed various experiments, e.g. combined genome-wide association and next-generation sequencing, to investigate the clinico-pathological features and genetic causes of three developmental syndromes in dogs: craniomandibular osteopathy (CMO), a previously undescribed skeletal syndrome, and dental hypomineralization, for which we identified pathogenic variants in the canine SLC37A2 (truncating splicing enhancer variant), SCARF2 (truncating 2-bp deletion) and FAM20C (missense variant) genes, respectively. CMO is a clinical equivalent of infantile cortical hyperostosis (Caffey disease), for which SLC37A2 is a new candidate gene. SLC37A2 is a poorly characterized member of a glucose-phosphate transporter family with no previous disease associations. It is expressed in many tissues, including cells of the macrophage lineage, e.g. osteoclasts, suggesting a disease mechanism in which impaired glucose homeostasis in osteoclasts compromises their function in the developing bone, leading to hyperostosis. Mutations in SCARF2 and FAM20C have been associated with the human van den Ende-Gupta and Raine syndromes, which include numerous features similar to those in the affected dogs. Given the growing interest in the molecular characterization and treatment of human rare diseases, our study presents three novel, physiologically relevant models for further research and therapeutic approaches, while providing the molecular identity for the canine conditions.
Abstract:
We report quantitative results from three brittle thrust wedge experiments, comparing numerical results directly with each other and with corresponding analogue results. We first test whether the participating codes reproduce predictions from analytical critical taper theory. Eleven codes pass the stable wedge test, showing negligible internal deformation and maintaining the initial surface slope upon horizontal translation over a frictional interface. Eight codes participated in the unstable wedge test that examines the evolution of a wedge by thrust formation from a subcritical state to the critical taper geometry. The critical taper is recovered, but the models show two deformation modes characterised by either mainly forward dipping thrusts or a series of thrust pop-ups. We speculate that the two modes are caused by differences in effective basal boundary friction related to different algorithms for modelling boundary friction. The third experiment examines stacking of forward thrusts that are translated upward along a backward thrust. The results of the seven codes that run this experiment show variability in deformation style, number of thrusts, thrust dip angles and surface slope. Overall, our experiments show that numerical models run with different numerical techniques can successfully simulate laboratory brittle thrust wedge models at the cm-scale. In more detail, however, we find that it is challenging to reproduce sandbox-type setups numerically, because of frictional boundary conditions and velocity discontinuities. We recommend that future numerical-analogue comparisons use simple boundary conditions and that the numerical Earth Science community defines a plasticity test to resolve the variability in model shear zones.
Abstract:
We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or for making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing, and pop-up width from model to nature.
Abstract:
Analogue and finite element numerical models with frictional and viscous properties are used to model thrust wedge development. Comparison between model types yields valuable information about analogue model evolution, scaling laws and the relative strengths and limitations of the techniques. Both model types show a marked contrast in structural style between 'frictional-viscous domains' underlain by a thin viscous layer and purely 'frictional domains'. Closely spaced thrusts form a narrow and highly asymmetric fold-and-thrust belt in the frictional domain, characterized by in-sequence propagation of forward thrusts. In contrast, the frictional-viscous domain shows a wide, low-taper wedge and a thrust belt with a more symmetrical vergence, with both forward and back thrusts. The frictional-viscous numerical models show that the viscous layer initially deforms by simple shear as deformation propagates along it, while localized deformation in the overlying frictional layers forms a pop-up structure. In both domains, thrust shear zones in the numerical model are generally steeper than the equivalent faults in the analogue model, because the finite element code uses a non-associated plasticity flow law. Nevertheless, the qualitative agreement between analogue and numerical models is encouraging. It shows that the continuum approximation used in numerical models can be used to model frictional materials, such as sand, provided care is taken to properly scale the experiments and some of the limitations are taken into account.
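The abstract attributes the steeper numerical shear zones to a non-associated plasticity flow law. For context, the standard textbook form of non-associated Mohr-Coulomb plasticity (a reference statement, not equations from this paper) is:

```latex
% Non-associated Mohr-Coulomb plasticity: the yield function F uses the friction
% angle \phi, while the plastic potential G uses a dilation angle \psi < \phi,
% so the plastic flow direction is not normal to the yield surface.
\begin{align}
  F &= \tau - \sigma_n \tan\phi - c, \\
  G &= \tau - \sigma_n \tan\psi, \qquad
  \dot{\varepsilon}^{\,p}_{ij} = \dot{\lambda}\,\frac{\partial G}{\partial \sigma_{ij}},
  \qquad \psi < \phi .
\end{align}
```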
Abstract:
A statistical analysis of 15 deep-sea cores in the eastern North Atlantic off NW Africa revealed the typical fluctuation pattern of distinct species groups, as has been described from various parts of the world ocean. Only the "WBF-group" appears to be correlated with global climatic changes, i.e. warmer periods such as the Eemian and the Atlanticum. A partly antagonistic "High Productivity group" (HPR-group) is in general not linked with global changes but with times of increased fertility in the surface water and the resulting flux of organic matter reaching the bottom. The groups were extracted from a cluster analysis of more than 150 surface samples (HPR-group) and a factor analysis of selected cores (WBF-group). In contrast to previous studies, the observed fluctuations cannot be explained by drastic changes in bottom water masses, but by the pulsation of a distinct "High Productivity Patch" in space and time. At present, this patch is located below the well-known upwelling area between 22° and 12° northern latitude. It shifted to the north (up to 27°N) during the latest glacial period (18 ky), indicating an equivalent shift of upwelling productivity caused by advection of nutrient-rich upwelling SACW waters, probably during most of isotopic stages 2 and 3.
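The species groups were extracted with cluster analysis of surface samples and factor analysis of selected cores. A minimal, hypothetical sketch of that two-step grouping workflow (synthetic data only, not the core data set and not the authors' software) could look like this:

```python
# Sketch of the two-step procedure described in the abstract: cluster species by
# their abundance pattern across surface samples, then extract factors from
# downcore counts. All data below are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
surface = rng.random((150, 20))    # 150 surface samples x 20 species (relative abundances)
downcore = rng.random((300, 20))   # downcore samples from selected cores

# Cluster species (columns) by their distribution across surface samples.
species_clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(surface.T)
# Factor analysis of the downcore counts.
fa = FactorAnalysis(n_components=2, random_state=1).fit(downcore)

print("species cluster labels:", species_clusters)
print("factor loadings shape:", fa.components_.shape)
```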