Abstract:
In psycholinguistic research, it is widely assumed that the evaluation of information with respect to its truth or plausibility (epistemic validation; Richter, Schroeder & Wöhrmann, 2009) is a strategic, optional process that follows comprehension (e.g., Gilbert, 1991; Gilbert, Krull & Malone, 1990; Gilbert, Tafarodi & Malone, 1993; Herbert & Kübler, 2011). However, a growing number of studies call this two-step model of comprehension and validation into question, directly or indirectly. In particular, findings on Stroop-like stimulus-response compatibility effects that arise when positive and negative responses must be given orthogonally to the task-irrelevant truth value of sentences (e.g., a positive response after reading a false sentence, or a negative response after reading a true sentence; the epistemic Stroop effect, Richter et al., 2009) suggest that readers perform a non-strategic check of the validity of information already during comprehension. Building on these findings, the aim of this dissertation was a further examination of the assumption that comprehension involves a non-strategic, routinized, knowledge-based validation process (epistemic monitoring; Richter et al., 2009). To this end, three empirical studies with different emphases were conducted. Study 1 investigated whether evidence of epistemic monitoring can also be found for information that is not clearly true or false but merely more or less plausible. Using the epistemic Stroop paradigm of Richter et al. (2009), a compatibility effect of task-irrelevant plausibility on the latencies of positive and negative responses was demonstrated in two different experimental tasks, which suggests that epistemic monitoring also takes into account gradual differences in the fit between information and world knowledge. Moreover, the results show that the epistemic Stroop effect is indeed based on plausibility and not on the differing predictability of plausible and implausible information. The aim of Study 2 was to test the hypothesis that epistemic monitoring does not require an evaluative mindset. In contrast to the findings of other authors (Wiswede, Koranyi, Müller, Langner, & Rothermund, 2013), this study showed a compatibility effect of task-irrelevant truth value on response latencies in a completely non-evaluative task. The results suggest that epistemic monitoring does not depend on an evaluative mindset, but possibly on the depth of processing. Study 3 examined the relationship between comprehension and validation by investigating the online effects of plausibility and predictability on eye movements during the reading of short texts. In addition, the potential modulation of these effects by epistemic markers that signal the certainty of information (e.g., certainly or perhaps) was examined. Consistent with the assumption of a fast and non-strategic epistemic monitoring process, interactive effects of plausibility and the presence of epistemic markers emerged on indicators of early comprehension processes. This suggests that the communicated certainty of information is taken into account by the monitoring process. Overall, the findings argue against a conceptualization of comprehension and validation as non-overlapping stages of information processing. Rather, an evaluation of truth or plausibility based on world knowledge appears to be, at least to some extent, an obligatory and non-strategic component of language comprehension. The implications of the findings for current models of language comprehension and recommendations for future research on the relationship between comprehension and validation are outlined.
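To illustrate the kind of compatibility analysis the epistemic Stroop paradigm involves, here is a minimal Python sketch run on simulated response latencies; the trial counts, latency distributions, and effect size are hypothetical and are not data from the dissertation.

```python
import numpy as np

# Minimal sketch of an epistemic Stroop compatibility analysis on simulated
# data. A trial is "compatible" when the required response matches the
# task-irrelevant truth value of the sentence (positive response after a
# true sentence, negative after a false one), and "incompatible" otherwise.
# All numbers below are simulated, not the dissertation's data.
rng = np.random.default_rng(0)
n = 200  # trials per condition (hypothetical)

rt_compatible = rng.normal(620, 80, n)    # latencies in ms (assumed)
rt_incompatible = rng.normal(650, 80, n)  # assumed ~30 ms slower on average

effect = rt_incompatible.mean() - rt_compatible.mean()
pooled_sd = np.sqrt((rt_compatible.var(ddof=1) + rt_incompatible.var(ddof=1)) / 2)
print(f"compatibility effect: {effect:.1f} ms (Cohen's d = {effect / pooled_sd:.2f})")
```

A positive effect of this kind, obtained even though truth value is irrelevant to the task, is what the abstract treats as evidence for non-strategic epistemic monitoring.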
Abstract:
In the decades since Schumpeter's influential writings, economists have examined the role of innovation in particular industries at both the firm and the industry level. Researchers describe innovation as the main trigger of industry dynamics, while policy makers argue that research and education are directly linked to economic growth and welfare; research and education are therefore an important objective of public policy. Firms and public research institutions are regarded as the main actors in the creation of new knowledge, which is ultimately brought to the market through innovations, and policy makers actively support innovation. Both groups, policy makers and researchers, agree that innovation plays a central role, yet researchers still neglect the role that public policy plays in industrial dynamics. The main objective of this work is therefore to learn more about the interdependencies of innovation, policy and public research in industrial dynamics. The overarching research question of this dissertation asks whether patterns of industry evolution – from evolution to co-evolution – can be analyzed on the basis of empirical studies of the role of innovation, policy and public research in industrial dynamics. The work starts with a hypothesis-based investigation of traditional approaches to industrial dynamics: a test of a basic assumption of the core models of industrial dynamics and an analysis of evolutionary patterns, using an industry driven by public policy as the example. It then moves to a more explorative approach, investigating co-evolutionary processes. The underlying research questions include the following: Do large firms have a size advantage attributable to cost spreading? Do firms that plan to grow innovate more? What role does public policy play in the evolutionary patterns of an industry? Are the same evolutionary patterns observable as those described in industry life cycle (ILC) theories? And is it possible to observe regional co-evolutionary processes of science, innovation and industry evolution? Drawing on two empirical contexts – the laser and the photovoltaic industry – this dissertation addresses these questions and combines an evolutionary approach with a co-evolutionary approach. The first chapter introduces the topic and the fields on which this dissertation builds. The second chapter provides a new test of the Cohen and Klepper (1996) model of cost spreading, which explains the relationship between innovation, firm size and R&D, using the example of the photovoltaic industry in Germany. First, it is analyzed whether the cost-spreading mechanism explains size advantages in this industry; this rests on the assumption that the incentives to invest in R&D increase with ex-ante output. It is then investigated whether firms that plan to grow show more innovative activity. The results indicate that cost spreading does explain size advantages in this industry and that growth plans lead to more innovative activity. Moreover, the role that public policy plays in industry evolution has not yet been conclusively analyzed in the field of industrial dynamics. In the case of Germany, the introduction of demand-inducing policy instruments stimulated market and industry growth. While this policy immediately accelerated market volume, its effect on industry evolution is more ambiguous. Chapter three therefore analyzes this relationship using a model of industry evolution in which demand-inducing policies are discussed as a possible trigger of development. The findings suggest that such instruments can have the same effect as a technical advance in fostering the growth of an industry and its shakeout. The fourth chapter explores the regional co-evolution of firm population size, private-sector patenting and public research in the empirical context of German laser research and manufacturing over more than 40 years, from the emergence of the industry to the mid-2000s. The qualitative as well as quantitative evidence suggests a co-evolutionary process of mutual interdependence rather than a unidirectional effect of public research on private-sector activities. Chapter five concludes with a summary, the contribution of this work, its implications and an outlook on further research.
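To make the cost-spreading argument concrete, here is a stylized sketch of its logic in illustrative notation (not the formalism of Cohen and Klepper, 1996): an R&D outlay is a fixed cost whose returns are applied across the firm's output, so the incentive to invest grows with expected output.

```latex
% Stylized cost-spreading sketch (illustrative notation, not the paper's):
% an R&D outlay r yields a value v(r) per unit of output, with v concave,
% so a firm with (ex-ante) output x chooses r to maximize
\[
  \pi(r) \;=\; x\,v(r) \;-\; r .
\]
% The first-order condition x\,v'(r^{*}) = 1 implies, by implicit
% differentiation, dr^{*}/dx = -v'(r^{*}) / \bigl(x\,v''(r^{*})\bigr) > 0:
% optimal R&D rises with output, which is the size advantage tested above.
```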
Abstract:
This report considers three case studies (diabetes, dementia and obesity) to set up a framework for assessing the systemic influence of technologies in long-term care, using a problem-driven approach to health care. Such technologies could be an enabling factor or a catalyser of advances taking place in the health and social sectors. They offer opportunities to support and amplify relevant organisational changes in the context of innovative care models, which stem from the overall policies and regulations of a national or regional jurisdiction aimed at ensuring the future sustainability of health and social care.
Abstract:
The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the GCM. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the GCM, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
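For context, a common form of ocean-surface heat flux adjustment can be sketched as follows; this is a generic textbook formulation, not necessarily the exact scheme used in the study.

```latex
% Generic flux-adjustment form (illustrative, not necessarily the paper's):
% the heat flux passed to the ocean is the model flux plus a fixed,
% seasonally varying correction diagnosed from a run relaxed to observed SSTs,
\[
  Q_{\mathrm{adj}} \;=\; Q_{\mathrm{model}}
  \;+\; \gamma\,\overline{\left(T_{\mathrm{obs}} - T_{\mathrm{model}}\right)} ,
\]
% where \gamma is a relaxation coefficient (W m^{-2} K^{-1}) and the overbar
% denotes a climatological mean, so the correction does not damp interannual
% variability such as ENSO.
```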
Abstract:
The global radiation balance of the atmosphere is still poorly observed, particularly at the surface. We investigate the observed radiation balance at (1) the surface using the ARM Mobile Facility in Niamey, Niger, and (2) the top of the atmosphere (TOA) over West Africa using data from the Geostationary Earth Radiation Budget (GERB) instrument on board Meteosat-8. Observed radiative fluxes are compared with predictions from the global numerical weather prediction (NWP) version of the Met Office Unified Model (MetUM). The evaluation points to major shortcomings in the NWP model's radiative fluxes during the dry season (December 2005 to April 2006) arising from (1) a lack of absorbing aerosol in the model (mineral dust and biomass burning aerosol) and (2) a poor specification of the surface albedo. A case study of the major Saharan dust outbreak of 6–12 March 2006 is used to evaluate a parameterization of mineral dust for use in the NWP models. The model shows good predictability of the large-scale flow out to 4–5 days with the dust parameterization providing reasonable dust uplift, spatial distribution, and temporal evolution for this strongly forced dust event. The direct radiative impact of the dust reduces net downward shortwave (SW) flux at the surface (TOA) by a maximum of 200 W m⁻² (150 W m⁻²), with a SW heating of the atmospheric column. The impacts of dust on terrestrial radiation are smaller. Comparisons of TOA (surface) radiation balance with GERB (ARM) show the "dusty" forecasts reduce biases in the radiative fluxes and improve surface temperatures and vertical thermodynamic structure.
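The connection between the quoted flux changes and the stated shortwave heating of the column follows from the standard radiative heating-rate relation; the expression below is textbook material, not a formula from the paper.

```latex
% Radiative heating rate from the vertical divergence of the net downward
% flux F (standard relation, pressure coordinates):
\[
  \frac{\partial T}{\partial t} \;=\; \frac{g}{c_p}\,
  \frac{\partial F_{\mathrm{net}}}{\partial p} .
\]
% A larger SW reduction at the surface (200 W m^{-2}) than at the TOA
% (150 W m^{-2}) means roughly 50 W m^{-2} is absorbed within the column,
% which appears as the SW heating reported above.
```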
Abstract:
Our understanding of the climate system has been revolutionized in recent years by the development of sophisticated computer models. The predictions of such models are used to formulate international protocols intended to mitigate the severity of global warming and its impacts. Yet these models are not perfect representations of reality, because they remove from explicit consideration many physical processes that are known to be key aspects of the climate system but are too small or too fast to be modelled. The purpose of this paper is to give a personal perspective on the current state of knowledge regarding the problem of unresolved scales in climate models. A recent novel solution to the problem is discussed, in which it is proposed, somewhat counter-intuitively, that the performance of models may be improved by adding random noise to represent the unresolved processes.
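As an illustration of the proposal described above, representing unresolved processes with random noise, here is a minimal Python sketch; the toy tendency equation, coefficients, and noise amplitude are all hypothetical and stand in for a real parameterization.

```python
import numpy as np

# Toy illustration of stochastic parameterization (hypothetical model, not
# the paper's). The parameterized tendency is a linear relaxation; the
# unresolved processes are represented either by that bulk term alone or by
# the bulk term plus additive white noise (Euler-Maruyama stepping).
rng = np.random.default_rng(1)

def integrate(noise_std, n_steps=5000, dt=0.01):
    x = 0.0
    traj = np.empty(n_steps)
    for i in range(n_steps):
        deterministic = -0.5 * x + 1.0              # bulk parameterized tendency
        noise = noise_std * rng.standard_normal()   # unresolved variability
        x += dt * deterministic + np.sqrt(dt) * noise
        traj[i] = x
    return traj

det = integrate(noise_std=0.0)   # conventional deterministic closure
sto = integrate(noise_std=0.4)   # closure with additive random noise

# The noise broadens the distribution of states the model visits, which is
# the intended effect on variability and ensemble spread.
print(f"deterministic: mean={det[1000:].mean():.2f}, std={det[1000:].std():.3f}")
print(f"stochastic:    mean={sto[1000:].mean():.2f}, std={sto[1000:].std():.3f}")
```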
Abstract:
Numerical studies of surface ocean fronts forced by inhomogeneous buoyancy loss show nonhydrostatic convective plumes coexisting with baroclinic eddies. The character of the vertical overturning depends sensitively on the treatment of the vertical momentum equation in the model. It is less well known how the frontal evolution over scales of O(10 km) is affected by these dynamics. Here we compare highly resolved numerical experiments using nonhydrostatic and hydrostatic models and the convective-adjustment parametrization. The impact of nonhydrostatic processes on average cross-frontal transfer is weak compared to the effect of the O(1 km) scale baroclinic motions. For the water-mass distribution and formation rate, nonhydrostatic dynamics have an influence similar to that of the baroclinic eddies, although adequate resolution of the gradients in the forcing fluxes is more important. The overall implication is that including nonhydrostatic surface frontal dynamics in ocean general circulation models will have only a minor effect on scales of O(1 km) and greater.
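The hydrostatic/nonhydrostatic distinction at issue comes down to the treatment of the vertical momentum equation; the forms below are the standard ones, stated for context rather than quoted from the paper.

```latex
% Vertical momentum equation: nonhydrostatic models retain the vertical
% acceleration, hydrostatic models drop it.
\[
  \frac{Dw}{Dt} \;=\; -\frac{1}{\rho}\frac{\partial p}{\partial z} \;-\; g
  \quad\text{(nonhydrostatic)}, \qquad
  \frac{\partial p}{\partial z} \;=\; -\rho g
  \quad\text{(hydrostatic)}.
\]
% Convective plumes have aspect ratios near one, so Dw/Dt is not negligible
% for them; this is why simulating them explicitly, rather than via
% convective adjustment, requires the nonhydrostatic form.
```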
Abstract:
In this paper, the available potential energy (APE) framework of Winters et al. (J. Fluid Mech., vol. 289, 1995, p. 115) is extended to the fully compressible Navier–Stokes equations, with the aims of clarifying (i) the nature of the energy conversions taking place in turbulent thermally stratified fluids; and (ii) the role of surface buoyancy fluxes in the Munk & Wunsch (Deep-Sea Res., vol. 45, 1998, p. 1977) constraint on the mechanical energy sources of stirring required to maintain diapycnal mixing in the oceans. The new framework reveals that the observed turbulent rate of increase in the background gravitational potential energy GPE_r, commonly thought to occur at the expense of the diffusively dissipated APE, actually occurs at the expense of internal energy, as in the laminar case. The APE dissipated by molecular diffusion, on the other hand, is found to be converted into internal energy (IE), similar to the viscously dissipated kinetic energy KE. Turbulent stirring, therefore, does not introduce a new APE/GPE_r mechanical-to-mechanical energy conversion, but simply enhances the existing IE/GPE_r conversion rate, in addition to enhancing the viscous dissipation and the entropy production rates. This, in turn, implies that molecular diffusion contributes to the dissipation of the available mechanical energy ME = APE + KE, along with viscous dissipation. This result has important implications for the interpretation of the concepts of mixing efficiency γ_mixing and flux Richardson number R_f, for which new physically based definitions are proposed and contrasted with previous definitions. The new framework allows for a more rigorous and general re-derivation, from first principles, of the Munk & Wunsch (1998, hereafter MW98) constraint, also valid for a non-Boussinesq ocean:
\[
  G(KE) \;\approx\; \frac{1 - \xi R_f}{\xi R_f}\, W_{r,\mathrm{forcing}}
  \;=\; \frac{1 + (1 - \xi)\,\gamma_{\mathrm{mixing}}}{\xi\,\gamma_{\mathrm{mixing}}}\, W_{r,\mathrm{forcing}} ,
\]
where G(KE) is the work rate done by the mechanical forcing, W_{r,forcing} is the rate of loss of GPE_r due to high-latitude cooling, and ξ is a nonlinearity parameter such that ξ = 1 for a linear equation of state (as considered by MW98), but ξ < 1 otherwise. The most important result is that G(APE), the work rate done by the surface buoyancy fluxes, must be numerically as large as W_{r,forcing} and, therefore, as important as the mechanical forcing in stirring and driving the oceans. As a consequence, the overall mixing efficiency of the oceans is likely to be larger than the value γ_mixing = 0.2 presently used, thereby possibly eliminating the apparent shortfall in mechanical stirring energy that results from using γ_mixing = 0.2 in the above formula.
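For context, the classical relation linking the two diagnostics named above, which the paper re-examines and generalizes, is the standard one below; the revised definitions proposed in the paper differ from it.

```latex
% Classical link between flux Richardson number and mixing efficiency
% (standard definition; the paper proposes physically based revisions):
\[
  \gamma_{\mathrm{mixing}} \;=\; \frac{R_f}{1 - R_f} ,
\]
% so the canonical R_f \approx 1/6 gives \gamma_{\mathrm{mixing}} \approx 0.2,
% the value the abstract argues is likely an underestimate for the oceans.
```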
Abstract:
The uptake and storage of anthropogenic carbon in the North Atlantic is investigated using different configurations of ocean general circulation/carbon cycle models. We investigate how different representations of the ocean physics in the models, which represent the range of models currently in use, affect the evolution of CO2 uptake in the North Atlantic. The buffer effect of the ocean carbon system would be expected to reduce ocean CO2 uptake as the ocean absorbs increasing amounts of CO2. We find that the strength of the buffer effect is very dependent on the model ocean state, as it affects both the magnitude and timing of the changes in uptake. The timescale over which uptake of CO2 in the North Atlantic drops to below preindustrial levels is particularly sensitive to the ocean state which sets the degree of buffering; it is less sensitive to the choice of atmospheric CO2 forcing scenario. Neglecting physical climate change effects, North Atlantic CO2 uptake drops below preindustrial levels between 50 and 300 years after stabilisation of atmospheric CO2 in different model configurations. Storage of anthropogenic carbon in the North Atlantic varies much less among the different model configurations, as differences in ocean transport of dissolved inorganic carbon and uptake of CO2 compensate each other. This supports the idea that measured inventories of anthropogenic carbon in the real ocean cannot be used to constrain the surface uptake. Including physical climate change effects reduces anthropogenic CO2 uptake and storage in the North Atlantic further, due to the combined effects of surface warming, increased freshwater input, and a slowdown of the meridional overturning circulation. The timescale over which North Atlantic CO2 uptake drops to below preindustrial levels is reduced by about one-third, leading to an estimate of this timescale for the real world of about 50 years after the stabilisation of atmospheric CO2. In the climate change experiment, a shallowing of the mixed layer depths in the North Atlantic results in a significant reduction in primary production, reducing the potential role for biology in drawing down anthropogenic CO2.
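The buffer effect invoked above is conventionally quantified by the Revelle factor; the definition below is the standard one, given for context rather than taken from the paper.

```latex
% Revelle (buffer) factor: fractional change in surface-ocean pCO2 per
% fractional change in dissolved inorganic carbon (DIC); typically ~9-15.
\[
  R \;=\; \frac{\partial \ln p\mathrm{CO}_2}{\partial \ln \mathrm{DIC}} .
\]
% R increases as DIC rises, so each additional mole of absorbed CO2 raises
% pCO2 more strongly, which is why uptake weakens as the ocean accumulates
% anthropogenic carbon.
```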
Abstract:
A systematic modular approach to investigate the respective roles of the ocean and atmosphere in setting El Niño characteristics in coupled general circulation models is presented. Several state-of-the-art coupled models sharing either the same atmosphere or the same ocean are compared. Major results include 1) the dominant role of the atmosphere model in setting El Niño characteristics (periodicity and base amplitude) and errors (regularity) and 2) the considerable improvement of simulated El Niño power spectra—toward lower frequency—when the atmosphere resolution is significantly increased. Likely reasons for such behavior are briefly discussed. It is argued that this new modular strategy represents a generic approach to identifying the source of both coupled mechanisms and model error and will provide a methodology for guiding model improvement.
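The modular strategy described above can be sketched schematically; the component names and the experiment grid below are hypothetical, purely to illustrate how sharing one component across coupled configurations isolates the other's role.

```python
from itertools import product

# Hypothetical sketch of the modular design: build coupled configurations
# from interchangeable atmosphere and ocean components, so that differences
# in simulated El Nino behaviour can be attributed to a single component.
atmospheres = ["ATM-A", "ATM-B", "ATM-B-hires"]  # hypothetical model names
oceans = ["OCE-1", "OCE-2"]                      # hypothetical model names

experiments = [f"{atm}__{oce}" for atm, oce in product(atmospheres, oceans)]

# Runs sharing the same ocean differ only in their atmosphere (and vice
# versa), which is the attribution logic of the modular approach.
for run_id in experiments:
    print(run_id)
```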
Abstract:
The role of convective processes in moistening the atmosphere during suppressed periods of a Madden-Julian oscillation is investigated in cloud-resolving model (CRM) simulations, and the impact of moistening on the subsequent evolution of convection is assessed as part of a Global Energy and Water Cycle Experiment Cloud System Study (GCSS) intercomparison project. The ability of single-column model (SCM) versions of a number of state-of-the-art climate and numerical weather prediction models to capture these convective processes is also evaluated. During the suppressed periods, the CRMs are found to simulate a maximum moistening around 3 km, which is associated with a predominance of shallow convection. All SCMs produce adequate amounts of shallow convection during the suppressed periods, comparable to that seen in CRMs, but the relatively drier SCMs have higher precipitation rates than the relatively wetter SCMs and CRMs. The relatively drier SCMs dry, rather than moisten, the lower troposphere below the melting level. During the transition periods, convective processes act to moisten the atmosphere above the level at which mean advection changes from moistening to drying, despite an overall drying effect for the column. The SCMs capture some essence of this moistening at upper levels. A gradual transition from shallow to deep convection is simulated by the CRMs and the wetter SCMs during the transition periods, but the onset of deep convection is delayed in the drier SCMs. This results in lower precipitation rates for these SCMs during the active periods, although much better agreement exists between the models at this time.
Abstract:
The MarQUEST (Marine Biogeochemistry and Ecosystem Modelling Initiative in QUEST) project was established to develop improved descriptions of marine biogeochemistry suited to the next generation of Earth system models. We review progress in these areas, providing insight into the advances that have been made and identifying the key outstanding gaps in the development of the marine component of next-generation Earth system models. The following issues are discussed and, where appropriate, results are presented: the choice of model structure; the scaling of processes from physiology to functional types; the sensitivity of ecosystem models to changes in the physical environment; the role of the coastal ocean; and new methods for the evaluation and comparison of ecosystem and biogeochemistry models. We make recommendations as to where future investment in marine ecosystem modelling should be focused, highlighting a generic software framework for model development, improved hydrodynamic models, better parameterisation of new and existing models, reanalysis tools, and ensemble simulations. The final challenge is to ensure that experimental/observational scientists are stakeholders in the models and vice versa.
Abstract:
Projections of stratospheric ozone from a suite of chemistry-climate models (CCMs) have been analyzed. In addition to a reference simulation where anthropogenic halogenated ozone depleting substances (ODSs) and greenhouse gases (GHGs) vary with time, sensitivity simulations with either ODS or GHG concentrations fixed at 1960 levels were performed to disaggregate the drivers of projected ozone changes. These simulations were also used to assess the two distinct milestones of ozone returning to historical values (ozone return dates) and ozone no longer being influenced by ODSs (full ozone recovery). The date of ozone returning to historical values does not indicate complete recovery from ODSs in most cases, because GHG-induced changes accelerate or decelerate ozone changes in many regions. In the upper stratosphere where CO2-induced stratospheric cooling increases ozone, full ozone recovery is projected to not likely have occurred by 2100 even though ozone returns to its 1980 or even 1960 levels well before (~2025 and 2040, respectively). In contrast, in the tropical lower stratosphere ozone decreases continuously from 1960 to 2100 due to projected increases in tropical upwelling, while by around 2040 it is already very likely that full recovery from the effects of ODSs has occurred, although ODS concentrations are still elevated by this date. In the midlatitude lower stratosphere the evolution differs from that in the tropics, and rather than a steady decrease in ozone, first a decrease in ozone is simulated from 1960 to 2000, which is then followed by a steady increase through the 21st century. Ozone in the midlatitude lower stratosphere returns to 1980 levels by ~2045 in the Northern Hemisphere (NH) and by ~2055 in the Southern Hemisphere (SH), and full ozone recovery is likely reached by 2100 in both hemispheres. Overall, in all regions except the tropical lower stratosphere, full ozone recovery from ODSs occurs significantly later than the return of total column ozone to its 1980 level. The latest return of total column ozone is projected to occur over Antarctica (~2045–2060) whereas it is not likely that full ozone recovery is reached by the end of the 21st century in this region. Arctic total column ozone is projected to return to 1980 levels well before polar stratospheric halogen loading does so (~2025–2030 for total column ozone, cf. 2050–2070 for Cl_y + 60×Br_y) and it is likely that full recovery of total column ozone from the effects of ODSs has occurred by ~2035. In contrast to the Antarctic, by 2100 Arctic total column ozone is projected to be above 1960 levels, but not in the fixed GHG simulation, indicating that climate change plays a significant role.
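The halogen-loading diagnostic quoted above is the standard equivalent-chlorine measure; the factor of 60 is the usual weighting for bromine's greater per-atom efficiency at destroying ozone, consistent with the abstract's usage.

```latex
% Equivalent stratospheric chlorine: inorganic chlorine plus bromine
% weighted by its ozone-destruction efficiency (factor ~60):
\[
  \mathrm{ESC} \;=\; \mathrm{Cl}_y \;+\; 60 \times \mathrm{Br}_y .
\]
```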
Abstract:
Amphicoma (Glaphyridae) beetles are important pollinators of red bowl-shaped flowers in the Mediterranean. The role of color and shape in flower choice is well studied, but the roles of inclination, depth, and height have seldom been investigated. Under field conditions, artificial models were used to manipulate these three characters experimentally, and the visitation rates of beetles were recorded. Models with red horizontal surfaces were visited significantly more often than models with red vertical surfaces. Shallow flower models were visited significantly more than deeper equivalents. Models below or at the height of natural flower populations elicited significantly more landings than models above the height of flowers. Inclination, depth, and height are therefore all likely to be important components of the flower preferences exhibited by pollinating beetles.
Abstract:
1. Demographic models are assuming an important role in management decisions for endangered species. Elasticity analysis and scope for management analysis are two such applications. Elasticity analysis determines the vital rates that have the greatest impact on population growth. Scope for management analysis examines the effects that feasible management might have on vital rates and population growth. Both methods target management in an attempt to maximize population growth.
2. The Seychelles magpie robin Copsychus sechellarum is a critically endangered island endemic whose population underwent significant growth in the early 1990s following the implementation of a recovery programme. We examined how the formal use of elasticity and scope for management analyses might have shaped management in the recovery programme, and assessed their effectiveness by comparison with the actual population growth achieved.
3. The magpie robin population doubled from about 25 birds in 1990 to more than 50 by 1995. A simple two-stage demographic model showed that this growth was driven primarily by a significant increase in the annual survival probability of first-year birds and an increase in the birth rate. Neither the annual survival probability of adults nor the probability of a female breeding at age 1 changed significantly over time.
4. Elasticity analysis showed that the annual survival probability of adults had the greatest impact on population growth. There was some scope to use management to increase survival, but because survival rates were already high (> 0.9) this had a negligible effect on population growth. Scope for management analysis showed that significant population growth could have been achieved by targeting management measures at the birth rate and the survival probability of first-year birds, although predicted growth rates were lower than those achieved by the recovery programme once all management measures were in place (i.e. 1992–95).
5. Synthesis and applications. We argue that scope for management analysis can provide a useful basis for management but will inevitably be limited to some extent by a lack of data, as our study shows. This means that identifying perceived ecological problems and designing management to alleviate them must remain an important component of endangered species management. The corollary is that it will not be possible or wise to consider only management options for which there is a demonstrable ecological benefit. Given these constraints, we see little role for elasticity analysis because, when data are available, a scope for management analysis will always be of greater practical value and, when data are lacking, precautionary management demands that as many perceived ecological problems as possible are tackled.
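To make the two analyses concrete, here is a minimal Python sketch of a two-stage matrix model with an elasticity analysis; the vital rates are round illustrative numbers, not the estimates from the study.

```python
import numpy as np

# Minimal sketch of a two-stage demographic model (first-years, adults) and
# its elasticity analysis. All vital rates are illustrative, not the magpie
# robin estimates from the study.
b = 0.8    # birth rate: female fledglings per adult female per year (assumed)
s1 = 0.5   # annual survival probability of first-year birds (assumed)
s2 = 0.9   # annual survival probability of adults (assumed)

# Projection matrix: columns index the stage now, rows the stage next year.
A = np.array([[0.0, b],
              [s1, s2]])

def dominant_pair(M):
    """Dominant eigenvalue and its (real, non-negative) eigenvector."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    return vals[k].real, np.abs(vecs[:, k].real)

lam, w = dominant_pair(A)    # growth rate and stable stage distribution
_, v = dominant_pair(A.T)    # reproductive values (left eigenvector)

# Sensitivities d(lambda)/d(a_ij), then elasticities (a_ij / lambda) * sens.
sens = np.outer(v, w) / (v @ w)
elas = A * sens / lam

print(f"lambda = {lam:.3f}")   # > 1 indicates a growing population
print("elasticities:")
print(np.round(elas, 3))       # entries sum to 1; adult survival dominates here
```

With these illustrative rates the adult-survival entry carries the largest elasticity, mirroring the abstract's point that elasticity analysis flags adult survival even when management can do little to raise an already high rate.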