Abstract:
Galactic cosmic rays (GCRs) are modulated by the heliospheric magnetic field (HMF) both over decadal time scales (due to long-term, global HMF variations) and over time scales of a few hours (associated with solar wind structures such as coronal mass ejections or the heliospheric current sheet, HCS). Due to the close association between the HCS, the streamer belt, and the band of slow solar wind, HCS crossings are often associated with corotating interaction regions where fast solar wind catches up and compresses slow solar wind ahead of it. However, not all HCS crossings are associated with strong compressions. In this study we categorize HCS crossings in two ways: firstly, by the change in magnetic polarity, as either away-to-toward (AT) or toward-to-away (TA) magnetic field directions relative to the Sun, and, secondly, by the strength of the associated solar wind compression, determined from the observed plasma density enhancement. For each category, we use superposed epoch analyses to show differences in both solar wind parameters and GCR flux inferred from neutron monitors. For strong-compression HCS crossings, we observe a peak in neutron counts preceding the HCS crossing, followed by a large drop after the crossing, attributable to the so-called ‘snow-plough’ effect. For weak-compression HCS crossings, where magnetic field polarity effects are more readily observable, the neutron counts instead tend to peak in the away magnetic field sector. By splitting the data by the dominant polarity at each solar polar region, we find that the increase in GCR flux prior to the HCS crossing arises primarily from strong compressions in cycles with negative north polar fields, due to GCR drift effects. Finally, we report on unexpected differences in GCR behavior between TA weak compressions during opposing polarity cycles.
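The superposed epoch technique this abstract relies on is simple to sketch. The following is a generic NumPy illustration, not the authors' pipeline: the synthetic post-"crossing" dip, the noise level, and all parameters are invented to show how averaging over windows centred on the key times isolates a signature locked to those times.

```python
import numpy as np

def superposed_epoch(series, epoch_indices, window):
    """Average a time series over windows centred on a list of epoch times.

    series        : 1-D array of observations (e.g. neutron-monitor counts)
    epoch_indices : integer indices of the key times (e.g. HCS crossings)
    window        : half-width of the epoch window, in samples
    """
    segments = []
    for t0 in epoch_indices:
        # keep only epochs whose full window lies inside the record
        if t0 - window >= 0 and t0 + window < len(series):
            segments.append(series[t0 - window : t0 + window + 1])
    return np.mean(segments, axis=0)  # mean profile; the epoch sits at the centre

# Toy example: a decrease after each "crossing", buried in unit-variance noise.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)
crossings = np.arange(200, n - 200, 250)
for t0 in crossings:
    x[t0 : t0 + 50] -= 2.0            # invented post-crossing signature
profile = superposed_epoch(x, crossings, window=100)
```

Averaging over many epochs suppresses the uncorrelated noise, so the dip after the centre of `profile` stands out even though it is invisible in any single window.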
Abstract:
A fast simple climate modelling approach is developed for predicting and helping to understand general circulation model (GCM) simulations. We show that the simple model reproduces the GCM results accurately, for global mean surface air temperature change and global-mean heat uptake projections from 9 GCMs in the Coupled Model Intercomparison Project phase 5 (CMIP5). This implies that understanding gained from idealised CO2 step experiments is applicable to policy-relevant scenario projections. Our approach is conceptually simple. It works by using the climate response to a CO2 step change taken directly from a GCM experiment. With radiative forcing from non-CO2 constituents obtained by adapting the Forster and Taylor method, we use our method to estimate results for CMIP5 representative concentration pathway (RCP) experiments for cases not run by the GCMs. We estimate differences between pairs of RCPs rather than RCP anomalies relative to the pre-industrial state. This gives better results because it makes greater use of available GCM projections. The GCMs exhibit differences in radiative forcing, which we incorporate in the simple model. We analyse the thus-completed ensemble of RCP projections. The ensemble mean changes between 1986–2005 and 2080–2099 for global temperature (heat uptake) are, for RCP8.5: 3.8 K (2.3 × 10²⁴ J); for RCP6.0: 2.3 K (1.6 × 10²⁴ J); for RCP4.5: 2.0 K (1.6 × 10²⁴ J); for RCP2.6: 1.1 K (1.3 × 10²⁴ J). The relative spread (standard deviation/ensemble mean) for these scenarios is around 0.2 and 0.15 for temperature and heat uptake respectively. We quantify the relative effect of mitigation action, through reduced emissions, via the time-dependent ratios (change in RCPx)/(change in RCP8.5), using changes with respect to pre-industrial conditions. We find that the effects of mitigation on global-mean temperature change and heat uptake are very similar across these different GCMs.
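The core of such a step-response emulation can be sketched as a discrete convolution: scale the forcing increments by the step forcing and convolve them with the GCM's temperature response to the CO2 step. The two-timescale step response and all parameters below are invented stand-ins, not fits to any CMIP5 model.

```python
import numpy as np

def step_response(t, a=(0.6, 1.2), tau=(4.0, 250.0)):
    """Hypothetical temperature response (K) to an abrupt CO2 forcing step,
    as a sum of two exponential timescales; a and tau are illustrative."""
    t = np.asarray(t, dtype=float)
    return sum(ai * (1.0 - np.exp(-t / ti)) for ai, ti in zip(a, tau))

def project_temperature(forcing, f_step):
    """Convolve yearly forcing increments (scaled by the step forcing
    f_step) with the step response -- the linear-systems core of the method."""
    df = np.diff(forcing, prepend=0.0)          # forcing increments per step
    R = step_response(np.arange(len(forcing)))
    return np.convolve(df / f_step, R)[: len(forcing)]

# Sanity check: a constant forcing equal to f_step reproduces the step response.
F = np.full(100, 3.7)                            # invented forcing, W m^-2
dT = project_temperature(F, 3.7)
```

Because the emulator is linear in the forcing, doubling the forcing doubles the response, which is exactly the property that lets a single step experiment stand in for arbitrary scenarios.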
Abstract:
In the resource-based view, organisations are represented by the sum of their physical, human and organisational assets, resources and capabilities. Operational capabilities maintain the status quo and allow an organisation to execute its existing business. Dynamic capabilities, in contrast, allow an organisation to change this status quo, including changing the operational capabilities themselves. Competitive advantage, in this context, is an effect of continuously developing and reconfiguring these firm-specific assets through dynamic capabilities. Deciding where and how to source the core operational capabilities is a key success factor. Furthermore, developing its dynamic capabilities allows an organisation to effectively manage change in its operational capabilities. Many organisations are asserted to have a high dependency on - as well as a high benefit from - the use of information technology (IT), making it a crucial and overarching resource. Furthermore, the IT function is assigned the role of a change enabler, so IT sourcing affects the capability of managing business change. IT sourcing means that organisations need to decide how to source their IT capabilities. Outsourcing parts of the IT function also outsources some of the IT capabilities and therefore some of the business capabilities. As a result, IT sourcing has an impact on the organisation's capabilities and consequently on business success. Finally, a turbulent and fast-moving business environment challenges organisations to manage business change effectively and efficiently. Our research builds on the existing theory of dynamic and operational capabilities by considering the interdependencies between the dynamic capabilities of business change and IT sourcing. Further, it examines the decision-making oversight of these areas as implemented through IT governance.
We introduce a new conceptual framework derived from the existing theory and extended through an illustrative case study conducted in a German bank. Under a philosophical paradigm of constructivism, we collected data from eight semi-structured interviews and used additional sources of evidence in the form of annual accounts, strategy papers and IT benchmark reports. We applied an Interpretative Phenomenological Analysis (IPA), from which the superordinate themes for our tentative capabilities framework emerged. An understanding of these interdependencies enables scholars and professionals to improve business success by effectively managing business change and evaluating the impact of IT sourcing decisions on the organisation's operational and dynamic capabilities.
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
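The bookkeeping behind the ‘incremental time-stepping’ idea can be sketched as follows. This is a toy stand-in, not the Met Office code: the "full" and "thin" calculations are invented linear functions of a cloud variable, and only the general update rule (add a cheap cloud-tracking increment to the last full calculation) comes from the abstract.

```python
def incremental_heating(full_calc, thin_calc, cloud_states, full_period):
    """Toy incremental time-stepping: the expensive full radiation call runs
    every `full_period` steps; between full calls, the cheap optically thin
    calculation supplies the increment due to the changing cloud field."""
    rates = []
    for i, c in enumerate(cloud_states):
        if i % full_period == 0:
            base_full = full_calc(c)   # full spectral calculation (expensive)
            base_thin = thin_calc(c)   # one or two thin monochromatic terms
        # last full calculation plus the thin-term increment since then
        rates.append(base_full + (thin_calc(c) - base_thin))
    return rates

def full(c):
    """Invented stand-in heating rate (K/day) from a full calculation."""
    return 10.0 + 2.0 * c

def thin(c):
    """Invented stand-in for the cloud-dependent optically thin part."""
    return 2.0 * c

clouds = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6]   # invented cloud fractions per step
rates = incremental_heating(full, thin, clouds, full_period=3)
```

In this idealised case the thin terms capture the cloud dependence exactly, so the incremental scheme matches a full calculation at every step while only paying for the full call once per period; in reality the increment is approximate, which is the trade-off the paper evaluates.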
Abstract:
A discrete-time random process is described which can generate bursty sequences of events. A Bernoulli process, where the probability of an event occurring at time t is given by a fixed probability x, is modified to include a memory effect in which the event probability is increased proportionally to the number of events that occurred within a given amount of time preceding t. For small values of x the interevent time distribution follows a power law with exponent −2−x. We consider a dynamic network where each node forms and breaks connections according to this process. The value of x for each node depends on the fitness distribution, ρ(x), from which it is drawn; we find exact solutions for the expectation of the degree distribution for a variety of possible fitness distributions, both with and without the memory effect. This work can potentially lead to methods for uncovering hidden fitness distributions from fast-changing temporal network data, such as online social communications and fMRI scans.
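The process itself is straightforward to simulate. A minimal sketch follows; the base rate, memory window length, and boost per remembered event are illustrative choices, since the abstract fixes only the general form (probability increased proportionally to the number of recent events).

```python
import random

def bursty_sequence(x, memory_window, boost, n_steps, seed=0):
    """Simulate the memory-modified Bernoulli process: at each step the
    event probability is the base rate x plus `boost` times the number of
    events in the preceding `memory_window` steps, capped at 1."""
    rng = random.Random(seed)
    events = []
    for t in range(n_steps):
        recent = sum(events[max(0, t - memory_window) : t])
        p = min(1.0, x + boost * recent)
        events.append(1 if rng.random() < p else 0)
    return events

# Same seed, with and without the memory term (parameters are invented).
events   = bursty_sequence(x=0.05, memory_window=20, boost=0.05, n_steps=5000)
baseline = bursty_sequence(x=0.05, memory_window=20, boost=0.0,  n_steps=5000)
```

With the memory switched on, the same underlying uniform draws can only add events relative to the plain Bernoulli baseline (each step's probability is at least the base rate), so events cluster into the bursts that produce the heavy-tailed interevent time distribution.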
Abstract:
This research investigates the link between rivalry and unethical behavior. We propose that people will engage in greater unethical behavior when competing against their rivals than when competing against non-rival competitors. Across a series of experiments and an archival study, we find that rivalry is associated with increased use of deception, unsportsmanlike behavior, willingness to employ unethical negotiation tactics, and misreporting of performance. We also explore the psychological underpinnings of rivalry, which help to illuminate how it differs from general competition, and why it increases unethical behavior. Rivalry as compared to non-rival competition was associated with increased status concerns, contingency of self-worth, and performance goals; mediation analyses revealed that performance goals played the biggest role in explaining why rivalry promoted greater unethicality. Lastly, we find that merely thinking about a rival can be enough to promote greater unethical behavior, even in domains unrelated to the rivalry. These findings highlight the importance of rivalry as a widespread, powerful, yet largely unstudied phenomenon with significant organizational implications. Further, the results help to inform when and why unethical behavior occurs within organizations, and demonstrate that the effects of competition are dependent upon relationships and prior interactions.
Abstract:
Precipitation is expected to respond differently to various drivers of anthropogenic climate change. We present the first results from the Precipitation Driver and Response Model Intercomparison Project (PDRMIP), where nine global climate models have perturbed CO2, CH4, black carbon, sulfate, and solar insolation. We divide the resulting changes to global mean and regional precipitation into fast responses that scale with changes in atmospheric absorption and slow responses scaling with surface temperature change. While the overall features are broadly similar between models, we find significant regional intermodel variability, especially over land. Black carbon stands out as a component that may cause significant model diversity in predicted precipitation change. Processes linked to atmospheric absorption are less consistently modeled than those linked to top-of-atmosphere radiative forcing. We identify a number of land regions where the model ensemble consistently predicts that fast precipitation responses to climate perturbations dominate over the slow, temperature-driven responses.
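One common way to separate responses that scale with surface temperature from those that do not, consistent with the fast/slow split described here, is a Gregory-style regression of precipitation change against temperature change: the intercept estimates the fast (absorption-driven) response and the slope the slow hydrological sensitivity. The numbers below are invented for illustration; this is a generic sketch, not the PDRMIP analysis itself.

```python
import numpy as np

def fast_slow_decomposition(dT, dP):
    """Regress precipitation change against surface temperature change.

    Returns (fast, slope): the intercept `fast` is the temperature-independent
    (fast) precipitation response; `slope` is the temperature-driven (slow)
    sensitivity, e.g. in % per K.
    """
    slope, intercept = np.polyfit(dT, dP, 1)
    return intercept, slope

# Invented synthetic experiment: fast response -2 %, slow response 1.5 % per K.
dT = np.linspace(0.0, 4.0, 25)          # warming over the run (K)
dP = -2.0 + 1.5 * dT                    # precipitation change (%)
fast, slope = fast_slow_decomposition(dT, dP)
```

A negative intercept with a positive slope is the canonical absorbing-aerosol signature the abstract highlights for black carbon: an initial fast suppression of precipitation that the slow, temperature-driven increase later offsets.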
Abstract:
Haptic devices tend to be kept small, as it is easier to achieve a large change of stiffness with a low associated apparent mass. If large movements are required, there is usually a reduction in the quality of the haptic sensations which can be displayed. The typical measure of haptic device performance is impedance-width (z-width), but this does not account for actuator saturation, usable workspace or the ability to make rapid movements. This paper presents the analysis and evaluation of a haptic device design, utilizing a variant of redundant kinematics sometimes referred to as a macro-micro configuration, intended to allow large and fast movements without loss of impedance-width. A brief mathematical analysis of the design constraints is given, and a prototype system is described in which the effects of different elements of the control scheme can be examined to better understand the potential benefits and trade-offs in the design. Finally, the performance of the system is evaluated using a Fitts’ Law test and found to compare favourably with similar evaluations of smaller-workspace devices.
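The Fitts' Law evaluation mentioned at the end reduces to fitting movement time against an index of difficulty. A minimal sketch using the Shannon formulation follows; the target distances, widths, and timing constants are invented, not taken from the paper's trials.

```python
import math
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of the Fitts' Law index of difficulty (bits)."""
    return math.log2(distance / width + 1.0)

def fit_fitts(distances, widths, movement_times):
    """Fit MT = a + b * ID by least squares; 1/b is often quoted as an
    index of performance (bits per second) for the device."""
    ids = [index_of_difficulty(d, w) for d, w in zip(distances, widths)]
    b, a = np.polyfit(ids, movement_times, 1)
    return a, b

# Invented synthetic trials: MT = 0.2 s + 0.1 s/bit * ID.
distances = [0.10, 0.20, 0.40, 0.80]     # target distances (m), hypothetical
widths    = [0.02, 0.02, 0.04, 0.04]     # target widths (m), hypothetical
times     = [0.2 + 0.1 * index_of_difficulty(d, w)
             for d, w in zip(distances, widths)]
a, b = fit_fitts(distances, widths, times)
```

Comparing the fitted intercept and slope across devices is what makes this test a workspace-aware complement to z-width: a large-workspace device that keeps b small supports fast movements at high difficulty.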