265 results for Multi-view geometry
Abstract:
The overall global-scale consequences of climate change depend on the distribution of impacts across regions, and these impacts have multiple dimensions. This paper presents a global assessment of the potential impacts of climate change across several sectors, using a harmonised set of impacts models forced by the same climate and socio-economic scenarios. Indicators of impact cover the water resources, river and coastal flooding, agriculture, natural environment and built environment sectors. Impacts are assessed under four SRES socio-economic and emissions scenarios, and the effects of uncertainty in the projected pattern of climate change are incorporated by constructing climate scenarios from 21 global climate models. There is considerable uncertainty in projected regional impacts across the climate model scenarios, so coherent assessments of impacts across sectors and regions must be based on each model pattern separately; using ensemble means, for example, reduces variability between sectors and indicators. An example narrative assessment is presented in the paper. Under this narrative, approximately 1 billion people would be exposed to increased water resources stress, around 450 million people would be exposed to increased river flooding, and an extra 1.3 million people would be flooded by coastal floods each year. Crop productivity would fall in most regions, and residential energy demands would be reduced in most regions because reduced heating demands would offset higher cooling demands. Most of the global impacts on water stress and flooding would be in Asia, but the proportional impacts in the Middle East and North Africa region would be larger. By 2050 differences in impact between the emissions and socio-economic scenarios begin to emerge, even though the changes in temperature and sea level are similar, and these differences are greater by 2080.
However, for all the indicators, the range in projected impacts between different climate models is considerably greater than the range between emissions and socio-economic scenarios.
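The relative sizes of these two sources of spread can be sketched with a toy calculation (all numbers are invented for illustration and are not taken from the paper): when a model-specific offset dominates a smaller scenario-specific offset, averaging across the ensemble first removes most of the spread, which is why the abstract argues for assessing each model pattern separately.

```python
import numpy as np

# Illustrative impact indicator (e.g. millions of people exposed) built as a
# large model-specific offset plus a smaller scenario-specific offset.
model_effect = np.linspace(-300, 300, 21)             # spread across 21 GCM patterns
scenario_effect = np.array([0.0, 40.0, 80.0, 120.0])  # 4 SRES scenarios
impacts = 1000 + model_effect[:, None] + scenario_effect[None, :]

# Range across climate models (per scenario) vs range across scenarios
# (per model): the model spread dominates, as reported for all indicators.
model_range = impacts.max(axis=0) - impacts.min(axis=0)     # 600 per scenario
scenario_range = impacts.max(axis=1) - impacts.min(axis=1)  # 120 per model

# Taking the ensemble mean first collapses the dominant source of spread:
ensemble_mean = impacts.mean(axis=0)  # one value per scenario, model spread gone
```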
Abstract:
Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade-offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in-depth understanding of how successful firms manage the global-local trade-off in a multipolar world.
Design/methodology/approach – This paper uses a case study approach based on in-depth senior executive interviews at several telecommunications companies, including Tata Communications. The interviews probed the operating models of the companies studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent.
Findings – Successful companies balance global-local trade-offs by taking a flexible and tailored approach to their operating-model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in depth, break up the global-local conundrum into a set of more manageable strategic problems – what the authors call "pressure points" – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating-model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but also continually calibrate the crucial balance between global efficiency and local responsiveness.
Originality/value – This paper takes a unique approach to operating model design, finding that an operating model is better viewed as several distinct solutions to specific “pressure points” rather than a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities, and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like “think global and act local” no longer provide the universal guidance they once seemed to.
Abstract:
Radar reflectivity measurements from three different wavelengths are used to retrieve information about the shape of aggregate snowflakes in deep stratiform ice clouds. Dual-wavelength ratios are calculated for different shape models and compared to observations at 3, 35 and 94 GHz. It is demonstrated that many scattering models, including spherical and spheroidal models, do not adequately describe the aggregate snowflakes that are observed. The observations are consistent with fractal aggregate geometries generated by a physically-based aggregation model. It is demonstrated that the fractal dimension of large aggregates can be inferred directly from the radar data. Fractal dimensions close to 2 are retrieved, consistent with previous theoretical models and in-situ observations.
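The dual-wavelength ratio used in such retrievals is conventionally the logarithmic ratio of the reflectivities measured at two frequencies: small Rayleigh-scattering particles give the same reflectivity at both wavelengths (DWR near 0 dB), while large aggregates fall off the Rayleigh regime at the higher frequency, so DWR grows with particle size. A minimal sketch with illustrative inputs (not the paper's data):

```python
import math

def dual_wavelength_ratio_db(z_low_freq, z_high_freq):
    """Dual-wavelength ratio in dB from linear-unit reflectivities
    (e.g. mm^6 m^-3) at a lower and a higher radar frequency."""
    return 10.0 * math.log10(z_low_freq / z_high_freq)

# Equal reflectivities (small particles) give DWR = 0 dB;
# a factor-of-10 deficit at the higher frequency gives DWR = 10 dB.
dwr_small = dual_wavelength_ratio_db(100.0, 100.0)  # 0.0 dB
dwr_large = dual_wavelength_ratio_db(100.0, 10.0)   # 10.0 dB
```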
Abstract:
A coordinated ground-based observational campaign using the IMAGE magnetometer network, EISCAT radars and optical instruments on Svalbard has made possible detailed studies of a travelling convection vortices (TCV) event on 6 January 1992. Combining the data from these facilities allows us to draw a very detailed picture of the features and dynamics of this TCV event. On the way from the noon meridian to the dawn meridian, the vortices went through a remarkable development. The propagation velocity in the ionosphere increased from 2.5 to 7.4 km s−1, and the orientation of the major axes of the vortices rotated from being almost parallel to the magnetic meridian near noon to essentially perpendicular at dawn. By combining electric fields obtained by EISCAT with ionospheric currents deduced from magnetic field recordings, the conductivities associated with the vortices could be estimated. Contrary to expectations, we found higher conductivities below the downward field-aligned current (FAC) filament than below the upward-directed one. Unexpected results also emerged from the optical observations. For most of the time there was no discrete aurora at 557.7 nm associated with the TCVs. Only once did a discrete form appear at the foot of the upward FAC. This aurora subsequently expanded eastward and westward, leaving its centre at the same longitude while the TCV continued to travel westward. Finally, we attempt to identify the source regions of TCVs in the magnetosphere and discuss possible generation mechanisms.
Abstract:
Data collected by ground magnetometers and high-latitude radars during a small isolated substorm are discussed in terms of the global changes in convection during the substorm. This substorm was observed during the international GISMOS (Global Ionospheric Simultaneous Measurements of Substorms) Experiment of 1–5 June 1987, and the array of observations discussed here spans the night sector from approximately dusk to dawn. The substorm, observed by the Sondrestrom radar and by auroral and midlatitude magnetometers, is associated with a polar cap contraction observed near dusk by the EISCAT radar.
Abstract:
The idea of buildings in harmony with nature can be traced back to ancient times. Increasing concern with sustainability-oriented buildings has added new challenges to architectural design and called for new design responses. Sustainable design integrates and balances human geometries with natural ones. Since fractal geometry is often described as the language of nature, it is natural to assume that it could play a role in developing new forms of aesthetics and sustainable architectural design. This paper gives a brief description of fractal geometry theory and presents its current status and recent developments through an illustrative review of fractal case studies in architectural design, providing a bridge between fractal geometry and architectural design.
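A brief description of fractal geometry usually centres on the notion of fractal dimension. As a generic illustration of the standard box-counting method (textbook technique, not code from the paper), the dimension of a binary image of a facade or plan can be estimated by counting occupied boxes at dyadic box sizes and fitting log N(s) against log s:

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the box-counting dimension of a square binary image whose
    side is a power of two: count occupied boxes N(s) at box sizes s and
    fit log N(s) ~ -D log s; the slope magnitude is the dimension D."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Group the image into (n//s) x (n//s) blocks of side s and mark
        # each block occupied if it contains any set pixel.
        blocks = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

# A completely filled square is an ordinary 2-D region: dimension 2.
filled = np.ones((64, 64), dtype=bool)
```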
Abstract:
At Hollow Banks Quarry, Scorton, located just north of Catterick (N Yorks.), a highly unusual group of 15 late Roman burials was excavated between 1998 and 2000. The small cemetery consists of almost exclusively male burials, dated to the fourth century. An unusually large proportion of these individuals was buried with crossbow brooches and belt fittings, suggesting that they may have been serving in the late Roman army or administration and may have come to Scorton from the Continent. Multi-isotope analyses (carbon, nitrogen, oxygen and strontium) of nine sufficiently well-preserved individuals indicate that seven males, all equipped with crossbow brooches and/or belt fittings, were not local to the Catterick area and that at least six of them probably came from the European mainland. Dietary (carbon and nitrogen isotope) analysis only of a tenth individual also suggests a non-local origin. At Scorton it appears that the presence of crossbow brooches and belts in the grave was more important for suggesting non-British origins than whether or not they were worn. This paper argues that cultural and social factors played a crucial part in the creation of funerary identities and highlights the need for both multi-proxy analyses and the careful contextual study of artefacts.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
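The benchmark-then-interpolate step described above can be sketched as follows. All benchmark numbers, function names and the comms-cost parameters are invented for illustration; the real model is fitted to measurements on the Cray XE6 under each deployment scenario.

```python
import numpy as np

# Hypothetical benchmark data: measured time per timestep (µs) of the
# loop-based array updates, for a few problem sizes under one scenario.
measured_sizes = np.array([128, 256, 512, 1024, 2048])
measured_us = np.array([3.1, 3.4, 4.0, 5.9, 9.8])

def predict_compute_time(size, n_steps):
    """Interpolate the benchmark measurements to an unmeasured problem
    size, then scale by the number of timesteps."""
    per_step = np.interp(size, measured_sizes, measured_us)
    return per_step * n_steps

def predict_comms_time(halo_bytes, n_steps, latency_us=1.5, bw_bytes_per_us=3000.0):
    """Separate model for the nearest-neighbour halo exchange, using a
    simple latency + size/bandwidth cost per step (parameters invented)."""
    return n_steps * (latency_us + halo_bytes / bw_bytes_per_us)

# Total predicted runtime for a deployment scenario combines both parts.
total_us = predict_compute_time(768, 1000) + predict_comms_time(4096, 1000)
```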
Abstract:
Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palms of the hand to the object. In this chapter we begin by considering the different types of grasp, and then the physics of rigid objects needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces, allowing the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support computation of multi-body physics.
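The rigid-body result underlying any such rendering is that several contact forces reduce to a single resultant force and torque about the centre of mass: F = Σ f_i and τ = Σ (p_i − c) × f_i. The function below is an illustrative sketch of that standard computation, not the chapter's algorithm; the names and example forces are invented.

```python
import numpy as np

def net_wrench(contact_points, contact_forces, com):
    """Resultant force and torque (about the centre of mass `com`) of
    several contact forces applied to a rigid object."""
    pts = np.asarray(contact_points, dtype=float)
    fs = np.asarray(contact_forces, dtype=float)
    force = fs.sum(axis=0)                         # F = sum of f_i
    torque = np.cross(pts - com, fs).sum(axis=0)   # tau = sum (p_i - c) x f_i
    return force, torque

# Two opposing fingertip forces squeezing an object through its centre:
# the net force and net torque are both zero, so the object does not move,
# even though the grasp itself involves two non-zero forces.
F, tau = net_wrench([[0.05, 0.0, 0.0], [-0.05, 0.0, 0.0]],
                    [[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
                    com=np.zeros(3))
```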
Abstract:
Awareness of emerging situations in the dynamic operational environment of a robotic assistive device is an essential capability of such a cognitive system, and rests on effective and efficient assessment of the prevailing situation. This allows the system to interact with the environment in a sensible, (semi-)autonomous and proactive manner without the need for frequent interventions from a supervisor. In this paper we report a novel generic Situation Assessment Architecture for robotic systems directly assisting humans, as developed in the CORBYS project. The paper presents the overall architecture for situation assessment and its application in the proof-of-concept demonstrators developed and validated within the CORBYS project: a robotic human follower and a mobile gait rehabilitation robotic system. We present an overview of the structure and functionality of the Situation Assessment Architecture, together with results and observations collected from initial validation on the two CORBYS demonstrators.
Abstract:
When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to differences in the ranking of importance of the model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) to a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such an MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
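The point that different definitions of sensitivity can rank the same factors differently is easy to demonstrate on a toy model (the model, the two sensitivity measures and all numbers here are illustrative, not the paper's ESTEL-2D setup): a linear measure such as correlation misses a symmetric nonlinear effect that a variance-based measure picks up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: output depends nonlinearly (symmetrically) on x0, linearly on x1.
x = rng.uniform(0.0, 1.0, size=(5000, 2))
y = 10.0 * (x[:, 0] - 0.5) ** 2 + x[:, 1]

# Measure 1: absolute Pearson correlation (a linear sensitivity measure).
corr = [abs(np.corrcoef(x[:, j], y)[0, 1]) for j in range(2)]

def main_effect(xj, y, bins=10):
    """Measure 2: crude variance-based main effect, estimated as the
    variance of the binned conditional means divided by the total variance."""
    edges = np.linspace(0.0, 1.0, bins + 1)[1:-1]
    idx = np.digitize(xj, edges)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

s = [main_effect(x[:, j], y) for j in range(2)]

# The two definitions disagree on the ranking: correlation ranks x1 first
# (the x0 effect is symmetric about 0.5, so it barely correlates), while
# the variance-based measure correctly ranks x0 first.
```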
Abstract:
Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP)) there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences of up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger for the RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for the short-lived ozone RF, and for the 20 and 100 year GWP and the 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values of these quantities are residuals of sums of terms of opposing signs.
For example, the standard deviation for the 20 year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. This effect arises largely from the construction of the input ozone fields, which overestimates the true ensemble spread. Hence, while averages of multi-model fields are normally appropriate for calculating the mean RF, GWP and GTP, they are not a reliable method for calculating the uncertainty in these quantities, and in general overestimate that uncertainty.
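A toy calculation (all numbers invented) illustrates why perturbing ensemble-mean fields by their pointwise standard deviation overstates the spread of a residual metric such as the NOx GWP: when the opposing-sign terms are correlated across models, the net residual varies far less than either term alone, but the mean-field shortcut effectively adds the two spreads.

```python
import numpy as np

# Invented per-model values for two opposing-sign contributions to a net
# NOx metric: a positive short-lived ozone term and a negative methane term.
# The terms scale together across models, so the residual barely varies.
pos_term = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
neg_term = np.array([-9.0, -11.0, -13.0, -15.0, -17.0])
net = pos_term + neg_term                 # residual per model: all exactly 1.0

true_sd = net.std(ddof=1)                 # 0.0: the models agree on the net

# "Ensemble-mean field" shortcut: perturb each mean term by +/- its own
# standard deviation and take the resulting spread of the net, which
# ignores the cross-model correlation and adds the two spreads instead.
shortcut_sd = pos_term.std(ddof=1) + neg_term.std(ddof=1)  # ~6.32, vs 0.0
```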
Abstract:
Assessing the ways in which rural agrarian areas provide Cultural Ecosystem Services (CES) is proving difficult. This research has developed an innovative methodological approach, the Multi-Scale Indicator Framework (MSIF), for capturing the CES embedded in rural agrarian areas. The framework reconciles a literature review with a trans-disciplinary participatory workshop. Both of these sources reveal that societal preferences diverge over judgemental criteria, which in turn relate to different visual concepts that can be drawn from analysing the attributes, elements, features and characteristics of rural areas. We contend that it is now possible to list a group of possible multi-scale indicators for stewardship, diversity and aesthetics. These results might also be of use for improving existing European indicator frameworks by including CES. This research carries major implications for policy at different levels of governance, as it makes it possible to target and monitor policy instruments in physical rural settings so that cultural dimensions are adequately considered. Work remains to be done on region-specific values and thresholds for each criterion and its indicator set. In practical terms, by developing the conceptual design within a common framework as described in this paper, a considerable step can be taken towards the inclusion of the cultural dimension in Europe-wide assessments.