912 results for Numerical Algorithms and Problems
Abstract:
In this article, we review state-of-the-art techniques for mining data streams in mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams must be processed on small ubiquitous devices such as smartphones and sensor devices. Mobile and ubiquitous data mining targets these applications with tailored techniques and approaches that address scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey covers both categories. Mining mobile and ubiquitous data requires algorithms that can monitor and adapt their operation to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the context of Collaborative Data Stream Mining, where agents share knowledge to learn adaptive, accurate models.
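The resource-aware adaptation described in this abstract can be sketched in a few lines. This is an illustrative toy, not any algorithm from the survey: a running-mean stream summary that coarsens its sampling granularity when a hypothetical resource budget (here a callable returning a value in 0..1) drops.

```python
import random

def adaptive_stream_mean(stream, budget):
    """Illustrative resource-aware stream summary: keep a running mean,
    but widen the sampling interval when the available budget drops."""
    total, count, step, i = 0.0, 0, 1, 0
    for x in stream:
        if i % step == 0:
            total += x
            count += 1
        i += 1
        # Adapt: low budget -> sample more sparsely (coarser granularity).
        step = 1 if budget() > 0.5 else 4
    return total / max(count, 1)

random.seed(0)
data = [random.gauss(10.0, 1.0) for _ in range(10_000)]
# Under a tight budget the summary uses ~1/4 of the items but stays close.
print(adaptive_stream_mean(data, budget=lambda: 0.3))
```

The design choice mirrors the survey's theme: the mining loop itself monitors a resource signal and trades accuracy for cost, rather than assuming fixed resources.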
Abstract:
In the concluding paper of this tetralogy, we here use the different geomagnetic activity indices to reconstruct the near-Earth interplanetary magnetic field (IMF) and solar wind flow speed, as well as the open solar flux (OSF), from 1845 to the present day. The differences in how the various indices vary with near-Earth interplanetary parameters, which are here exploited to separate the effects of the IMF and solar wind speed, are shown to be statistically significant at the 93% level or above. Reconstructions are made using four combinations of different indices, compiled using different data and different algorithms, and the results are almost identical for all parameters. The required correction to the aa index is discussed by comparison with the Ap index from a more extensive network of mid-latitude stations. Data from the Helsinki magnetometer station are used to extend the aa index back to 1845, and the results are confirmed by comparison with the nearby St Petersburg observatory. The optimum variations, using all available long-term geomagnetic indices, of the near-Earth IMF and solar wind speed, and of the open solar flux, are presented, all with ±2σ uncertainties computed using the Monte Carlo technique outlined in the earlier papers. The open solar flux variation derived is shown to be very similar indeed to that obtained using the method of Lockwood et al. (1999).
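The ±2σ Monte Carlo error bars mentioned above follow a standard recipe: perturb the fit many times and take the spread of the refitted parameters. A minimal sketch with synthetic data; the linear index-parameter relation, noise level, and residual-resampling scheme here are invented for illustration and are not the paper's actual regressions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in data: geomagnetic index x vs interplanetary parameter y.
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

# Fit once, then Monte Carlo: refit under resampled residuals and
# collect the ensemble of fitted slopes.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
slopes = []
for _ in range(2000):
    y_mc = slope * x + intercept + rng.choice(residuals, size=x.size)
    s, _ = np.polyfit(x, y_mc, 1)
    slopes.append(s)

mean, two_sigma = np.mean(slopes), 2.0 * np.std(slopes)
print(f"slope = {mean:.3f} +/- {two_sigma:.3f} (2-sigma)")
```

The reconstruction papers propagate many more error sources (calibration, station changes, index algorithms); the ensemble-spread idea is the same.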
Abstract:
In this paper we develop and apply methods for the spectral analysis of non-selfadjoint tridiagonal infinite and finite random matrices, and for the spectral analysis of analogous deterministic matrices which are pseudo-ergodic in the sense of E. B. Davies (Commun. Math. Phys. 216 (2001), 687–704). As a major application to illustrate our methods we focus on the “hopping sign model” introduced by J. Feinberg and A. Zee (Phys. Rev. E 59 (1999), 6433–6443), in which the main objects of study are random tridiagonal matrices which have zeros on the main diagonal and random ±1’s as the other entries. We explore the relationship between spectral sets in the finite and infinite matrix cases, and between the semi-infinite and bi-infinite matrix cases, for example showing that the numerical range and p-norm ε-pseudospectra (ε > 0, p ∈ [1,∞]) of the random finite matrices converge almost surely to their infinite matrix counterparts, and that the finite matrix spectra are contained in the infinite matrix spectrum Σ. We also propose a sequence of inclusion sets for Σ which we show is convergent to Σ, with the nth element of the sequence computable by calculating smallest singular values of (large numbers of) n×n matrices. We propose similar convergent approximations for the 2-norm ε-pseudospectra of the infinite random matrices, these approximations sandwiching the infinite matrix pseudospectra from above and below.
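The smallest-singular-value computation mentioned at the end rests on a standard fact: σ_min(A − zI) is small exactly when z lies near the spectrum (or ε-pseudospectrum) of A. A small illustrative sketch, with matrix size and sample points z chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

def hopping_sign_matrix(n, rng):
    """Random tridiagonal matrix: zero main diagonal, random ±1 off-diagonals."""
    signs_up = rng.choice([-1.0, 1.0], size=n - 1)
    signs_lo = rng.choice([-1.0, 1.0], size=n - 1)
    return np.diag(signs_up, 1) + np.diag(signs_lo, -1)

def min_singular_value(A, z):
    """Smallest singular value of A - z*I; near zero when z is close to
    the spectrum, and < eps exactly on the 2-norm eps-pseudospectrum."""
    n = A.shape[0]
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

rng = np.random.default_rng(0)
A = hopping_sign_matrix(8, rng)
# Sampling z over a grid and thresholding sigma_min gives a computable
# approximation to spectral inclusion sets.
for z in [0.0, 0.5 + 0.5j, 2.5]:
    print(z, min_singular_value(A, z))
```

Since the off-diagonals have modulus 1, the 2-norm of such a matrix is at most 2, so σ_min(A − zI) is bounded away from zero for |z| > 2.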
Abstract:
Satellite data are increasingly used to provide observation-based estimates of the effects of aerosols on climate. The Aerosol-cci project, part of the European Space Agency's Climate Change Initiative (CCI), was designed to provide essential climate variables for aerosols from satellite data. Eight algorithms, developed for the retrieval of aerosol properties using data from AATSR (4), MERIS (3) and POLDER, were evaluated to determine their suitability for climate studies. The primary result from each of these algorithms is the aerosol optical depth (AOD) at several wavelengths, together with the Ångström exponent (AE), which describes the spectral variation of the AOD for a given wavelength pair. Other aerosol parameters that can be retrieved from satellite observations are not considered in this paper. The AOD and AE (AE only for Level 2) were evaluated against independent collocated observations from the ground-based AERONET sun photometer network and against “reference” satellite data provided by MODIS and MISR. Tools used for the evaluation were developed for daily products as produced by the retrieval with a spatial resolution of 10 × 10 km² (Level 2) and daily or monthly aggregates (Level 3). These tools include statistics for L2 and L3 products compared with AERONET, as well as scoring based on spatial and temporal correlations. In this paper we describe their use in a round robin (RR) evaluation of four months of data, one month for each season in 2008. The amount of data was restricted to four months because of the large effort made to improve the algorithms, and to evaluate the improvement and current status before larger data sets are processed. Evaluation criteria are discussed. Results presented show the current status of the European aerosol algorithms in comparison to both AERONET and MODIS and MISR data.
The comparison leads to a preliminary conclusion that the scores are similar, including those for the references, but the coverage of AATSR needs to be enhanced and further improvements are possible for most algorithms. None of the algorithms, including the references, outperforms all others everywhere. AATSR data can be used for the retrieval of AOD and AE over land and ocean. PARASOL and one of the MERIS algorithms have been evaluated over ocean only and both algorithms provide good results.
Abstract:
The DIAMET (DIAbatic influences on Mesoscale structures in ExTratropical storms) project aims to improve forecasts of high-impact weather in extratropical cyclones through field measurements, high-resolution numerical modeling, and improved design of ensemble forecasting and data assimilation systems. This article introduces DIAMET and presents some of the first results. Four field campaigns were conducted by the project, one of which, in late 2011, coincided with an exceptionally stormy period marked by an unusually strong, zonal North Atlantic jet stream and a succession of severe windstorms in northwest Europe. As a result, December 2011 had the highest monthly North Atlantic Oscillation index (2.52) of any December in the last 60 years. Detailed observations of several of these storms were gathered using the UK’s BAe146 research aircraft and extensive ground-based measurements. As an example of the results obtained during the campaign, observations are presented of cyclone Friedhelm on 8 December 2011, when surface winds with gusts exceeding 30 m s⁻¹ crossed central Scotland, leading to widespread disruption to transportation and electricity supply. Friedhelm deepened 44 hPa in 24 hours and developed a pronounced bent-back front wrapping around the storm center. The strongest winds at 850 hPa and the surface occurred in the southern quadrant of the storm, and detailed measurements showed these to be most intense in clear air between bands of showers. High-resolution ensemble forecasts from the Met Office showed similar features, with the strongest winds aligned in linear swaths between the bands, suggesting that there is potential for improved skill in forecasts of damaging winds.
Abstract:
We compare five general circulation models (GCMs) which have been recently used to study hot extrasolar planet atmospheres (BOB, CAM, IGCM, MITgcm, and PEQMOD), under three test cases useful for assessing model convergence and accuracy. Such a broad, detailed intercomparison has not been performed thus far for extrasolar planet studies. The models considered all solve the traditional primitive equations, but employ different numerical algorithms or grids (e.g., pseudospectral and finite volume, with the latter separately in longitude-latitude and ‘cubed-sphere’ grids). The test cases are chosen to cleanly address specific aspects of the behaviors typically reported in hot extrasolar planet simulations: 1) steady-state, 2) nonlinearly evolving baroclinic wave, and 3) response to fast timescale thermal relaxation. When initialized with a steady jet, all models maintain the steadiness, as they should—except MITgcm in cubed-sphere grid. A very good agreement is obtained for a baroclinic wave evolving from an initial instability in pseudospectral models (only). However, exact numerical convergence is still not achieved across the pseudospectral models: amplitudes and phases are observably different. When subject to a typical ‘hot-Jupiter’-like forcing, all five models show quantitatively different behavior—although qualitatively similar, time-variable, quadrupole-dominated flows are produced. Hence, as has been advocated in several past studies, specific quantitative predictions (such as the location of large vortices and hot regions) by GCMs should be viewed with caution. Overall, in the tests considered here, pseudospectral models in pressure coordinates (BOB and PEQMOD) perform the best and MITgcm in cubed-sphere grid performs the worst.
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
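The simplest concrete instance of the deterministic-plus-stochastic reduction described above is Hasselmann-type red noise: the fast, unresolved scales act as white-noise forcing on a damped slow variable, giving an Ornstein–Uhlenbeck process. A minimal Euler–Maruyama sketch; the parameter values are arbitrary illustrative choices, not drawn from the review:

```python
import numpy as np

def ou_step(x, dt, theta, sigma, rng):
    """One Euler-Maruyama step of dx = -theta*x*dt + sigma*dW:
    deterministic damping plus a stochastic kick for unresolved scales."""
    return x - theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()

rng = np.random.default_rng(42)
dt, theta, sigma = 0.01, 1.0, 0.5
x, xs = 0.0, []
for _ in range(100_000):
    x = ou_step(x, dt, theta, sigma, rng)
    xs.append(x)

# Stationary variance should approach sigma**2 / (2*theta) = 0.125,
# the fluctuation-dissipation balance of damping against noise.
print(np.var(xs))
```

Full stochastic parameterizations add state dependence and memory (non-Markovian) terms, but this damping-versus-noise balance is the core mechanism.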
Abstract:
The implications are discussed of acceleration of magnetospheric ions by reflection off two magnetopause Alfvén waves, launched by the reconnection site into the inflow regions on both sides of the boundary. The effects of these waves on the ion populations, predicted using the model described by Lockwood et al. [1996], offer a physical interpretation of all the various widely used classifications of precipitation into the dayside ionosphere, namely, central plasma sheet, dayside boundary plasma sheet (BPS), void, low-latitude boundary layer (LLBL), cusp, mantle, and polar cap. The location of the open-closed boundary and the form of the convection flow pattern are discussed in relation to the regions in which these various precipitations are typically found. Specifically, the model predicts that both the LLBL and the dayside BPS precipitations are on newly opened field lines and places the convection reversal within the LLBL, as is often observed. It is shown that this offers solutions to a number of paradoxes and problems that arise if the LLBL and BPS precipitations are thought of as being on closed field lines. This model is also used to make quantitative predictions of the longitudinal extent and latitudinal width of the cusp, as a function of solar wind density.
Abstract:
During substorms, magnetic energy is stored and released by the geomagnetic tail in cycles of growth and expansion phases, respectively. Hence substorms are inherently non-steady phenomena. On the other hand, all numerical models (and most conceptual ones) of ionospheric convection produced to date have considered only steady-state situations. In this paper, we investigate the relationship of substorms to convection. In particular, it is shown that the steady-state convection pattern represents an average over several substorm cycles and does not apply on time scales shorter than the substorm cycle period of 1-2 hours. The flows driven by the growth and expansion phases of substorms are an integral (indeed dominant) part of, as opposed to a transient addition to, the overall convection pattern.
Abstract:
The quantification of uncertainty is an increasingly popular topic, with clear importance for climate change policy. However, uncertainty assessments are open to a range of interpretations, each of which may lead to a different policy recommendation. In the EQUIP project researchers from the UK climate modelling, statistical modelling, and impacts communities worked together on ‘end-to-end’ uncertainty assessments of climate change and its impacts. Here, we use an experiment in peer review amongst project members to assess variation in the assessment of uncertainties between EQUIP researchers. We find overall agreement on key sources of uncertainty but a large variation in the assessment of the methods used for uncertainty assessment. Results show that communication aimed at specialists makes the methods used harder to assess. There is also evidence of individual bias, which is partially attributable to disciplinary backgrounds. However, varying views on the methods used to quantify uncertainty did not preclude consensus on the consequential results produced using those methods. Based on our analysis, we make recommendations for developing and presenting statements on climate and its impacts. These include the use of a common uncertainty reporting format in order to make assumptions clear; presentation of results in terms of processes and trade-offs rather than only numerical ranges; and reporting multiple assessments of uncertainty in order to elucidate a more complete picture of impacts and their uncertainties. This in turn implies research should be done by teams of people with a range of backgrounds and time for interaction and discussion, with fewer but more comprehensive outputs in which the range of opinions is recorded.
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, hindered mainly by the significant bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, yielding significant, highly scalable and nearly linear speedups of up to 6.9 and 14.5 on distributed 8-core and 16-core systems, respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared with their sequential counterparts. This is a milestone in realizing DNO. Furthermore, the techniques may be applied to other applications based on similar greedy optimization algorithms.
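The parallelism identifiable in greedy algorithms is, generically, that the candidate moves at each greedy step can be scored independently. A minimal illustrative sketch; the cost function is a made-up stand-in, not the real MNO objective, and the paper's actual distributed design is not described in the abstract:

```python
from concurrent.futures import ThreadPoolExecutor

def cost(candidate):
    # Hypothetical stand-in objective: distance from an arbitrary target.
    return (candidate - 3.7) ** 2

def parallel_greedy_step(candidates, workers=4):
    """Score all candidate moves concurrently, then greedily keep the best.
    The candidate evaluations are independent, which is the parallelism
    a greedy optimizer can exploit on multi-core or distributed systems."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(cost, candidates))
    i = min(range(len(candidates)), key=scores.__getitem__)
    return candidates[i], scores[i]

best, score = parallel_greedy_step([1.0, 2.5, 4.0, 6.0])
print(best, score)
```

Because the greedy choice itself is a cheap reduction over the scores, the sequential and parallel versions pick the same move, which is consistent with the self-consistency the paper reports. The reported speedups correspond to parallel efficiencies of 6.9/8 ≈ 86% and 14.5/16 ≈ 91%.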
Abstract:
Current approaches for the reduction of carbon emissions in buildings are often predicated on the integration of renewable technologies into building projects. Building integrated photovoltaics (BIPV) is one of these technologies and brings its own set of challenges and problems with a resulting mutual articulation of this technology and the building. A Social Construction of Technology (SCOT) approach explores how negotiations between informal groups of project actors with shared interests shape the ongoing specification of both BIPV and the building. Six main groups with different interests were found to be involved in the introduction of BIPV (Cost Watchers, Design Aesthetes, Green Guardians, Design Optimizers, Generation Maximizers and Users). Their involvement around three sets of issues (design changes from lack of familiarity with the technology, misunderstandings from unfamiliar interdependencies of trades and the effects of standard firm procedure) is followed. Findings underline how BIPV requires a level of integration that typically spans different work packages and how standard contractual structures inhibit the smooth incorporation of BIPV. Successful implementation is marked by ongoing (re-)design of both the building and the technology as informal fluid groups of project actors with shared interests address the succession of problems which arise in the process of implementation.
Abstract:
This chapter discusses how international assignments were used as a tool to expand knowledge within organisations, using the example of the Kingdom of Saudi Arabia. We focus in particular on repatriation and the problem of subsequent staff turnover among repatriates in Saudi Arabia’s private sector. Before doing so, the chapter provides a background to the Saudi labour market and the impact of Saudization policies that aimed to reduce reliance on foreign labour. Following this, the chapter discusses the Saudi government’s attempt to create a knowledgeable national labour force through international assignments. Finally, using the example of an organisation in Saudi Arabia, this chapter illustrates the possible role of Wasta - a prevalent form of nepotism that permeates organizational life in Saudi Arabia - in repatriate managers’ turnover intention. Our focus is on unravelling the impact of Wasta on HRM practices, with a particular focus on the management of the repatriation process of Saudi employees upon completion of their international assignments.
Abstract:
The impact of the Tibetan Plateau uplift on the Asian monsoons and inland arid climates is an important but also controversial question in studies of paleoenvironmental change during the Cenozoic. In order to achieve a good understanding of the background for the formation of the Asian monsoons and arid environments, it is necessary to know the characteristics of the distribution of monsoon regions and arid zones in Asia before the plateau uplift. In this study, we discuss in detail the patterns of distribution of the Asian monsoon and arid regions before the plateau uplift on the basis of modeling results without topography from a global coupled atmosphere–ocean general circulation model, compare our results with previous simulation studies and available biogeological data, and review the uncertainties in the current knowledge. Based on what we know at the moment, tropical monsoon climates existed south of 20°N in South and Southeast Asia before the plateau uplift, while the East Asian monsoon was entirely absent in the extratropics. These tropical monsoons mainly resulted from the seasonal shifts of the Inter-Tropical Convergence Zone. There may have been a quasi-monsoon region in central-southern Siberia. Most of the arid regions in the Asian continent were limited to the latitudes of 20–40°N, corresponding to the range of the subtropical high pressure year-round. In the meantime, the present-day arid regions located in the relatively high latitudes in Central Asia were most likely absent before the plateau uplift. The main results from the above modeling analyses are qualitatively consistent with the available biogeological data. These results highlight the importance of the uplift of the Tibetan Plateau in the Cenozoic evolution of the Asian climate pattern of dry–wet conditions.
Future studies should focus on the effects of changes in land–sea distribution and atmospheric CO2 concentrations before and after the plateau uplift, and on cross-comparisons between numerical simulations and geological evidence, so that a comprehensive understanding of the evolution of Cenozoic paleoenvironments in Asia can be achieved.
Abstract:
Our numerical simulations show that magnetic field reconnection becomes fast in the presence of weak turbulence, in a way consistent with the Lazarian and Vishniac (1999) model of fast reconnection. We trace particles within our numerical simulations and show that the particles can be efficiently accelerated via first-order Fermi acceleration. We discuss the acceleration arising from reconnection as a possible origin of the anomalous cosmic rays measured by the Voyager spacecraft.