214 results for RM extended algorithm
Abstract:
In this article, we investigate how the choice of the attenuation factor in an extended version of Katz centrality influences the centrality of the nodes in evolving communication networks. For given snapshots of a network, observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners (receivers) in the network. Here we explore the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time, and its computation in the case of large networks. We compare three different communicability measures: standard, exponential, and relaxed (where the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised, in order to maintain the convergence of the measure). Furthermore, using a vitality-based measure of both standard and relaxed communicability indices, we examine ways of identifying the individuals most important for broadcasting and receiving messages, in relation to community-bridging roles. We compare those measures with the scores produced by an iterative version of the PageRank algorithm and illustrate our findings with three examples of real-life evolving networks: the MIT reality mining data set, consisting of daily communications between 106 individuals over the period of one year; a UK Twitter mentions network, constructed from the direct tweets between 12.4k individuals during one week; and a subset of the Enron email data set.
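For context, the attenuation-factor constraint referred to here stems from the matrix power series behind Katz centrality; a minimal sketch of the standard formulation follows (the notation is ours, and the paper's extension is not reproduced):

\[
\mathbf{x} = \sum_{k \ge 1} a^k A^k \mathbf{1} = \left( (I - aA)^{-1} - I \right)\mathbf{1}, \qquad 0 < a < \frac{1}{\rho(A)},
\]

where $\rho(A)$ is the spectral radius. For a sequence of snapshots $A^{[1]}, \dots, A^{[M]}$, the dynamic communicability product $Q = \prod_{k=1}^{M}(I - aA^{[k]})^{-1}$ generalises this to evolving networks: row sums of $Q$ rank broadcasters and column sums rank receivers, and relaxing $a$ above $1/\rho$ requires normalising each $A^{[k]}$ so that the product still converges.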
Abstract:
BIOME 6000 is an international project to map vegetation globally at mid-Holocene (6000 14C yr bp) and last glacial maximum (LGM, 18,000 14C yr bp), with a view to evaluating coupled climate-biosphere model results. Primary palaeoecological data are assigned to biomes using an explicit algorithm based on plant functional types. This paper introduces the second Special Feature on BIOME 6000. Site-based global biome maps are shown with data from North America, Eurasia (except South and Southeast Asia) and Africa at both time periods. A map based on surface samples shows the method’s skill in reconstructing present-day biomes. Cold and dry conditions at LGM favoured extensive tundra and steppe. These biomes intergraded in northern Eurasia. Northern hemisphere forest biomes were displaced southward. Boreal evergreen forests (taiga) and temperate deciduous forests were fragmented, while European and East Asian steppes were greatly extended. Tropical moist forests (i.e. tropical rain forest and tropical seasonal forest) in Africa were reduced. In south-western North America, desert and steppe were replaced by open conifer woodland, opposite to the general arid trend but consistent with modelled southward displacement of the jet stream. The Arctic forest limit was shifted slightly north at 6000 14C yr bp in some sectors, but not in all. Northern temperate forest zones were generally shifted greater distances north. Warmer winters as well as summers in several regions are required to explain these shifts. Temperate deciduous forests in Europe were greatly extended, into the Mediterranean region as well as to the north. Steppe encroached on forest biomes in interior North America, but not in central Asia. Enhanced monsoons extended forest biomes in China inland and Sahelian vegetation into the Sahara, while the African tropical rain forest was also reduced, consistent with a modelled northward shift of the ITCZ and a more seasonal climate in the equatorial zone. Palaeobiome maps show the outcome of separate, independent migrations of plant taxa in response to climate change. The average composition of biomes at LGM was often markedly different from today. Refugia for the temperate deciduous and tropical rain forest biomes may have existed offshore at LGM, but their characteristic taxa also persisted as components of other biomes. Examples include temperate deciduous trees that survived in cool mixed forest in eastern Europe, and tropical evergreen trees that survived in tropical seasonal forest in Africa. The sequence of biome shifts during a glacial-interglacial cycle may help account for some disjunct distributions of plant taxa. For example, the now-arid Saharan mountains may have linked Mediterranean and African tropical montane floras during enhanced monsoon regimes. Major changes in physical land-surface conditions, shown by the palaeobiome data, have implications for the global climate. The data can be used directly to evaluate the output of coupled atmosphere-biosphere models. The data could also be objectively generalized to yield realistic gridded land-surface maps, for use in sensitivity experiments with atmospheric models. Recent analyses of vegetation-climate feedbacks have focused on the hypothesized positive feedback effects of climate-induced vegetation changes in the Sahara/Sahel region and the Arctic during the mid-Holocene. However, a far wider spectrum of interactions potentially exists and could be investigated, using these data, both for 6000 14C yr bp and for the LGM.
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm where the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs.
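As an illustration of the baseline the abstract describes, the sketch below shows one iteration of the straightforward parallel k-means, in which every process reduces its partial centroid sums globally at each step; the constants and names are illustrative, not taken from the paper.

    #include <mpi.h>

    #define K 8   /* number of clusters (illustrative) */
    #define D 4   /* dimensionality (illustrative) */

    /* One iteration of the straightforward parallel k-means: each rank
       owns n_local points; the MPI_Allreduce calls are the per-iteration
       global reduction that the paper seeks to relax. */
    void kmeans_step(const double *pts, int n_local, double centroids[K][D])
    {
        double sums[K][D] = {{0.0}};
        double gsums[K][D];
        long counts[K] = {0}, gcounts[K];

        for (int i = 0; i < n_local; ++i) {
            const double *p = pts + (long)i * D;
            int best = 0;
            double bestd = 1e300;
            for (int c = 0; c < K; ++c) {   /* assign to nearest centroid */
                double d = 0.0;
                for (int j = 0; j < D; ++j) {
                    double t = p[j] - centroids[c][j];
                    d += t * t;
                }
                if (d < bestd) { bestd = d; best = c; }
            }
            counts[best]++;
            for (int j = 0; j < D; ++j) sums[best][j] += p[j];
        }

        /* Global reduction at every iteration: the scalability bottleneck. */
        MPI_Allreduce(sums, gsums, K * D, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        MPI_Allreduce(counts, gcounts, K, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

        for (int c = 0; c < K; ++c)         /* recompute centroids */
            if (gcounts[c] > 0)
                for (int j = 0; j < D; ++j)
                    centroids[c][j] = gsums[c][j] / (double)gcounts[c];
    }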
Abstract:
Reinforcing the Low Voltage (LV) distribution network will become essential to ensure it remains within its operating constraints as demand on the network increases. The deployment of energy storage in the distribution network provides an alternative to conventional reinforcement. This paper presents a control methodology for energy storage to reduce peak demand in a distribution network based on day-ahead demand forecasts and historical demand data. The control methodology pre-processes the forecast data prior to a planning phase to build in resilience to the inevitable errors between the forecasted and actual demand. The algorithm uses no real-time adjustment and therefore has an economic advantage over traditional storage control algorithms. Results show that peak demand on a single phase of a feeder can be reduced even when there are differences between the forecasted and the actual demand. In particular, results are presented demonstrating that, when the algorithm is applied to a large number of single-phase demand aggregations, it is possible to identify which of these aggregations are the most suitable candidates for the control methodology.
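The planning phase itself is not detailed in the abstract; purely as a hypothetical illustration of a fixed day-ahead plan with no real-time adjustment, the sketch below shaves forecast demand above a threshold chosen by bisection so that the discharged energy fits the battery capacity. All names and parameters are invented for illustration, not taken from the paper.

    #define SLOTS 48   /* half-hourly day-ahead forecast (hypothetical) */

    /* Compute a fixed day-ahead discharge plan (kW per slot): shave the
       forecast above a threshold found by bisection so that the total
       discharged energy fits the battery. Hypothetical sketch only. */
    void plan_schedule(const double forecast[SLOTS], double capacity_kwh,
                       double max_power_kw, double schedule_kw[SLOTS])
    {
        double lo = 0.0, hi = 0.0;
        for (int t = 0; t < SLOTS; ++t)
            if (forecast[t] > hi) hi = forecast[t];

        for (int iter = 0; iter < 50; ++iter) {
            double thr = 0.5 * (lo + hi), energy = 0.0;
            for (int t = 0; t < SLOTS; ++t) {
                double d = forecast[t] - thr;
                if (d > max_power_kw) d = max_power_kw;
                if (d > 0.0) energy += 0.5 * d;   /* kW over half an hour */
            }
            if (energy > capacity_kwh) lo = thr; else hi = thr;
        }
        for (int t = 0; t < SLOTS; ++t) {   /* final fixed schedule */
            double d = forecast[t] - hi;
            if (d < 0.0) d = 0.0;
            if (d > max_power_kw) d = max_power_kw;
            schedule_kw[t] = d;
        }
    }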
Abstract:
Facility management (FM), from a service-oriented approach, addresses the functions and requirements of different services such as energy management, space planning and security service. Different services require different information to meet the needs arising from each service. Object-based Building Information Modelling (BIM) offers only limited support for FM services, even though this technology is able to generate 3D models that semantically represent a facility’s information dynamically over the lifecycle of a building. This paper presents a semiotics-inspired framework to extend BIM from a service-oriented perspective. The extended BIM, which specifies FM services and the information they require, will be able to express building service information in the right format for the right purposes. The service-oriented approach concerns the pragmatic aspect of a building’s information, beyond the semantic level. The pragmatics defines and provides context for the utilisation of a building’s information. The semiotic theory adopted in this paper addresses the pragmatic issues in the utilisation of BIM for FM services.
Abstract:
Extended cusp-like regions (ECRs) are surveyed, as observed by the Magnetospheric Ion Composition Sensor (MICS) of the Charge and Mass Magnetospheric Ion Composition Experiment (CAMMICE) instrument aboard Polar between 1996 and 1999. The first of these ECR events was observed on 29 May 1996, an event widely discussed in the literature and initially thought to be caused by tail lobe reconnection due to the coinciding prolonged interval of strong northward IMF. ECRs are characterized here by intense fluxes of magnetosheath-like ions in the energy-per-charge range of ∼1 to 10 keV e⁻¹. We investigate the concurrence of ECRs with intervals of prolonged (lasting longer than 1 and 3 hours) orientations of the IMF vector and high solar wind dynamic pressure (PSW). Also investigated is the opposite concurrence, i.e., of the IMF and high PSW with ECRs. (Note that these surveys are asking distinctly different questions.) The former survey indicates that ECRs have no overall preference for any orientation of the IMF. However, the latter survey reveals that during northward IMF, particularly when accompanied by high PSW, ECRs are more likely. We also test for orbital and seasonal effects, revealing that Polar has to be in a particular region to observe ECRs and that they occur more frequently around late spring. These results indicate that ECRs have three distinct causes and so can relate to extended intervals in (1) the cusp on open field lines, (2) the magnetosheath, and (3) the magnetopause indentation at the cusp, with the latter allowing magnetosheath plasma to approach close to the Earth without entering the magnetosphere.
Abstract:
The equations of Milsom are evaluated, giving the ground range and group delay of radio waves propagated via the horizontally stratified model ionosphere proposed by Bradley and Dudeney. Expressions for the ground range which allow for the effects of the underlying E- and F1-regions are used to evaluate the basic maximum usable frequency or M-factors for single F-layer hops. An algorithm for the rapid calculation of the M-factor at a given range is developed, and shown to be accurate to within 5%. The results reveal that the M(3000)F2-factor scaled from vertical-incidence ionograms using the standard URSI procedure can be up to 7.5% in error. A simple addition to the algorithm effects a correction to ionogram values to make these accurate to 0.5%.
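For context, the M-factor terminology follows the standard convention that the maximum usable frequency for a path of ground range $D$ is obtained by scaling the F2-layer critical frequency:

\[ \mathrm{MUF}(D) = M(D)\, f_oF_2 , \]

so that M(3000)F2 converts an ionogram-scaled $f_oF_2$ into the MUF of a 3000 km circuit.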
Abstract:
This paper describes a fast integer sorting algorithm, herein referred to as Bit-index sort, which is a non-comparison sorting algorithm for partial permutations, with linear time complexity. Bit-index sort uses a bit-array to classify input sequences of distinct integers, and exploits built-in bit functions in C compilers, supported by machine hardware, to retrieve the ordered output sequence. Results show that Bit-index sort outperforms quicksort and counting sort in execution time. A parallel approach for Bit-index sort using two simultaneous threads is included, which obtains speedups of up to 1.6.
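A minimal sketch of the idea as described, assuming GCC/Clang builtins for the bit functions and a fixed, illustrative key range (the paper's exact implementation may differ):

    #include <stdint.h>
    #include <string.h>

    #define RANGE 4096          /* illustrative key range [0, RANGE) */
    #define WORDS (RANGE / 64)

    /* Sort n distinct integers by marking them in a bit array ("classify")
       and scanning the set bits in order ("retrieve"). */
    void bit_index_sort(const uint32_t *in, size_t n, uint32_t *out)
    {
        uint64_t bits[WORDS];
        memset(bits, 0, sizeof bits);

        for (size_t i = 0; i < n; ++i)          /* classify */
            bits[in[i] >> 6] |= 1ULL << (in[i] & 63);

        size_t k = 0;
        for (size_t w = 0; w < WORDS; ++w) {    /* retrieve in order */
            uint64_t word = bits[w];
            while (word) {
                unsigned b = (unsigned)__builtin_ctzll(word); /* lowest set bit */
                out[k++] = (uint32_t)(w * 64 + b);
                word &= word - 1;               /* clear that bit */
            }
        }
    }

The classify pass is a single store per key, and the retrieve pass touches each machine word once, which is where the hardware-supported bit functions pay off.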
Abstract:
This article is concerned with the liability of search engines for algorithmically produced search suggestions, such as through Google’s ‘autocomplete’ function. Liability in this context may arise when automatically generated associations have an offensive or defamatory meaning, or may even induce infringement of intellectual property rights. The increasing number of cases that have been brought before courts all over the world raises questions on the conflict between the fundamental freedoms of speech and access to information on the one hand, and the personality rights of individuals (under a broader right of informational self-determination) on the other. In the light of the recent judgment of the Court of Justice of the European Union (EU) in Google Spain v AEPD, this article concludes that many requests for removal of suggestions including private individuals’ information will be successful on the basis of EU data protection law, even absent prejudice to the person concerned.
Abstract:
Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.
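For context, the single-spacecraft-fitting techniques mentioned all work by inverting a time-elongation profile under an assumed geometry; in the simplest (fixed-phi) approximation, a point source moving radially at constant speed $V$, at an angle $\phi$ from the observer-Sun line, seen from heliocentric distance $d_O$, traces the elongation

\[
\varepsilon(t) = \arctan\!\left(\frac{V t \sin\phi}{d_O - V t \cos\phi}\right),
\]

so fitting the observed $\varepsilon(t)$ yields estimates of $V$ and $\phi$. Systematic offsets between catalogues' time-elongation profiles therefore propagate directly into the fitted kinematics.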
Abstract:
Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, sclera segmentation has not been extensively researched as a separate topic, but mainly summarized as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at the pixel level. By exploring various colour spaces, the proposed approach is made robust to image noise and different gaze directions. The algorithm’s robustness is enhanced by a two-stage classifier. At the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method was ranked first in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a corresponding recall of 94.56%.
Abstract:
This study has explored the prediction errors of tropical cyclones (TCs) in the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) for the Northern Hemisphere summer period for five recent years. Results for the EPS are contrasted with those for the higher-resolution deterministic forecasts. Various metrics of location and intensity errors are considered and contrasted for verification based on IBTrACS and the numerical weather prediction (NWP) analysis (NWPa). Motivated by the aim of exploring extended TC life cycles, location and intensity measures are introduced based on lower-tropospheric vorticity, which is contrasted with traditional verification metrics. Results show that location errors are almost identical when verified against IBTrACS or the NWPa. However, intensity in the form of the mean sea level pressure (MSLP) minima and 10-m wind speed maxima is significantly underpredicted relative to IBTrACS. Using the NWPa for verification results in much better consistency between the different intensity error metrics and indicates that the lower-tropospheric vorticity provides a good indication of vortex strength, with error results showing similar relationships to those based on MSLP and 10-m wind speeds for the different forecast types. The interannual variation in forecast errors is discussed in relation to changes in the forecast and NWPa systems, and variations in forecast errors between different ocean basins are discussed in terms of the propagation characteristics of the TCs.
Abstract:
We present a novel algorithm for concurrent model state and parameter estimation in nonlinear dynamical systems. The new scheme uses ideas from three-dimensional variational data assimilation (3D-Var) and the extended Kalman filter (EKF) together with the technique of state augmentation to estimate uncertain model parameters alongside the model state variables in a sequential filtering system. The method is relatively simple to implement and computationally inexpensive to run for large systems with relatively few parameters. We demonstrate the efficacy of the method via a series of identical twin experiments with three simple dynamical system models. The scheme is able to recover the parameter values to a good level of accuracy, even when observational data are noisy. We expect this new technique to be easily transferable to much larger models.
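The state-augmentation technique that the scheme builds on can be sketched in its standard form (our notation; the paper's hybrid 3D-Var/EKF gain is not reproduced here): the state $x$ and uncertain parameters $\theta$ are stacked into an augmented vector, the forecast evolves $x$ with the model $\mathcal{M}$ while carrying $\theta$ by persistence, and both are updated jointly,

\[
w^f_{k+1} = \begin{pmatrix} \mathcal{M}(x^a_k, \theta^a_k) \\ \theta^a_k \end{pmatrix}, \qquad
w^a_k = w^f_k + K_k \left( y_k - H_k w^f_k \right),
\]

where the observation operator $H_k$ acts only on the $x$-components, so the parameters are corrected through the forecast cross-covariances between state and parameters encoded in the gain $K_k$.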
Abstract:
This work investigates the problem of feature selection in neuroimaging features from structural MRI brain images for the classification of subjects as healthy controls, suffering from Mild Cognitive Impairment, or suffering from Alzheimer’s Disease. A Genetic Algorithm wrapper method for feature selection is adopted in conjunction with a Support Vector Machine classifier. In very large feature sets, feature selection is found to be redundant, as the accuracy is often worse than that of a Support Vector Machine with no feature selection. However, when just the hippocampal subfields are used, feature selection shows a significant improvement in the classification accuracy. Three-class Support Vector Machines and two-class Support Vector Machines combined with weighted voting are also compared, with the former found more useful. The highest accuracy achieved at classifying the test data was 65.5%, using a genetic algorithm for feature selection with a three-class Support Vector Machine classifier.
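As a hypothetical sketch of a Genetic Algorithm wrapper of this kind (the population size, operators, and the classifier hook are invented for illustration; svm_cv_accuracy stands in for training and cross-validating the Support Vector Machine on the selected features):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define N_FEAT 64   /* illustrative feature count */
    #define POP    32
    #define GENS   100

    /* Hypothetical hook: train the wrapped classifier on the features
       selected by `mask` and return cross-validated accuracy. */
    extern double svm_cv_accuracy(const uint8_t mask[N_FEAT]);

    static void mutate(uint8_t m[N_FEAT]) {
        for (int i = 0; i < N_FEAT; ++i)
            if (rand() % N_FEAT == 0) m[i] ^= 1;   /* ~1 flip on average */
    }

    /* GA wrapper: chromosomes are feature masks, fitness is classifier
       accuracy. A sketch of the approach, not the paper's exact method. */
    void ga_feature_selection(uint8_t best[N_FEAT])
    {
        uint8_t pop[POP][N_FEAT];
        double fit[POP];

        for (int p = 0; p < POP; ++p)              /* random initialisation */
            for (int i = 0; i < N_FEAT; ++i)
                pop[p][i] = (uint8_t)(rand() & 1);

        for (int g = 0; g < GENS; ++g) {
            for (int p = 0; p < POP; ++p)
                fit[p] = svm_cv_accuracy(pop[p]);  /* wrapper evaluation */

            uint8_t next[POP][N_FEAT];
            for (int p = 0; p < POP; ++p) {
                int a = rand() % POP, b = rand() % POP;   /* tournaments */
                int pa = fit[a] > fit[b] ? a : b;
                a = rand() % POP; b = rand() % POP;
                int pb = fit[a] > fit[b] ? a : b;

                int cut = rand() % N_FEAT;         /* one-point crossover */
                memcpy(next[p], pop[pa], cut);
                memcpy(next[p] + cut, pop[pb] + cut, N_FEAT - cut);
                mutate(next[p]);
            }
            memcpy(pop, next, sizeof pop);
        }

        int bestp = 0;
        for (int p = 0; p < POP; ++p) {            /* keep fittest mask */
            fit[p] = svm_cv_accuracy(pop[p]);
            if (fit[p] > fit[bestp]) bestp = p;
        }
        memcpy(best, pop[bestp], N_FEAT);
    }

Because fitness requires retraining the classifier for every chromosome, wrapper selection of this kind is costly on very large feature sets, which is consistent with the restriction to hippocampal subfields reported above.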