12 results for Maximum Degree Proximity algorithm (MAX-DPA)
in CentAUR: Central Archive, University of Reading - UK
Abstract:
The micellization of F127 (E98P67E98) in dilute aqueous solutions of poly(ethylene glycol) (PEG6000 and PEG35000) and poly(vinylpyrrolidone) (PVP K30 and PVP K90) is studied. The average hydrodynamic radius (rh,app) obtained from dynamic light scattering increased with PEG concentration but decreased on addition of PVP, results consistent with interaction of the micelles with PEG and the formation of micelle clusters, whereas no such interaction occurs with PVP. Tube inversion was used to determine the onset of gelation. The critical concentration of F127 for gelation increased on addition of PEG and of PVP K30 but decreased on addition of PVP K90. Small-angle X-ray scattering (SAXS) showed that the 30 wt% F127 gel structure (fcc) was independent of polymer type and concentration, as were the d-spacing and hence the micelle hard-sphere radius. The maximum elastic modulus (G′max) of 30 wt% F127 decreased from its value in water alone as PEG was added, but was little changed by adding PVP. These results are consistent with the packed micelles in the 30 wt% F127 gel being effectively isolated from the polymer solution on the microscale while, especially for PEG, being mixed on the macroscale.
Abstract:
The equations of Milsom are evaluated, giving the ground range and group delay of radio waves propagated via the horizontally stratified model ionosphere proposed by Bradley and Dudeney. Expressions for the ground range which allow for the effects of the underlying E- and F1-regions are used to evaluate the basic maximum usable frequency, or M-factors, for single F-layer hops. An algorithm for the rapid calculation of the M-factor at a given range is developed and shown to be accurate to within 5%. The results reveal that the M(3000)F2-factor scaled from vertical-incidence ionograms using the standard URSI procedure can be in error by up to 7.5%. A simple addition to the algorithm corrects ionogram values to within 0.5%.
Abstract:
Samples of glacial till deposited since the Little Ice Age (LIA) maximum by two glaciers, North Bogbre at Svartisen and Corneliussen-breen at Okstindan, northern Norway, were obtained from transects running from the current glacier snout to the LIA (c. AD 1750) limit. The samples were analysed to determine their sediment magnetic properties, which display considerable variability. Significant trends in some magnetic parameters are evident with distance from the glacier margin and hence with length of subaerial exposure. Magnetic susceptibility (χ) decreases away from the contemporary snout, perhaps due to the weathering of ferrimagnetic minerals into antiferromagnetic forms, although this trend is generally not statistically significant. Statistically significant trends in the soft IRM/hard IRM ratio support this hypothesis, suggesting that antiferromagnetic minerals increase relative to ferrimagnetic minerals towards the LIA maximum. Backfield ratios (IRM−100 mT/SIRM) also display a significant and strong trend towards magnetically harder behaviour with proximity to the LIA maximum. Thus, by employing a chronosequence approach, it may be possible to use sediment magnetics data as a tool for reconstructing glacier retreat in areas where more traditional techniques, such as lichenometry, are not applicable.
Abstract:
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, using the RBF neural network to represent the transformed system output. Initially a fixed, moderately sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the block matrix inversion lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
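The core estimation step here, choosing the Box-Cox parameter by maximum likelihood, can be sketched independently of the RBF machinery. The following is a minimal illustration under a Gaussian model for the transformed data; the grid search stands in for the Gauss-Newton iteration used in the letter, and all data and parameter values are hypothetical:

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox transform; lam -> 0 gives the log transform
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def boxcox_loglik(y, lam):
    # Profile log-likelihood of the Box-Cox parameter, assuming the
    # transformed data are Gaussian (Jacobian term included)
    z = boxcox(y, lam)
    n = len(y)
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

def fit_lambda(y, grid=np.linspace(-2, 2, 401)):
    # Simple 1-D grid search in place of the paper's Gauss-Newton step
    lls = [boxcox_loglik(y, l) for l in grid]
    return grid[int(np.argmax(lls))]

rng = np.random.default_rng(0)
y = np.exp(rng.normal(size=500))   # lognormal data: true lambda is ~0
lam = fit_lambda(y)
```

With lognormal input the estimated parameter should land near zero, since the log transform Gaussianizes the data exactly.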
Abstract:
Fully connected cubic networks (FCCNs) are a class of newly proposed hierarchical interconnection networks for multicomputer systems, which enjoy the strengths of constant node degree and good expandability. Shortest-path routing in FCCNs is an open problem. In this paper, we present an oblivious routing algorithm for an n-level FCCN with N = 8^n nodes, and prove that this algorithm creates a shortest path from the source to the destination. At the cost of an O(N)-parallel-step off-line preprocessing phase and a list of size N stored at each node, the proposed algorithm is carried out at each node on the route in O(n) time. In some cases the proposed algorithm is superior to the one proposed by Chang and Wang in terms of the length of the routing path, which justifies the utility of our routing strategy.
Abstract:
The paper analyzes the performance of the unconstrained filtered-x LMS (FxLMS) algorithm for active noise control (ANC), in which the constraints that the controller be causal and have a finite impulse response are removed. It is shown that the unconstrained FxLMS algorithm, if stable, always converges to the true optimum filter, even if the estimate of the secondary path is imperfect, and that its final mean square error is independent of the secondary path. Moreover, we show that the necessary and sufficient stability condition for the feedforward unconstrained FxLMS is that the maximum phase error of the secondary-path estimate be within 90°; for the feedback unconstrained FxLMS this is only a necessary condition. The significance of the analysis for a practical system is also discussed. Finally, we show how the obtained results can guide the design of a robust feedback ANC headset.
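The FxLMS recursion the analysis concerns can be illustrated with a toy single-channel simulation. Note this is a short causal FIR sketch with invented primary and secondary paths, not the unconstrained controller analysed in the paper; it simply shows the filtered-reference update that gives the algorithm its name:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([0.0, 0.8, 0.3])       # hypothetical primary path
S = np.array([0.6, 0.2])            # hypothetical secondary path
S_hat = S.copy()                    # perfect secondary-path estimate
L, mu = 8, 0.05                     # controller length, step size
w = np.zeros(L)                     # adaptive FIR controller

x = rng.normal(size=4000)           # reference signal
d = np.convolve(x, P)[:len(x)]      # disturbance at the error microphone
fx = np.convolve(x, S_hat)[:len(x)] # filtered-x (reference through S_hat)

xb = np.zeros(L)                    # controller input buffer
yb = np.zeros(len(S))               # secondary-path input buffer
fxb = np.zeros(L)                   # filtered-x buffer
errs = []
for n in range(len(x)):
    xb = np.roll(xb, 1); xb[0] = x[n]
    y = w @ xb                      # anti-noise sample
    yb = np.roll(yb, 1); yb[0] = y
    e = d[n] + S @ yb               # residual at the error microphone
    fxb = np.roll(fxb, 1); fxb[0] = fx[n]
    w -= mu * e * fxb               # FxLMS gradient update
    errs.append(e)

early = np.mean(np.square(errs[:500]))
late = np.mean(np.square(errs[-500:]))
```

Because the secondary-path estimate here is exact (zero phase error), the loop is comfortably inside the stability region and the residual decays toward the noise floor.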
Abstract:
For a targeted observations case, the dependence of the size of the forecast impact on the targeted dropsonde observation error in the data assimilation is assessed. The targeted observations were made in the lee of Greenland; the dependence of the impact on the proximity of the observations to the Greenland coast is also investigated. Experiments were conducted using the Met Office Unified Model (MetUM), over a limited-area domain at 24-km grid spacing, with a four-dimensional variational data assimilation (4D-Var) scheme. Reducing the operational dropsonde observation errors by one-half increases the maximum forecast improvement from 5% to 7%–10%, measured in terms of total energy. However, the largest impact is seen by replacing two dropsondes on the Greenland coast with two farther from the steep orography; this increases the maximum forecast improvement from 5% to 18% for an 18-h forecast (using operational observation errors). Forecast degradation caused by two dropsonde observations on the Greenland coast is shown to arise from spreading of data by the background errors up the steep slope of Greenland. Removing boundary layer data from these dropsondes reduces the forecast degradation, but it is only a partial solution to this problem. Although only from one case study, these results suggest that observations positioned within a correlation length scale of steep orography may degrade the forecast through the anomalous upslope spreading of analysis increments along terrain-following model levels.
Abstract:
The collection of wind speed time series by means of digital data loggers occurs in many domains, including civil engineering, environmental science and wind turbine technology. Since averaging intervals are often significantly larger than the typical time scales of the system, the information lost has to be recovered in order to reconstruct the true dynamics of the system. In this work we present a simple algorithm capable of generating a high-resolution wind speed time series from data logger records containing the average, maximum, and minimum values of the wind speed in a fixed interval, as well as the standard deviation. The signal is generated from a generalized random Fourier series. The spectrum can be matched to any desired theoretical or measured frequency distribution. Extreme values are specified through a postprocessing step based on the concept of constrained simulation. Applications of the algorithm to 10-min wind speed records logged at a test site at 60 m height above the ground show that the recorded 10-min values can be reproduced by the simulated time series to a high degree of accuracy.
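The core of such a reconstruction, a random-phase Fourier series rescaled to the logged statistics, can be sketched as follows. The f^(-5/3)-type spectral decay and all parameter values are illustrative assumptions, and the paper's constrained-simulation step for matching the recorded extremes is omitted:

```python
import numpy as np

def reconstruct_interval(mean, std, n=600, fs=1.0, seed=0):
    # Build a zero-mean fluctuation from a random-phase Fourier series
    # with an assumed power-law spectral decay (amplitude ~ sqrt(PSD)),
    # then rescale to the recorded interval mean and standard deviation.
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-5.0 / 6.0)      # sqrt of an f^(-5/3) PSD
    phases = rng.uniform(0, 2 * np.pi, size=len(freqs))
    u = np.fft.irfft(amp * np.exp(1j * phases), n)
    u = (u - u.mean()) / u.std()             # normalise the fluctuation
    return mean + std * u

# e.g. one 10-min interval logged at 1 Hz with mean 8.2 m/s, std 1.1 m/s
x = reconstruct_interval(mean=8.2, std=1.1)
```

By construction the simulated series reproduces the logged mean and standard deviation exactly; matching the logged maximum and minimum would require the additional constrained-simulation postprocessing described in the abstract.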
Abstract:
Advances in hardware and software over the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetry data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example when the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than force a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
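To make the abstaining behaviour concrete, here is a toy rule-based stream classifier in a similar spirit. This is emphatically not the published eRules algorithm: it uses single-condition rules naively re-induced over a sliding window, and all thresholds are invented. Its one faithful trait is that it returns None (leaves the instance unclassified) when no rule covers it:

```python
from collections import defaultdict, deque

class TinyRuleStream:
    # Illustrative sketch only: one-condition rules over a sliding
    # window of recent labelled instances, abstaining on no coverage.
    def __init__(self, window=200, min_seen=10, min_acc=0.7):
        self.window = deque(maxlen=window)   # recent (x, y) pairs
        self.min_seen, self.min_acc = min_seen, min_acc

    def _rules(self):
        # Re-induce rules from the current window (naive but simple)
        stats = defaultdict(lambda: defaultdict(int))
        for x, y in self.window:
            for cond in x.items():
                stats[cond][y] += 1
        rules = {}
        for cond, counts in stats.items():
            total = sum(counts.values())
            label, hits = max(counts.items(), key=lambda kv: kv[1])
            if total >= self.min_seen and hits / total >= self.min_acc:
                rules[cond] = label
        return rules

    def predict(self, x):
        rules = self._rules()
        for cond in x.items():
            if cond in rules:
                return rules[cond]
        return None          # leave unclassified rather than guess

    def learn(self, x, y):
        self.window.append((x, y))

clf = TinyRuleStream()
# drifting stream: 'colour' predicts the class, then the mapping flips
for i in range(300):
    x = {"colour": "red" if i % 2 else "blue"}
    y = (x["colour"] == "red") if i < 150 else (x["colour"] == "blue")
    clf.learn(x, y)
pred_red = clf.predict({"colour": "red"})
pred_blue = clf.predict({"colour": "blue"})
```

After the drift, the sliding window lets the new majority label overtake the stale one, so the rules adapt without any explicit drift detector; the real eRules uses a much more principled adaptation mechanism.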
Abstract:
Flood simulation models and hazard maps are only as good as the underlying data against which they are calibrated and tested. However, extreme flood events are by definition rare, so observational data on flood inundation extent are limited in both quality and quantity. The relative importance of these observational uncertainties has increased now that computing power and accurate lidar scans make it possible to run high-resolution 2D models to simulate floods in urban areas. However, the value of these simulations is limited by the uncertainty in the true extent of the flood. This paper addresses that challenge by analyzing a point dataset of maximum water extent from a flood event on the River Eden at Carlisle, United Kingdom, in January 2005. The observation dataset is based on a collection of wrack and water marks from two postevent surveys. A smoothing algorithm for identifying, quantifying, and reducing localized inconsistencies in the dataset is proposed and evaluated, showing positive results. The proposed smoothing algorithm can be applied to improve the assessment of flood inundation models and the determination of risk zones on the floodplain.
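The general idea of detecting and reducing localized inconsistencies in a set of surveyed water marks can be illustrated with a simple along-reach median filter. This is a hedged sketch, not the paper's algorithm; the chainage coordinates, search radius and tolerance are invented for illustration:

```python
import numpy as np

def smooth_marks(chainage, level, radius=250.0, tol=0.5):
    # Flag marks whose level differs from the median of neighbours
    # within `radius` metres along the reach by more than `tol` metres,
    # and replace them with that local median (simple sequential pass).
    chainage = np.asarray(chainage, float)
    level = np.asarray(level, float).copy()
    flagged = []
    for i in range(len(level)):
        near = np.abs(chainage - chainage[i]) <= radius
        near[i] = False
        if near.sum() >= 3:          # need enough neighbours to judge
            med = np.median(level[near])
            if abs(level[i] - med) > tol:
                flagged.append(i)
                level[i] = med       # reduce the local inconsistency
    return level, flagged

ch = [0, 100, 200, 300, 400]            # chainage along the reach (m)
lv = [10.0, 10.1, 12.6, 10.2, 10.3]     # 12.6 m is an inconsistent mark
sm, bad = smooth_marks(ch, lv)
```

The inconsistent mark at chainage 200 is flagged and pulled back to the local median, while the consistent marks are left untouched; a real application would tune the radius and tolerance to survey accuracy and local water-surface slope.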
Abstract:
BIOME 6000 is an international project to map vegetation globally at the mid-Holocene (6000 14C yr BP) and the last glacial maximum (LGM, 18,000 14C yr BP), with a view to evaluating coupled climate-biosphere model results. Primary palaeoecological data are assigned to biomes using an explicit algorithm based on plant functional types. This paper introduces the second Special Feature on BIOME 6000. Site-based global biome maps are shown with data from North America, Eurasia (except South and Southeast Asia) and Africa at both time periods. A map based on surface samples shows the method's skill in reconstructing present-day biomes. Cold and dry conditions at the LGM favoured extensive tundra and steppe. These biomes intergraded in northern Eurasia. Northern hemisphere forest biomes were displaced southward. Boreal evergreen forests (taiga) and temperate deciduous forests were fragmented, while European and East Asian steppes were greatly extended. Tropical moist forests (i.e. tropical rain forest and tropical seasonal forest) in Africa were reduced. In south-western North America, desert and steppe were replaced by open conifer woodland, opposite to the general arid trend but consistent with a modelled southward displacement of the jet stream. The Arctic forest limit was shifted slightly north at 6000 14C yr BP in some sectors, but not in all. Northern temperate forest zones were generally shifted greater distances north. Warmer winters as well as summers in several regions are required to explain these shifts. Temperate deciduous forests in Europe were greatly extended, into the Mediterranean region as well as to the north. Steppe encroached on forest biomes in interior North America, but not in central Asia. Enhanced monsoons extended forest biomes in China inland and Sahelian vegetation into the Sahara, while the African tropical rain forest was also reduced, consistent with a modelled northward shift of the ITCZ and a more seasonal climate in the equatorial zone.
Palaeobiome maps show the outcome of separate, independent migrations of plant taxa in response to climate change. The average composition of biomes at LGM was often markedly different from today. Refugia for the temperate deciduous and tropical rain forest biomes may have existed offshore at LGM, but their characteristic taxa also persisted as components of other biomes. Examples include temperate deciduous trees that survived in cool mixed forest in eastern Europe, and tropical evergreen trees that survived in tropical seasonal forest in Africa. The sequence of biome shifts during a glacial-interglacial cycle may help account for some disjunct distributions of plant taxa. For example, the now-arid Saharan mountains may have linked Mediterranean and African tropical montane floras during enhanced monsoon regimes. Major changes in physical land-surface conditions, shown by the palaeobiome data, have implications for the global climate. The data can be used directly to evaluate the output of coupled atmosphere-biosphere models. The data could also be objectively generalized to yield realistic gridded land-surface maps, for use in sensitivity experiments with atmospheric models. Recent analyses of vegetation-climate feedbacks have focused on the hypothesized positive feedback effects of climate-induced vegetation changes in the Sahara/Sahel region and the Arctic during the mid-Holocene. However, a far wider spectrum of interactions potentially exists and could be investigated, using these data, both for 6000 14C yr bp and for the LGM.
Abstract:
Reconstructions of salinity are used to diagnose changes in the hydrological cycle and ocean circulation. A widely used method of determining past salinity uses oxygen isotope (δ18Ow) residuals after the extraction of the global ice volume and temperature components. This method relies on a constant relationship between δ18Ow and salinity through time. Here we use the isotope-enabled fully coupled general circulation model (GCM) HadCM3 to test the application of spatially and temporally independent relationships in the reconstruction of past ocean salinity. Simulations of the Late Holocene (LH), Last Glacial Maximum (LGM), and Last Interglacial (LIG) climates are performed and benchmarked against existing compilations of stable oxygen isotopes in carbonates (δ18Oc), which primarily reflect δ18Ow and temperature. We find that HadCM3 produces an accurate representation of the surface ocean δ18Oc distribution for the LH and LGM. Our simulations show considerable variability in spatial and temporal δ18Ow-salinity relationships. Spatial gradients are generally shallower than, but within ∼50% of, the actual simulated LH-to-LGM and LH-to-LIG temporal gradients, and temporal gradients calculated from multi-decadal variability are generally shallower than both the spatial and the actual simulated gradients. The largest sources of uncertainty in salinity reconstructions are changes in regional freshwater budgets, ocean circulation, and sea ice regimes; these can cause errors in salinity estimates exceeding 4 psu. Our results suggest that paleosalinity reconstructions in the South Atlantic, Indian and tropical Pacific Oceans should be most robust, since these regions exhibit relatively constant δ18Ow-salinity relationships across spatial and temporal scales. The largest uncertainties will affect North Atlantic and high-latitude paleosalinity reconstructions.
Finally, the results show that it is difficult to generate reliable salinity estimates for regions of dynamic oceanography, such as the North Atlantic, without additional constraints.
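The residual method being tested can be written out compactly. In the sketch below the linear coefficients a and b (the regional slope and intercept of the δ18Ow-salinity mixing line) are generic symbols, not values from the paper:

```latex
\delta^{18}\mathrm{O}_w = \delta^{18}\mathrm{O}_c - f(T), \qquad
\delta^{18}\mathrm{O}_w^{\mathrm{local}} = \delta^{18}\mathrm{O}_w - \Delta\delta^{18}\mathrm{O}_{\mathrm{ice}}, \qquad
S \approx \frac{\delta^{18}\mathrm{O}_w^{\mathrm{local}} - b}{a}
```

Here f(T) is the temperature-dependent fractionation removed from the carbonate value and Δδ18O_ice the global ice-volume contribution. The abstract's central finding is that a and b are in fact neither spatially nor temporally constant, which is what propagates errors of several psu into the reconstructed salinity S.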