140 results for Internal algorithms
Abstract:
We discuss the modelling of dielectric responses of amorphous biological samples. Such samples are commonly encountered in impedance spectroscopy studies as well as in UV, IR, optical and THz transient spectroscopy experiments and in pump-probe studies. On many occasions, the samples may display quenched absorption bands. A systems identification framework may be developed to provide parsimonious representations of such responses. To achieve this, it is appropriate to augment the standard models found in the identification literature to incorporate fractional order dynamics. Extensions of models using the forward shift operator, state space models, as well as their non-linear Hammerstein-Wiener counterparts, are highlighted. We also discuss the need to extend the theory of electromagnetically excited networks to account for fractional order behaviour in the non-linear regime, by incorporating non-linear elements that capture the observed non-linearities. The proposed approach leads to the development of a range of new chemometrics tools for biomedical data analysis and classification.
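The abstract does not give a specific model; as a familiar point of reference (not taken from the paper), a widely used fractional-order dielectric response that such an identification model would need to reproduce is the Cole-Cole form, written here in one common convention:

```latex
\varepsilon(\omega) \;=\; \varepsilon_\infty \;+\; \frac{\varepsilon_s - \varepsilon_\infty}{1 + (i\omega\tau)^{\alpha}},
\qquad 0 < \alpha \le 1 ,
```

where α = 1 recovers the classical Debye relaxation and fractional α gives the broadened, "stretched" response typical of amorphous samples; fractional-order ARX or state-space structures generalise the integer-order operators accordingly.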
Abstract:
The prospective European Supergrid would consist of an integrated power system network, where electricity demand in one country could be met by generation from another country. This paper uses a bi-linear fixed-effects model to analyse the determinants of cross-border electricity trade among 34 countries connected by the European Supergrid. The key question this paper addresses is the extent to which the privatisation of European electricity markets has brought about higher cross-border trade of electricity. The analysis uses distance, price ratios, gate closure times, size of peaks and aggregate demand as standard determinants. Controlling for the other standard determinants, it is concluded that privatisation in most cases led to higher power exchange and that the benefits are more significant where privatisation measures have been in place for a longer period.
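The abstract does not state the exact specification; a gravity-style bilinear fixed-effects regression along the lines described might take a form such as the following, where all symbols are illustrative rather than the paper's own:

```latex
\ln(\mathrm{trade}_{ijt}) \;=\; \alpha_i + \alpha_j + \gamma_t
 \;+\; \beta_1 \ln(\mathrm{dist}_{ij})
 \;+\; \beta_2 \ln\!\left(\frac{p_{it}}{p_{jt}}\right)
 \;+\; \beta_3\, \mathrm{priv}_{ijt}
 \;+\; \mathbf{x}_{ijt}'\boldsymbol{\delta}
 \;+\; \varepsilon_{ijt},
```

with exporter, importer and time fixed effects, bilateral distance, the price ratio, a privatisation indicator, and the remaining standard determinants (gate closure times, size of peaks, aggregate demand) collected in x.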
Abstract:
We examine the internal equity financing of the multinational subsidiary which retains and reinvests its own earnings. Internal equity financing is a type of firm-specific advantage (FSA) along with other traditional FSAs in innovation, research and development, brands and management skills. It also reflects subsidiary-level financial management decision-making. Here we test the contributions of internal equity financing and subsidiary-level financial management decision-making to subsidiary performance, using original survey data from British multinational subsidiaries in six emerging countries in the South East Asia region. Our first finding is that internal equity financing acts as an FSA to improve subsidiary performance. Our second finding is that over 90% of financing sources (including capital investment by the parent firms) in the British subsidiaries come from internal funding. Our third finding is that subsidiary-level financial management decision-making has a statistically significant positive impact on subsidiary performance. Our findings advance the theoretical, empirical and managerial analysis of subsidiary performance in emerging economies.
Abstract:
The quantification of uncertainty is an increasingly popular topic, with clear importance for climate change policy. However, uncertainty assessments are open to a range of interpretations, each of which may lead to a different policy recommendation. In the EQUIP project researchers from the UK climate modelling, statistical modelling, and impacts communities worked together on ‘end-to-end’ uncertainty assessments of climate change and its impacts. Here, we use an experiment in peer review amongst project members to assess variation in the assessment of uncertainties between EQUIP researchers. We find overall agreement on key sources of uncertainty but a large variation in the assessment of the methods used for uncertainty assessment. Results show that communication aimed at specialists makes the methods used harder to assess. There is also evidence of individual bias, which is partially attributable to disciplinary backgrounds. However, varying views on the methods used to quantify uncertainty did not preclude consensus on the consequential results produced using those methods. Based on our analysis, we make recommendations for developing and presenting statements on climate and its impacts. These include the use of a common uncertainty reporting format in order to make assumptions clear; presentation of results in terms of processes and trade-offs rather than only numerical ranges; and reporting multiple assessments of uncertainty in order to elucidate a more complete picture of impacts and their uncertainties. This in turn implies research should be done by teams of people with a range of backgrounds and time for interaction and discussion, with fewer but more comprehensive outputs in which the range of opinions is recorded.
Abstract:
It is often assumed on the basis of single-parcel energetics that compressible effects and conversions with internal energy are negligible whenever typical displacements of fluid parcels are small relative to the scale height of the fluid (defined as the ratio of the squared speed of sound over gravitational acceleration). This paper shows that the above approach is flawed, however, and that a correct assessment of compressible effects and internal energy conversions requires considering the energetics of at least two parcels or, more generally, of mass-conserving parcel re-arrangements. As a consequence, it is shown that it is the adiabatic lapse rate and its derivative with respect to pressure, rather than the scale height, which controls the relative importance of compressible effects and internal energy conversions when considering the global energy budget of a stratified fluid. Only when mass conservation is properly accounted for is it possible to explain why available internal energy can account for up to 40 percent of the total available potential energy in the oceans. This is considerably larger than the prediction of single-parcel energetics, according to which this number should be no more than about 2 percent.
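For reference, in standard notation (not the paper's own), the two quantities being contrasted are the scale height defined above and the adiabatic lapse rate expressed in pressure coordinates:

```latex
H_s \;=\; \frac{c_s^2}{g},
\qquad
\Gamma \;\equiv\; \left(\frac{\partial T}{\partial p}\right)_{\!S} \;=\; \frac{\alpha T}{\rho\, c_p},
```

where c_s is the speed of sound, g the gravitational acceleration, α the thermal expansion coefficient, ρ the density and c_p the specific heat at constant pressure. The paper's argument is that Γ and its pressure derivative, rather than H_s, set the size of compressible effects and internal-energy conversions in the global budget.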
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
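The two reported metrics are multi-class accuracy and an area under the ROC curve. As a minimal sketch (not the challenge's own evaluation code), they can be computed with scikit-learn, here using a one-vs-rest averaged AUC and purely hypothetical predictions for a blinded test set of 354 scans:

```python
# Minimal sketch (not the challenge's evaluation code): multi-class accuracy
# and a one-vs-rest averaged AUC for the three diagnostic groups.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

labels = ["AD", "MCI", "CN"]            # probable AD, MCI, healthy controls

# Hypothetical predictions:
#   y_true : true diagnoses, shape (n_subjects,)
#   y_prob : class-probability matrix, shape (n_subjects, 3), columns in `labels` order
rng = np.random.default_rng(0)
y_true = rng.choice(labels, size=354)
y_prob = rng.dirichlet(np.ones(3), size=354)

y_pred = np.array(labels)[y_prob.argmax(axis=1)]
acc = accuracy_score(y_true, y_pred)

# One common multi-class generalisation of the AUC: one-vs-rest, macro-averaged.
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", labels=labels)

print(f"accuracy = {acc:.3f}, AUC (OvR) = {auc:.3f}")
```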
Abstract:
Given a dataset of two-dimensional points in the plane with integer coordinates, the proposed method reduces a set of n points to a set of s points, s ≤ n, such that the convex hull of the s points is the same as the convex hull of the original n points. The method runs in O(n) time and helps any convex hull algorithm run faster. The empirical analysis of a practical case shows a reduction of over 98% in the number of points, which is reflected in faster computation, with a speedup factor of at least 4.
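The abstract does not describe the construction, so the paper's exact filter may differ; a well-known O(n) pre-filter of this kind is the Akl-Toussaint heuristic: take a few extreme points, then discard every point strictly inside the convex polygon they form, since such points can never be hull vertices. A minimal sketch using the quadrilateral of the four axis-extreme points:

```python
# Sketch of an O(n) convex-hull pre-filter in the spirit described
# (an Akl-Toussaint-style heuristic; the paper's construction may differ).
from typing import List, Tuple

Point = Tuple[int, int]

def cross(o: Point, a: Point, b: Point) -> int:
    """Cross product of OA x OB; > 0 means B lies strictly left of ray OA."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def prefilter(points: List[Point]) -> List[Point]:
    """Drop points strictly inside the quadrilateral spanned by the leftmost,
    bottom, rightmost and top points; the convex hull is unchanged."""
    if len(points) < 5:
        return list(points)
    left   = min(points, key=lambda p: p[0])
    right  = max(points, key=lambda p: p[0])
    bottom = min(points, key=lambda p: p[1])
    top    = max(points, key=lambda p: p[1])
    quad = [left, bottom, right, top]          # counter-clockwise order

    def strictly_inside(p: Point) -> bool:
        return all(cross(quad[i], quad[(i + 1) % 4], p) > 0 for i in range(4))

    return [p for p in points if not strictly_inside(p)]
```

Any standard hull algorithm can then be run on the reduced point set; boundary points are kept, so the result is conservative.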
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by the significant bottleneck of lengthy optimization runtime. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, whose results yield a significant, highly scalable and nearly linear speedup of up to 6.9 and 14.5 on distributed 8-core and 16-core systems respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterparts. This is a milestone in realizing DNO. Furthermore, the techniques may be applied to other applications based on similar greedy optimization algorithms.
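The paper's algorithms and data are not shown here; as a generic, hypothetical illustration of the parallelism available in a greedy loop, each iteration can evaluate all candidate moves concurrently and then apply the best one, for example with a process pool:

```python
# Generic sketch (not the paper's implementation): parallelising the candidate
# evaluation step of a greedy optimisation loop across worker processes.
from multiprocessing import Pool

def evaluate(candidate):
    """Hypothetical, expensive cost evaluation of one candidate change."""
    cost = sum((candidate - k) ** 2 for k in range(1000))
    return cost, candidate

def greedy_optimise(candidates, n_iters=10, workers=8):
    chosen, remaining = [], list(candidates)
    with Pool(processes=workers) as pool:
        for _ in range(n_iters):
            if not remaining:
                break
            # Evaluate all remaining candidates in parallel, then apply the best.
            scored = pool.map(evaluate, remaining)
            _, best = min(scored)
            chosen.append(best)
            remaining.remove(best)
    return chosen

if __name__ == "__main__":
    print(greedy_optimise(range(100)))
```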
Abstract:
It has been years since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still in its infancy, largely due to the lack of a breakthrough in reducing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cater for the increased optimization complexity brought about by the uptake of smartphones and tablets, however, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, whose results demonstrate an improved speedup of 7.5 times on a 16-core distributed system, compared with 6.1 times for our previous solution. Moreover, there is no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
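A hypothetical sketch of the asynchronous idea (not the paper's design): instead of a synchronous map with a barrier at every iteration, workers pull tasks from a queue and push results back as they finish, so no process idles and inter-process traffic is limited to small task/result messages:

```python
# Hypothetical asynchronous pattern: queue-based task pull and result push,
# avoiding per-iteration barriers and keeping inter-process communication small.
from multiprocessing import Process, Queue

def worker(tasks: Queue, results: Queue):
    for task in iter(tasks.get, None):                      # stop on None sentinel
        cost = sum((task - k) ** 2 for k in range(1000))    # stand-in evaluation
        results.put((task, cost))

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    n_workers, n_tasks = 4, 100

    procs = [Process(target=worker, args=(tasks, results)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    for t in range(n_tasks):
        tasks.put(t)
    for _ in procs:                                         # one sentinel per worker
        tasks.put(None)

    # Consume results as they arrive, in whatever order the workers finish.
    best = min((results.get() for _ in range(n_tasks)), key=lambda pair: pair[1])
    for p in procs:
        p.join()
    print("best candidate, cost:", best)
```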
Abstract:
The North Atlantic Ocean subpolar gyre (NA SPG) is an important region for initialising decadal climate forecasts. Climate model simulations and palaeo climate reconstructions have indicated that this region could also exhibit large, internally generated variability on decadal timescales. Understanding these modes of variability, their consistency across models, and the conditions in which they exist, is clearly important for improving the skill of decadal predictions — particularly when these predictions are made with the same underlying climate models. Here we describe and analyse a mode of internal variability in the NA SPG in a state-of-the-art, high resolution, coupled climate model. This mode has a period of 17 years and explains 15–30% of the annual variance in related ocean indices. It arises due to the advection of heat content anomalies around the NA SPG. Anomalous circulation drives the variability in the southern half of the NA SPG, whilst mean circulation and anomalous temperatures are important in the northern half. A negative feedback between Labrador Sea temperatures/densities and those in the North Atlantic Current is identified, which allows for the phase reversal. The atmosphere is found to act as a positive feedback on to this mode via the North Atlantic Oscillation which itself exhibits a spectral peak at 17 years. Decadal ocean density changes associated with this mode are driven by variations in temperature, rather than salinity — a point which models often disagree on and which we suggest may affect the veracity of the underlying assumptions of anomaly-assimilating decadal prediction methodologies.
Abstract:
A theoretically expected consequence of the intensification of the hydrological cycle under global warming is that on average, wet regions get wetter and dry regions get drier (WWDD). Recent studies, however, have found significant discrepancies between the expected pattern of change and observed changes over land. We assess the WWDD theory in four climate models. We find that the reported discrepancy can be traced to two main issues: (1) unforced internal climate variability strongly affects local wetness and dryness trends and can obscure underlying agreement with WWDD, and (2) dry land regions are not constrained to become drier by enhanced moisture divergence since evaporation cannot exceed precipitation over multiannual time scales. Over land, where the available water does not limit evaporation, a “wet gets wetter” signal predominates. On seasonal time scales, where evaporation can exceed precipitation, trends in wet season becoming wetter and dry season becoming drier are also found.
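The constraint invoked here is simply the long-term land water balance; in standard notation (not the paper's own),

```latex
\overline{P} - \overline{E} \;=\; \overline{R} \;+\; \frac{\Delta S}{\Delta t}
\;\approx\; \overline{R} \;\ge\; 0 ,
```

where overbars denote multiannual means over a land region, R is runoff and ΔS the change in terrestrial water storage. Storage changes average out over multiannual periods and runoff is non-negative, so evaporation cannot exceed precipitation over land; within a single season, storage changes allow E > P, which is why the seasonal trends behave differently.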
Abstract:
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, the available evolutionary methods are numerous, and methodologies to compare their performance beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while considering both the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a Pseudo-Genetic Algorithm (PGA), Particle Swarm Optimization (PSO), Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient for less complex problems. The main contribution of this work, however, is that the proposed efficiency rate provides a neutral strategy for comparing optimization algorithms and may be useful in the future for selecting the most appropriate algorithm for different types of optimization problems.
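The abstract does not give the formula for E; as a purely illustrative stand-in, an efficiency measure of this kind could combine solution quality (cost relative to the best-known design) with computational effort (number of objective-function evaluations), for example:

```python
# Purely illustrative stand-in for an efficiency rate E (the paper's actual
# definition is not given in the abstract): reward runs that approach the
# best-known cost while using few objective-function evaluations.
def efficiency_rate(best_known_cost: float, achieved_cost: float,
                    evaluations: int) -> float:
    quality = best_known_cost / achieved_cost    # 1.0 means the optimum was reached
    effort = 1.0 / evaluations                   # fewer evaluations -> higher score
    return quality * effort

# Hypothetical comparison of two runs on the same benchmark network:
print(efficiency_rate(1.00e6, 1.05e6, 40_000))   # slightly worse solution, cheaper run
print(efficiency_rate(1.00e6, 1.00e6, 120_000))  # optimal solution, costlier run
```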