974 results for Symmetry Ratio Algorithm
Abstract:
By eliminating the short-range negative divergence of the Debye–Hückel pair distribution function, while retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree to within 5% with simulation data for typical Coulomb interaction strengths up to ten times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of the cross-correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
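The modification described can be sketched schematically; the notation below (charge q, inverse temperature β, inverse screening length κ) is assumed rather than taken from the paper, so this is a plausible reconstruction of the idea, not the paper's exact formulation. The linearized Debye–Hückel pair distribution diverges negatively at small separations, and re-exponentiating it removes the divergence while preserving the screened tail:

```latex
g_{\mathrm{DH}}(r) \;=\; 1 - \frac{\beta q^{2}}{r}\,e^{-\kappa r}
\;\xrightarrow{\;r\to 0\;}\; -\infty ,
\qquad
g(r) \;=\; \exp\!\left[-\,\frac{\beta q^{2}}{r}\,e^{-\kappa r}\right] \;\ge\; 0 ,
```

with g(r) → g_DH(r) wherever the exponent is small, i.e. at large r, so the large-separation screening behaviour is unchanged.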
Abstract:
In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters of the elastic net are optimized at the upper level by a particle swarm optimization (PSO) algorithm that minimizes the leave-one-out (LOO) mean square error (LOOMSE). Illustrative examples are included to demonstrate the effectiveness of the new approach.
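The two-level structure can be sketched with a toy stand-in: a plain coordinate-descent elastic net in place of ENOFR (which additionally performs orthogonal forward model selection), a brute-force leave-one-out refit for the LOOMSE criterion, and a minimal PSO over the two regularization parameters. All function names, swarm sizes and rates below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def elastic_net(X, y, lam1, lam2, n_sweeps=50):
    """Toy coordinate-descent elastic net (stand-in for the ENOFR step)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]                 # partial residual
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n + lam2               # L2 part of the penalty
            w[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / z  # L1 soft threshold
    return w

def loomse(X, y, lam1, lam2):
    """Leave-one-out mean square error by brute-force refitting."""
    n = len(y)
    se = 0.0
    for i in range(n):
        m = np.ones(n, dtype=bool)
        m[i] = False
        w = elastic_net(X[m], y[m], lam1, lam2)
        se += (y[i] - X[i] @ w) ** 2
    return se / n

def pso(f, lo, hi, n_part=6, n_iter=6, seed=0):
    """Minimal particle swarm over the two regularization parameters."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (n_part, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pval = np.array([f(*p) for p in pos])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_part, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        for k in range(n_part):
            v = f(*pos[k])
            if v < pval[k]:                                # update personal best
                pbest[k], pval[k] = pos[k].copy(), v
        gbest = pbest[pval.argmin()].copy()                # update global best
    return gbest, float(pval.min())
```

The upper level simply calls `pso(lambda l1, l2: loomse(X, y, l1, l2), 0.0, 1.0)`; each swarm evaluation triggers a full lower-level fit per held-out sample.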
Abstract:
Carbon and nitrogen stable isotope ratios were measured in 157 fish bone collagen samples from 15 different archaeological sites in Belgium, ranging in age from the 3rd to the 18th century AD. Owing to diagenetic contamination from the burial environment, only 63 specimens produced results with suitable C:N ratios (2.9–3.6). The selected bones encompass a wide spectrum of freshwater, brackish, and marine taxa (N = 18), and this is reflected in the δ13C results (−28.2‰ to −12.9‰). The freshwater fish have δ13C values that range from −28.2‰ to −20.2‰, while the marine fish cluster between −15.4‰ and −13.0‰. Eel, a catadromous species (mostly living in freshwater but migrating into the sea to spawn), plots between −24.1‰ and −17.7‰, and the anadromous fish (living in marine environments but migrating into freshwater to spawn) show a mix of freshwater and marine isotopic signatures. The δ15N results also span a large range (7.2‰ to 16.7‰), indicating that these fish were feeding at many different trophic levels in these diverse aquatic environments. The aims of this research are the isotopic characterization of archaeological fish species (ecology, trophic level, migration patterns) and the determination of intra-species variation within and between fish populations differing in time and location. Given the previous lack of archaeological fish isotope data from Northern Europe, and Belgium in particular, these results serve as an important ecological backdrop for future isotopic reconstructions of the diet of human populations dating from the historical period (1st and 2nd millennium AD), for which there is zooarchaeological and historical evidence of an increased consumption of marine fish.
Abstract:
Advances in hardware and software over the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time, as soon as it is captured: for example, if the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drifts. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
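The add/remove rule lifecycle and the preference for abstaining over guessing can be illustrated with a deliberately simplified sketch. This is not the actual eRules algorithm (which induces rules with a Prism-style learner and handles drift more carefully); the class name, thresholds, and the choice of maximally specific rules are assumptions for illustration only.

```python
class TinyRuleStream:
    """Toy rule-based stream classifier: rules are sets of (feature, value)
    conditions predicting a class. Rules whose running accuracy drops below
    a threshold are removed; instances covered by no rule spawn a new rule;
    prediction abstains (returns None) rather than forcing a guess."""

    def __init__(self, min_acc=0.6, min_seen=5):
        self.rules = {}          # frozenset of conditions -> [label, hits, fires]
        self.min_acc = min_acc
        self.min_seen = min_seen

    def predict(self, x):
        for cond, (label, _hits, _fires) in self.rules.items():
            if all(x.get(f) == v for f, v in cond):
                return label
        return None              # abstain rather than guess

    def learn(self, x, y):
        fired = False
        for cond in list(self.rules):
            label, hits, fires = self.rules[cond]
            if all(x.get(f) == v for f, v in cond):
                fired = True
                fires += 1
                hits += int(label == y)
                if fires >= self.min_seen and hits / fires < self.min_acc:
                    del self.rules[cond]          # remove a rule that stopped working
                else:
                    self.rules[cond] = [label, hits, fires]
        if not fired:
            # add a maximally specific rule covering the new instance
            self.rules[frozenset(x.items())] = [y, 1, 1]
```

Feeding the classifier a labelled stream instance-by-instance via `learn` keeps the rule set current; a drifted concept makes old rules fail their accuracy check and get replaced.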
Abstract:
This contribution introduces a new digital predistorter to compensate for the serious distortions caused by memory high power amplifiers (HPAs) that exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both the B-spline curve and first-derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse of the De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inverse can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
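The key inversion step, solving the static AM/AM nonlinearity for the drive level that produces a desired output amplitude, can be sketched with Newton-Raphson. The saturating curve below is a hypothetical Rapp-model stand-in for the identified B-spline nonlinearity, and the derivative is estimated numerically here, whereas the paper obtains it analytically from the De Boor first-derivative recursion.

```python
def rapp(a, sat=1.0, p=2.0):
    """Hypothetical saturating AM/AM curve (Rapp model), standing in for
    the B-spline static nonlinearity identified in the paper."""
    return a / (1.0 + (a / sat) ** (2 * p)) ** (1.0 / (2 * p))

def invert_newton(f, y, x0=0.5, tol=1e-12, max_iter=60, h=1e-6):
    """Newton-Raphson solution of f(x) = y for a monotone nonlinearity,
    using a central-difference derivative estimate."""
    x = x0
    for _ in range(max_iter):
        err = f(x) - y
        slope = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= err / slope                     # Newton update
        if abs(err / slope) < tol:
            break
    return x
```

Driving the amplifier with `x = invert_newton(rapp, y)` then gives `rapp(x) ≈ y`, i.e. a linearized overall amplitude response, which is the role the Hammerstein predistorter's static block plays in the cascade.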
Abstract:
The enhanced radar return associated with melting snow, ‘the bright band’, can lead to large overestimates of rain rates. Most correction schemes rely on fitting the radar observations to a vertical profile of reflectivity (VPR) which includes the bright band enhancement. Observations show that the VPR is very variable in space and time; large enhancements occur for melting snow, but none for the melting graupel in embedded convection. Applying a bright band VPR correction to a region of embedded convection will lead to a severe underestimate of rainfall. We revive an earlier suggestion that high values of the linear depolarisation ratio (LDR) are an excellent means of detecting when bright band contamination is occurring and that the value of LDR may be used to correct the value of Z in the bright band.
Abstract:
Evolutionary meta-algorithms for pulse shaping of broadband femtosecond-duration laser pulses are proposed. The genetic algorithm searching the evolutionary landscape for desired pulse shapes consists of a population of waveforms (genes), each made from two concatenated vectors specifying phases and magnitudes, respectively, over a range of frequencies. Frequency-domain operators such as mutation, two-point crossover, average crossover, polynomial phase mutation, creep and three-point smoothing, as well as a time-domain crossover, are combined to produce fitter offspring at each iteration. The algorithm applies roulette-wheel selection, elitism and linear fitness scaling to the gene population. A differential evolution (DE) operator that provides a source of directed mutation, together with new wavelet operators, is proposed. Using properly tuned parameters for DE, the meta-algorithm is used to solve a waveform matching problem. Tuning allows either a greedy directed search near the best known solution or a robust search across the entire parameter space.
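A stripped-down version of such a search, using phase genes only, roulette-wheel selection on shifted fitness, two-point crossover, Gaussian phase mutation and elitism, might look like this. The pulse model, population sizes and rates are illustrative assumptions; the paper's full operator set (magnitude genes, creep, smoothing, DE, wavelet operators) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FREQ, POP, GENS = 8, 40, 60
t = np.linspace(0.0, 2.0 * np.pi, 128)

def waveform(phases):
    """Pulse synthesized from fixed-magnitude spectral components."""
    k = np.arange(1, N_FREQ + 1)[:, None]
    return np.cos(k * t + phases[:, None]).sum(axis=0)

target = waveform(rng.uniform(0.0, 2.0 * np.pi, N_FREQ))  # waveform to match

def fitness(phases):
    return -np.mean((waveform(phases) - target) ** 2)

pop = rng.uniform(0.0, 2.0 * np.pi, (POP, N_FREQ))
init_best = max(fitness(p) for p in pop)
for _ in range(GENS):
    fit = np.array([fitness(p) for p in pop])
    weights = fit - fit.min() + 1e-9                      # shift for roulette wheel
    idx = rng.choice(POP, size=POP, p=weights / weights.sum())
    parents = pop[idx]
    children = parents.copy()
    for i in range(0, POP - 1, 2):                        # two-point crossover
        a, b = sorted(rng.integers(0, N_FREQ, 2))
        children[i, a:b], children[i + 1, a:b] = (
            parents[i + 1, a:b].copy(), parents[i, a:b].copy())
    mask = rng.random(children.shape) < 0.05              # Gaussian phase mutation
    children[mask] += rng.normal(0.0, 0.3, mask.sum())
    children[0] = pop[fit.argmax()]                       # elitism: keep the best
    pop = children
best = max(fitness(p) for p in pop)
```

Because the elite gene is copied unmutated into each new generation, the best fitness in the population never decreases over the run.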
Abstract:
This paper studies the signalling effect of the consumption-wealth ratio (cay) on German stock returns via vector error correction models (VECMs). The effect of cay on U.S. stock returns has recently been confirmed by Lettau and Ludvigson using a two-stage method. In this paper, the performance of the VECMs and the two-stage method is compared on both German and U.S. data. It is found that the VECMs are more suitable than the two-stage method for studying the effect of cay on stock returns. Using the Conditional-Subset VECM, cay signals real stock returns and excess returns significantly in both data sets. The estimated coefficient on cay for stock returns turns out to be about twice as large in U.S. data as in German data. When the two-stage method is used, cay has no significant effect on German stock returns. It is also found that cay significantly signals German wealth growth and U.S. income growth.
Abstract:
This paper analyses a pervasive computing system for tracking people in a mining environment based on RFID (radio frequency identification) technology. First, we explain the RFID fundamentals and the LANDMARC (location identification based on dynamic active RFID calibration) algorithm. We then present the proposed algorithm, which combines LANDMARC with a trilateration technique to estimate the coordinates of people inside the mine; next, we generalize the approach to a pervasive computing system that can be implemented in mining; and finally, we present the results and conclusions.
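The LANDMARC stage of such a scheme can be sketched compactly: the tracked tag's RSSI vector (one reading per reader) is compared with those of fixed reference tags, and the estimate is a weighted centroid of the k most similar references. The reader layout, path-loss model and weighting below are illustrative assumptions, and the paper's trilateration hybrid is not reproduced here.

```python
import numpy as np

def landmarc_locate(tag_rss, ref_rss, ref_pos, k=4):
    """LANDMARC location estimate: find the k reference tags whose RSSI
    vectors are closest to the tracked tag's in signal space, then return
    their weighted centroid (weights proportional to 1/E^2)."""
    E = np.linalg.norm(ref_rss - tag_rss, axis=1)   # signal-space distances
    nearest = np.argsort(E)[:k]
    w = 1.0 / (E[nearest] ** 2 + 1e-12)             # closer references weigh more
    return (w / w.sum()) @ ref_pos[nearest]
```

In a mine gallery the "readers" would be fixed interrogators and the reference tags calibration tags at surveyed positions; the method needs no explicit propagation model because the references experience the same environment as the tracked tag.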
Abstract:
We discuss the modeling of dielectric responses of electromagnetically excited networks which are composed of a mixture of capacitors and resistors. Such networks can be employed as lumped-parameter circuits to model the response of composite materials containing conductive and insulating grains. The dynamics of the excited network systems are studied using a state space model derived from a randomized incidence matrix. Time and frequency domain responses from synthetic data sets generated from state space models are analyzed for the purpose of estimating the fraction of capacitors in the network. Good results were obtained by using either the time-domain response to a pulse excitation or impedance data at selected frequencies. A chemometric framework based on a Successive Projections Algorithm (SPA) enables the construction of multiple linear regression (MLR) models which can efficiently determine the ratio of conductive to insulating components in composite material samples. The proposed method avoids restrictions commonly associated with Archie’s law, the application of percolation theory or Kohlrausch-Williams-Watts models and is applicable to experimental results generated by either time domain transient spectrometers or continuous-wave instruments. Furthermore, it is quite generic and applicable to tomography, acoustics as well as other spectroscopies such as nuclear magnetic resonance, electron paramagnetic resonance and, therefore, should be of general interest across the dielectrics community.
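The variable-selection step at the heart of the chemometric framework can be sketched as follows. This is a generic SPA implementation over an arbitrary data matrix, not the paper's pipeline; the starting column and the number of variables to select are assumptions the user would normally tune.

```python
import numpy as np

def spa_select(X, n_vars, start=0):
    """Successive Projections Algorithm: greedily pick the column with the
    largest norm after projecting out the columns already selected, which
    minimizes collinearity among the chosen predictor variables."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(n_vars - 1):
        v = Xp[:, selected[-1]]
        proj = np.outer(v, v) / (v @ v)     # projector onto the last pick
        Xp = Xp - proj @ Xp                 # deflate every remaining column
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0              # never re-pick a column
        selected.append(int(norms.argmax()))
    return selected
```

A multiple linear regression model is then fitted on the selected columns only, e.g. `np.linalg.lstsq(X[:, sel], y, rcond=None)`, which is the MLR step the abstract pairs with SPA.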
Abstract:
For Northern Hemisphere extra-tropical cyclone activity, the dependency of a potential anthropogenic climate change signal on the identification method applied is analysed. This study investigates the impact of the algorithm used on the climate change signal, not the robustness of the signal itself. Using a single transient AOGCM simulation as standard input for eleven state-of-the-art identification methods, the patterns of the model-simulated present-day climatologies are found to be close to those computed from re-analysis, independent of the method applied. Although differences exist in the total number of cyclones identified, the climate change signals (IPCC SRES A1B) in the model run considered are largely similar between methods for all cyclones. Taking all tracks into account, decreasing numbers are found in the Mediterranean, the Arctic (Barents and Greenland Seas), the mid-latitude Pacific and North America. The changing patterns are even more similar if only the most severe systems are considered: the methods reveal a coherent, statistically significant increase in frequency over the eastern North Atlantic and North Pacific. We find that the differences between the methods considered are largely due to the different roles of weaker systems in the specific methods.
Abstract:
Northern Hemisphere cyclone activity is assessed by applying an algorithm for the detection and tracking of synoptic-scale cyclones to mean sea level pressure data. The method, originally developed for the Southern Hemisphere, is adapted for application to the Northern Hemisphere winter season. NCEP reanalysis data from 1958/59 to 1997/98 are used as input. The sensitivities of the results to particular parameters of the algorithm are discussed both for case studies and from a climatological point of view. Results show that the choice of settings is of major relevance, especially for the tracking of smaller-scale and fast-moving systems. With an appropriate setting, the algorithm is capable of automatically tracking different types of cyclones at the same time: both fast-moving and developing systems over the large ocean basins and smaller-scale cyclones over the Mediterranean basin can be assessed. The climatology of cyclone variables, e.g., cyclone track density, cyclone counts, intensification rates, propagation speeds and areas of cyclogenesis and cyclolysis, gives detailed information on typical cyclone life cycles for different regions. Lowering the spatial and temporal resolution of the input data from the full resolution T62/06h to T42/12h decreases the cyclone track density and cyclone counts. Reducing the temporal resolution alone contributes to a decline in the number of fast-moving systems, which is relevant for the cyclone track density. Lowering the spatial resolution alone mainly reduces the number of weak cyclones.
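The detect-then-track structure of such algorithms can be illustrated with a minimal sketch: find strict local minima of the pressure field, then link minima across consecutive time steps by greedy nearest-neighbour matching within a maximum step distance. This toy version omits everything that makes the real scheme robust (spatial smoothing, intensity thresholds, track termination, splitting/merging); grid sizes and the `max_step` parameter are assumptions.

```python
import numpy as np

def find_minima(p):
    """Interior grid points lower than all 8 neighbours (candidate centres)."""
    c = p[1:-1, 1:-1]
    lower = np.ones_like(c, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                lower &= c < p[1 + di:p.shape[0] - 1 + di,
                               1 + dj:p.shape[1] - 1 + dj]
    ii, jj = np.nonzero(lower)
    return list(zip(ii + 1, jj + 1))

def track(frames, max_step=3.0):
    """Greedy nearest-neighbour linking of pressure minima across time steps."""
    tracks = [[m] for m in find_minima(frames[0])]
    for f in frames[1:]:
        minima = find_minima(f)
        for tr in tracks:
            if not minima:
                break
            last = np.array(tr[-1], dtype=float)
            d = [np.hypot(*(np.array(m) - last)) for m in minima]
            k = int(np.argmin(d))
            if d[k] <= max_step:                 # extend the track
                tr.append(minima.pop(k))
        tracks += [[m] for m in minima]          # unmatched minima start new tracks
    return tracks
```

Tightening `max_step` relative to the time step is the toy analogue of the sensitivity discussed in the abstract: too small and fast-moving systems break into separate short tracks, too large and distinct systems get spuriously joined.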
Abstract:
In addition to the Hamiltonian functional itself, non-canonical Hamiltonian dynamical systems generally possess integral invariants known as 'Casimir functionals'. In the case of the Euler equations for a perfect fluid, the Casimir functionals correspond to the vortex topology, whose invariance derives from the particle-relabelling symmetry of the underlying Lagrangian equations of motion. In a recent paper, Vallis, Carnevale & Young (1989) presented algorithms for finding steady states of the Euler equations that represent extrema of energy subject to given vortex topology, and are therefore stable. The purpose of this note is to point out a very general method for modifying any Hamiltonian dynamical system into an algorithm that is analogous to those of Vallis et al. in that it will systematically increase or decrease the energy of the system while preserving all of the Casimir invariants. By incorporating momentum into the extremization procedure, the algorithm is able to find steadily translating as well as steady stable states. The method is applied to a variety of perfect-fluid systems, including Euler flow as well as compressible and incompressible stratified flow.
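The general construction can be sketched abstractly; the symbols below are assumed for illustration and are not quoted from the paper. Write the non-canonical system as ẋ = J∇H with J(x) antisymmetric, so that every Casimir C satisfies J∇C = 0. Adding a term built from the same J then changes the energy monotonically while leaving all Casimirs untouched:

```latex
\dot{x} \;=\; J\nabla H
\;\longrightarrow\;
\dot{x} \;=\; J\nabla H \;+\; \alpha\, J M J \nabla H ,
\qquad M = M^{\mathsf T} \succ 0 .
% Energy changes monotonically (using J^T = -J):
\dot{H} \;=\; \nabla H^{\mathsf T}\dot{x}
       \;=\; \alpha\,\nabla H^{\mathsf T} J M J \nabla H
       \;=\; -\,\alpha\,(J\nabla H)^{\mathsf T} M\,(J\nabla H) \;\le\; 0 ,
% while every Casimir is preserved, since \nabla C^{\mathsf T} J = 0:
\dot{C} \;=\; \nabla C^{\mathsf T}\dot{x} \;=\; 0 .
```

Reversing the sign of α makes the energy increase instead, and the modified flow stops only where J∇H = 0, i.e. at a steady state of the original system on the given Casimir leaf.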
Abstract:
The effects of forage conservation method on plasma lipids, mammary lipogenesis, and milk fat were examined in 2 complementary experiments. Treatments comprised fresh grass, hay, or untreated (UTS) or formic acid treated silage (FAS) prepared from the same grass sward. Preparation of conserved forages coincided with the collection of samples from cows fed fresh grass. In the first experiment, 5 multiparous Finnish Ayrshire cows (229 d in milk) were used to compare a diet based on fresh grass followed by hay during 2 consecutive 14-d periods, separated by a 5-d transition during which extensively wilted grass was fed. In the second experiment, 5 multiparous Finnish Ayrshire cows (53 d in milk) were assigned to 1 of 2 blocks and allocated treatments according to a replicated 3 × 3 Latin square design, with 14-d periods to compare hay, UTS, and FAS. Cows received 7 or 9 kg/d of the same concentrate in experiments 1 and 2, respectively. Arterial concentrations of triacylglycerol (TAG) and phospholipid were higher in cows fed fresh grass, UTS, and FAS compared with hay. Nonesterified fatty acid (NEFA) concentrations and the relative abundance of 18:2n-6 and 18:3n-3 in TAG of arterial blood were also higher in cows fed fresh grass than conserved forages. On all diets, TAG was the principal source of fatty acids (FA) for milk fat synthesis, whereas mammary extraction of NEFA was negligible, except during zero-grazing, which was associated with a lower, albeit positive, calculated energy balance. Mammary FA uptake was higher and the synthesis of 16:0 lower in cows fed fresh grass than hay. Conservation of grass by drying or ensiling had no influence on mammary extraction of TAG and NEFA, despite an increase in milk fat secretion for silages compared with hay and for FAS compared with UTS.
Relative to hay, milk fat from fresh grass contained lower 12:0, 14:0, and 16:0 and higher S3,R7,R11,15-tetramethyl-16:0, cis-9 18:1, trans-11 18:1, cis-9,trans-11 18:2, 18:2n-6, and 18:3n-3 concentrations. Even though conserved forages altered mammary lipogenesis, differences in milk FA composition were relatively minor, other than a higher enrichment of S3,R7,R11,15-tetramethyl-16:0 in milk from silages compared with hay. In conclusion, differences in milk fat composition on fresh grass relative to conserved forages were associated with a lower energy balance, increased uptake of preformed FA, and decreased synthesis of 16:0 de novo in the mammary glands, in the absence of alterations in stearoyl-coenzyme A desaturase activity.