63 results for Multi-model inference
Abstract:
This paper presents a statistical model for the thermal behaviour of the line, based on lab tests and field measurements. The model is based on Partial Least Squares (PLS) regression and is used for Dynamic Line Rating (DLR) in a wind-intensive area. DLR provides extra capacity to the line over the traditional seasonal static rating, which makes it possible to defer the need to reinforce the existing network or build new lines. The proposed PLS model has a number of appealing features: the model is linear, so it is straightforward to use for predicting the line rating for future periods from the available weather forecast. Unlike the available physical models, the proposed model does not require any physical parameters of the line, which avoids the inaccuracies resulting from errors and/or variations in these parameters. The developed model is compared with a physical model, the Cigre model, and shows very good accuracy in predicting the conductor temperature as well as in determining the line rating for future time periods.
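A minimal sketch of the general idea (not the authors' code or data): fit a linear PLS regression from weather measurements to conductor temperature, then reuse it with forecast weather. The feature set, coefficients, and synthetic data below are illustrative assumptions only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical predictors: ambient temperature [C], wind speed [m/s],
# wind direction [deg], solar radiation [W/m^2], line current [A].
X_train = rng.uniform([0, 0, 0, 0, 100], [35, 15, 360, 1000, 800], size=(500, 5))
# Hypothetical response: measured conductor temperature [C], with noise.
y_train = (X_train[:, 0] + 0.02 * X_train[:, 4] - 1.5 * X_train[:, 1]
           + 0.01 * X_train[:, 3] + rng.normal(0, 1, 500))

pls = PLSRegression(n_components=3)
pls.fit(X_train, y_train)

# Predict conductor temperature for a forecast period (hypothetical values).
X_forecast = np.array([[28.0, 6.0, 90.0, 750.0, 600.0]])
print("Predicted conductor temperature [C]:",
      float(np.ravel(pls.predict(X_forecast))[0]))
```

Because the fitted model is linear, the same call can be applied directly to forecast weather to estimate headroom over a static seasonal rating.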
Abstract:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to map the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
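A minimal sketch of the kind of computation such simulations repeat millions of times (not the paper's OpenCL kernels): a plain NumPy min-sum LDPC decoder run on one noisy codeword. The tiny parity-check matrix and channel parameters are illustrative assumptions.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def min_sum_decode(llr, H, max_iters=20):
    """Decode channel LLRs with the (unscaled) min-sum algorithm."""
    m, n = H.shape
    msg = np.zeros((m, n))                        # check-to-variable messages
    for _ in range(max_iters):
        total = llr + msg.sum(axis=0)             # variable-node update
        v2c = (total - msg) * H                   # variable-to-check messages
        for i in range(m):                        # check-node update
            idx = np.flatnonzero(H[i])
            vals = v2c[i, idx]
            sign_all = np.prod(np.sign(vals + 1e-12))
            for k, j in enumerate(idx):
                others = np.delete(vals, k)
                msg[i, j] = sign_all * np.sign(vals[k] + 1e-12) * np.min(np.abs(others))
        hard = (llr + msg.sum(axis=0) < 0).astype(int)
        if not np.any(H @ hard % 2):              # all parity checks satisfied
            break
    return hard

# All-zero codeword sent over an AWGN channel with BPSK (bit 0 -> +1).
rng = np.random.default_rng(1)
sigma = 0.8
received = 1.0 + sigma * rng.normal(size=H.shape[1])
llr = 2.0 * received / sigma**2
print("Decoded bits:", min_sum_decode(llr, H))
```

In an architectural exploration flow, the inner message-passing loops are what would be expressed as data-parallel kernels and mapped to CPU, GPU or FPGA targets.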
Abstract:
The expanding remnant from SN 1987A is an excellent laboratory for investigating the physics of supernova explosions. There are still a large number of outstanding questions, such as the reason for the asymmetric radio morphology, the structure of the pre-supernova environment, and the efficiency of particle acceleration at the supernova shock. We explore these questions using three-dimensional simulations of the expanding remnant between days 820 and 10,000 after the supernova. We combine a hydrodynamical simulation with semi-analytic treatments of diffusive shock acceleration and magnetic field amplification to derive radio emission as part of an inverse problem. Simulations show that an asymmetric explosion, combined with magnetic field amplification at the expanding shock, is able to replicate the persistent one-sided radio morphology of the remnant. We use an asymmetric Truelove & McKee progenitor with an envelope mass of 10 M⊙ and an energy of 1.5 × 10^44 J. A termination shock in the progenitor's stellar wind at a distance of 0″.43-0″.51 provides a good fit to the turn-on of radio emission around day 1200. For the H II region, a minimum distance of 0″.63 ± 0″.01 and a maximum particle number density of (7.11 ± 1.78) × 10^7 m^-3 produce a good fit to the evolving average radius and velocity of the expanding shocks from day 2000 to day 7000 after explosion. The model predicts a noticeable reduction, and possibly a temporary reversal, in the asymmetric radio morphology of the remnant after day 7000, when the forward shock has left the eastern lobe of the equatorial ring.
Abstract:
High Voltage Direct Current (HVDC) lines allow large quantities of power to be transferred between two points in an electrical power system. A Multi-Terminal HVDC (MTDC) grid consists of a meshed network of HVDC lines, and this allows energy reserves to be shared between a number of AC areas in an efficient manner. Secondary Frequency Control (SFC) algorithms return the frequencies in areas connected by AC or DC lines to their original setpoints after Primary Frequency Controllers have been called following a contingency. Where multiple Transmission System Operators (TSOs) are responsible for different parts of an MTDC grid, it may not be possible to implement SFC from a centralised location. Thus, in this paper a simple gain-based distributed Model Predictive Control strategy is proposed for Secondary Frequency Control of MTDC grids, which allows TSOs to cooperatively perform SFC without the need for centralised coordination.
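A minimal sketch of the setting (an assumption, not the paper's MPC controller): three AC areas coupled through an MTDC grid, where each TSO applies only a local gain-based integral correction and the HVDC links exchange power in proportion to frequency differences. All dynamics, gains, and the disturbance are illustrative.

```python
import numpy as np

n_areas = 3
M = np.array([10.0, 8.0, 12.0])      # inertia constants (illustrative)
D = np.array([1.0, 1.2, 0.9])        # load damping
k_sfc = 0.5                          # local integral (SFC) gain
k_dc = 2.0                           # gain on DC power sharing
adjacency = np.array([[0, 1, 1],     # MTDC connections between areas
                      [1, 0, 1],
                      [1, 1, 0]])

dt, T = 0.1, 600
f_dev = np.zeros(n_areas)            # frequency deviations [Hz]
p_sfc = np.zeros(n_areas)            # secondary control commands
p_load = np.array([0.3, 0.0, 0.0])   # step load increase in area 1

for step in range(int(T / dt)):
    # DC power exported by each area, driven by frequency differences.
    p_dc = k_dc * (adjacency * (f_dev[:, None] - f_dev[None, :])).sum(axis=1)
    # Local gain-based SFC: each TSO integrates only its own frequency error.
    p_sfc += -k_sfc * f_dev * dt
    # Swing-equation-like update of each area's frequency deviation.
    f_dev += dt / M * (p_sfc - p_load - p_dc - D * f_dev)

print("Final frequency deviations [Hz]:", np.round(f_dev, 4))
```

The sketch shows frequency deviations returning to zero without a central coordinator; the paper's contribution is to achieve this cooperatively with a distributed MPC formulation rather than the plain integral gains used here.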
Abstract:
In this paper we extend the minimum-cost network flow approach to multi-target tracking by incorporating a motion model, allowing the tracker to better cope with long-term occlusions and missed detections. In our new method, the tracking problem is solved iteratively: first, an initial tracking solution is found without the help of motion information. Given this initial set of tracklets, the motion at each detection is estimated and used to refine the tracking solution. Finally, special edges are added to the tracking graph, allowing a further revised tracking solution to be found, in which distant tracklets may be linked based on motion similarity. Our system has been tested on the PETS S2.L1 and Oxford town-center sequences, outperforming the baseline system and achieving results comparable with the current state of the art.
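A minimal sketch of the baseline formulation only (an assumption, not the authors' implementation): a toy two-frame, two-target data-association problem cast as a min-cost flow, with each detection split into an in/out node pair. The motion-model refinement and the special long-range edges described above are omitted.

```python
import networkx as nx

# (frame, x-position) for four hypothetical detections, two per frame.
detections = {"a0": (0, 10), "a1": (0, 50), "b0": (1, 12), "b1": (1, 49)}

n_tracks = 2
G = nx.DiGraph()
G.add_node("S", demand=-n_tracks)   # source supplies n_tracks units of flow
G.add_node("T", demand=n_tracks)    # sink absorbs them

for d in detections:
    # Observation edge: negative cost rewards explaining the detection.
    G.add_edge(f"{d}_in", f"{d}_out", capacity=1, weight=-10)
    G.add_edge("S", f"{d}_in", capacity=1, weight=0)    # track birth
    G.add_edge(f"{d}_out", "T", capacity=1, weight=0)   # track death

for d1, (t1, x1) in detections.items():
    for d2, (t2, x2) in detections.items():
        if t2 == t1 + 1:  # link edges between consecutive frames, cost = distance
            G.add_edge(f"{d1}_out", f"{d2}_in", capacity=1, weight=abs(x2 - x1))

flow = nx.min_cost_flow(G)
links = [(u, v) for u in flow for v, f in flow[u].items()
         if f > 0 and u.endswith("_out") and v.endswith("_in")]
print("Selected detection-to-detection links:", links)
```

In the extended method, the link costs would additionally reflect agreement with the per-detection motion estimates, and extra edges would allow distant tracklets to be joined.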
Abstract:
We study the sensitivity of a MAP configuration of a discrete probabilistic graphical model with respect to perturbations of its parameters. These perturbations are global, in the sense that simultaneous perturbations of all the parameters (or any chosen subset of them) are allowed. Our main contribution is an exact algorithm that can check whether the MAP configuration is robust with respect to given perturbations. Its complexity is essentially the same as that of obtaining the MAP configuration itself, so it can be promptly used with minimal effort. We use our algorithm to identify the largest global perturbation that does not induce a change in the MAP configuration, and we successfully apply this robustness measure in two practical scenarios: the prediction of facial action units with posed images and the classification of multiple real public data sets. A strong correlation between the proposed robustness measure and accuracy is verified in both scenarios.
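A minimal sketch of the robustness question on a toy model (an assumption, not the paper's exact algorithm, which answers it exactly and efficiently): brute-force checking whether the MAP configuration of a tiny three-variable chain survives random global multiplicative perturbations of all potential entries.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Pairwise potentials of a binary chain X1 - X2 - X3 (illustrative numbers).
phi12 = np.array([[2.0, 1.0], [1.0, 3.0]])
phi23 = np.array([[1.5, 2.5], [0.5, 2.0]])

def map_config(p12, p23):
    """MAP by enumeration of the 2^3 joint configurations."""
    return max(itertools.product([0, 1], repeat=3),
               key=lambda x: p12[x[0], x[1]] * p23[x[1], x[2]])

base_map = map_config(phi12, phi23)

def robust(eps, n_samples=2000):
    """Return False if some sampled global perturbation changes the MAP."""
    for _ in range(n_samples):
        d12 = 1.0 + rng.uniform(-eps, eps, phi12.shape)
        d23 = 1.0 + rng.uniform(-eps, eps, phi23.shape)
        if map_config(phi12 * d12, phi23 * d23) != base_map:
            return False
    return True

print("MAP configuration:", base_map)
for eps in [0.05, 0.2, 0.5]:
    print(f"robust to +/-{eps:.0%} multiplicative perturbation:", robust(eps))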
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
Abstract:
A credal network is a graph-theoretic model that represents imprecision in joint probability distributions. An inference in a credal net aims at computing an interval for the probability of an event of interest. Algorithms for inference in credal networks can be divided into exact and approximate. The selection of an algorithm is based on a trade-off between how much time one is willing to spend on a particular calculation and the quality of the computed values. This paper presents an algorithm, called IDS, that combines exact and approximate methods for computing inferences in polytree-shaped credal networks. The algorithm provides an approach for trading time against precision when making inferences in credal nets.
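A minimal sketch of what an interval-valued inference means (an assumption for illustration, unrelated to the internals of IDS): exact interval computation in a tiny two-node credal network A -> B by enumerating the extreme points of each interval-valued CPT. The numerical intervals are invented.

```python
from itertools import product

# P(A=1) lies in [0.3, 0.5]; P(B=1 | A=a) lies in the interval indexed by a.
p_a1_ext = [0.3, 0.5]
p_b1_given_a_ext = {0: [0.1, 0.2], 1: [0.7, 0.9]}

values = []
for pa1, pb1_a0, pb1_a1 in product(p_a1_ext, *p_b1_given_a_ext.values()):
    # Marginal P(B=1) for one choice of extreme points.
    values.append((1 - pa1) * pb1_a0 + pa1 * pb1_a1)

print(f"P(B=1) lies in [{min(values):.3f}, {max(values):.3f}]")
```

Enumerating extreme points is exact but grows exponentially with network size, which is exactly the cost that combined exact/approximate schemes such as IDS aim to control.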
Abstract:
This paper describes a novel doped-titania immobilised thin-film multi-tubular photoreactor which has been developed for use with liquid, vapour or gas phase media. In designing photocatalytic reactors, the active surface area of photocatalyst within the unit is one of the critical design parameters. This requirement greatly limits the applicability of any semiconductor photocatalyst in industrial applications, as a large surface area equates to a powder catalyst. The novel photoreactor design demonstrated here, based on a thin-film coating doped with a rare-earth element, achieved photocatalytic degradation of a model pollutant (methyl orange) comparable to that achieved with P25 TiO2. Lanthanide doping of the titania sol-gel is reported here, as it is thought to increase electron-hole separation and therefore widen the range of useful wavelengths within the electromagnetic spectrum. Increasing doping from 0.5% to 1.0% increased photocatalytic degradation by ∼17% under visible irradiation. A linear relationship was observed between increasing reactor volume and degradation, which would not normally be observed in a typical suspended-catalyst reactor system.
Abstract:
Sonoluminescence (SL) involves the conversion of mechanical [ultra]sound energy into light. Whilst the phenomenon is invariably inefficient, typically converting just 10^-4 of the incident acoustic energy into photons, it is nonetheless extraordinary, as the resultant energy density of the emergent photons exceeds that of the ultrasonic driving field by a factor of some 10^12. Sonoluminescence has specific [as yet untapped] advantages in that it can be effected at remote locations in an essentially wireless format. The only [usual] requirement is energy transduction via the violent oscillation of microscopic bubbles within the propagating medium. The dependence of sonoluminescent output on the generating sound field's parameters, such as pulse duration, duty cycle, and position within the field, has been observed and measured previously, and several relevant aspects are discussed presently. We also extrapolate the logic from a recently published analysis relating to the ensuing dynamics of bubble 'clouds' that have been stimulated by ultrasound. Here, the intention was to develop a relevant [yet computationally simplistic] model that captured the essential physical qualities expected from real sonoluminescent microbubble clouds. We focused on the inferred temporal characteristics of SL light output from a population of such bubbles, subjected to intermediate [0.5-2 MPa] ultrasonic pressures. Finally, whilst direct applications for sonoluminescent light output are thought unlikely in the main, we proceed to frame the state-of-the-art against several presently existing technologies that could form adjunct approaches with distinct potential for enhancing present sonoluminescent light output, which may prove useful in real-world [biomedical] applications.
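A minimal sketch of the single-bubble building block from which simplistic cloud models are assembled (an assumption, not the published model): integrating a Rayleigh-Plesset equation under a sinusoidal ultrasound drive. Parameters are illustrative, and the drive amplitude is deliberately much milder than the 0.5-2 MPa regime discussed above so that the toy integration stays well behaved.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 1000.0, 1.0e-3, 0.072   # water: density, viscosity, surface tension
p0, kappa = 101325.0, 1.4                # ambient pressure, polytropic exponent
R0 = 4.5e-6                              # equilibrium bubble radius [m]
Pa, f = 4.0e4, 26.5e3                    # drive amplitude [Pa] and frequency [Hz]

def rayleigh_plesset(t, y):
    R, V = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)   # polytropic gas pressure
    p_inf = p0 - Pa * np.sin(2 * np.pi * f * t)               # driven far-field pressure
    dVdt = ((p_gas - p_inf - 4 * mu * V / R - 2 * sigma / R) / rho
            - 1.5 * V ** 2) / R
    return [V, dVdt]

t_end = 5 / f                                                 # five acoustic cycles
sol = solve_ivp(rayleigh_plesset, (0.0, t_end), [R0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-12, max_step=1e-8)
print(f"max/min radius over 5 cycles: {sol.y[0].max()*1e6:.2f} / "
      f"{sol.y[0].min()*1e6:.2f} micrometres")
```

A cloud-level model would couple many such bubbles and relate the violence of each collapse to an inferred light-emission event, which is where the temporal characteristics of SL output come from.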
Abstract:
Side-channel analysis of cryptographic systems can allow an adversary to recover secret information even where the underlying algorithms have been shown to be provably secure. This is achieved by exploiting the unintentional leakages inherent in the underlying implementation of the algorithm in software or hardware. Within this field of research, a class of attacks known as profiling attacks, or more specifically, as used here, template attacks, have been shown to be extremely efficient at extracting secret keys. Template attacks assume a strong adversarial model, in that an attacker has an identical device with which to profile the power consumption of various operations. This profile can then be used to efficiently attack the target device. Inherent in this assumption is that the power consumption across the devices under test is somewhat similar. This central tenet of the attack is largely unexplored in the literature, with the research community generally performing the profiling stage on the same device as is being attacked. This is beneficial for evaluation or penetration testing, as it is essentially the best-case scenario for an attacker, where the model built during the profiling stage matches exactly that of the target device; however, it is not necessarily a reflection of how the attack will work in reality. In this work, a large-scale evaluation of this assumption is performed, comparing the key recovery performance across 20 identical smart cards when performing a profiling attack.
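A minimal sketch of a template attack on simulated data (an assumption, not the paper's evaluation on real smart cards): per-class Gaussian templates are profiled on one "device" and then used to rank key guesses on a second device whose leakage scale differs slightly, mimicking device-to-device variation. The stand-in S-box and leakage model are illustrative.

```python
import numpy as np

SBOX = np.random.default_rng(0).permutation(256)     # stand-in S-box (illustrative)
HW = np.array([bin(v).count("1") for v in range(256)])

def traces(key, n, scale, noise, rng):
    """Simulated leakage: scaled Hamming weight of Sbox(p ^ key) plus noise."""
    pts = rng.integers(0, 256, n)
    return pts, scale * HW[SBOX[pts ^ key]] + rng.normal(0, noise, n)

rng = np.random.default_rng(1)
true_key = 0x3C

# Profiling phase on device A: one Gaussian template per Hamming-weight class.
pts, leak = traces(true_key, 20000, scale=1.00, noise=0.8, rng=rng)
classes = HW[SBOX[pts ^ true_key]]
mean = np.array([leak[classes == h].mean() for h in range(9)])
var = np.array([leak[classes == h].var() + 1e-9 for h in range(9)])

# Attack phase on device B (slightly different leakage scale).
pts_b, leak_b = traces(true_key, 200, scale=0.95, noise=0.8, rng=rng)
log_lik = np.zeros(256)
for k in range(256):
    h = HW[SBOX[pts_b ^ k]]
    log_lik[k] = np.sum(-0.5 * (leak_b - mean[h]) ** 2 / var[h]
                        - 0.5 * np.log(var[h]))
print("best key guess:", hex(int(np.argmax(log_lik))), " true key:", hex(true_key))
```

The cross-device question studied in the paper corresponds to how much the `scale` (and other leakage characteristics) may differ between the profiling and target devices before key recovery degrades.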
Combining multi-band and frequency-filtering techniques for speech recognition in noisy environments
Abstract:
While current speech recognisers give acceptable performance in carefully controlled environments, their performance degrades rapidly when they are applied in more realistic situations. Generally, environmental noise may be classified into two classes: wide-band noise and narrow-band noise. While the multi-band model has been shown to be capable of dealing with speech corrupted by narrow-band noise, it is ineffective for wide-band noise. In this paper, we suggest a combination of the frequency-filtering technique with the probabilistic union model in the multi-band approach. The new system has been tested on the TIDIGITS database corrupted by white noise, noise collected from a railway station, and narrow-band noise, respectively. The results have shown that this approach is capable of dealing with noise of narrow-band or wide-band characteristics, assuming no knowledge about the noisy environment.
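A minimal sketch of the sub-band combination step (an assumption about the formulation, not the authors' code): comparing a plain product rule against an order-m union-style rule that sums products over all sub-band subsets leaving out m possibly-corrupted bands. Frequency filtering of the sub-band features is not reproduced; the probabilities are illustrative.

```python
from itertools import combinations
from math import prod

# Per-sub-band observation probabilities for one frame and one model state;
# band 3 is badly corrupted by narrow-band noise (illustrative values).
p_band = [0.60, 0.55, 0.70, 0.001]

def product_rule(p):
    return prod(p)

def union_order_m(p, m):
    """Sum of products over all subsets that exclude m sub-bands."""
    n = len(p)
    return sum(prod(p[i] for i in subset)
               for subset in combinations(range(n), n - m))

print("product rule      :", product_rule(p_band))
print("union model, m = 1:", union_order_m(p_band, 1))
```

The product rule is dragged down by the single corrupted band, whereas the union-style combination is dominated by the subsets that exclude it, which is the behaviour that makes the multi-band approach robust to narrow-band noise.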
Abstract:
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in the four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe, based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
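A minimal sketch of the final clustering step only (an assumption, not the survey pipeline): K-means applied to a plane of per-source goodness-of-fit statistics. The two statistics here are synthetic stand-ins for the corrected-AIC / cross-validation quantities described above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic "AICc(best burst-like model) - AICc(Ornstein-Uhlenbeck)" style
# statistics: negative values favour burst-like fits, positive favour stochastic.
burst_like = rng.normal([-8.0, -5.0], 2.0, size=(300, 2))   # e.g. supernova-like
stochastic = rng.normal([+6.0, +4.0], 2.0, size=(300, 2))   # e.g. AGN-like
X = np.vstack([burst_like, stochastic])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Distance to the assigned cluster centre as a simple classification-quality score.
dist = np.linalg.norm(X - km.cluster_centers_[labels], axis=1)
print("cluster sizes:", np.bincount(labels))
print("mean distance to assigned centre:", dist.mean().round(2))
```

In the full method this clustering is run per photometric band, and the per-band assignments and centre distances are then averaged to give the two classification-quality measures.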
Abstract:
Traditional experimental economics methods often consume enormous resources of qualified human participants, and the inconsistency of a participant's decisions among repeated trials prevents sensitivity analyses. The problem can be solved if computer agents are capable of generating behaviours similar to those of the given participants in experiments. An analysis method based on experimental economics is presented to extract deep information from questionnaire data and emulate any number of participants. Taking customers' willingness to purchase electric vehicles (EVs) as an example, multi-layer correlation information is extracted from a limited number of questionnaires. Agents mimicking the surveyed potential customers are modelled by matching the probabilistic distributions of their willingness embedded in the questionnaires. The authenticity of both the model and the algorithm is validated by comparing the agent-based Monte Carlo simulation results with the questionnaire-based deduction results. With the aid of agent models, the effects of minority agents with specific preferences on the results are also discussed.
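A minimal sketch of the basic idea (an assumption, not the paper's multi-layer model): computer agents are sampled from an empirical willingness-to-purchase distribution taken from questionnaires, and a Monte Carlo simulation estimates EV adoption under a hypothetical willingness uplift. The questionnaire counts and the uplift are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical questionnaire result: counts of respondents reporting each
# willingness-to-purchase level (0.0 = never, 1.0 = certain).
levels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
counts = np.array([40, 25, 20, 10, 5])
probs = counts / counts.sum()

def simulate(n_agents, uplift, n_runs=2000):
    """Monte Carlo estimate of the adoption rate for a given willingness uplift."""
    rates = np.empty(n_runs)
    for r in range(n_runs):
        willingness = rng.choice(levels, size=n_agents, p=probs)
        buys = rng.random(n_agents) < np.clip(willingness + uplift, 0, 1)
        rates[r] = buys.mean()
    return rates.mean(), rates.std()

for uplift in (0.0, 0.1):
    mean, std = simulate(n_agents=1000, uplift=uplift)
    print(f"uplift {uplift:.1f}: adoption rate {mean:.3f} +/- {std:.3f}")
```

Because any number of agents can be sampled, the same simulation can be repeated under perturbed conditions, which is exactly the kind of sensitivity analysis that is impractical with repeated human trials.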
Abstract:
Distributed control techniques can allow Transmission System Operators (TSOs) to coordinate their responses via TSO-TSO communication, providing a level of control that lies between centralised control and communication-free decentralised control of interconnected power systems. Recently, the Plug and Play Model Predictive Control (PnPMPC) toolbox has been developed in order to allow practitioners to design distributed controllers based on tube-MPC techniques. In this paper, some initial results using the PnPMPC toolbox for the design of distributed controllers to enhance Automatic Generation Control (AGC) in AC areas connected to Multi-Terminal HVDC (MTDC) grids are illustrated, in order to evaluate the feasibility of applying PnPMPC for this purpose.