85 results for physically based modeling
Abstract:
This paper proposes a continuous-time Markov chain (CTMC) based sequential analytical approach for composite generation and transmission system reliability assessment. The basic idea is to construct a CTMC model for the composite system and perform sequential analyses on it. Various kinds of reliability indices can be obtained, including expectation, variance, frequency, duration, and probability distribution. To reduce the dimension of the state space, the traditional CTMC modeling approach is modified by merging all high-order contingencies into a single state, whose parameters can be calculated by Monte Carlo simulation (MCS). A state-merging technique is then developed to integrate all normal states and further reduce the dimension of the CTMC model. Moreover, a time-discretization method is presented for calculating the CTMC model. Case studies are performed on the RBTS and a modified IEEE 300-bus test system. The results indicate that sequential reliability assessment can be performed by the proposed approach. Compared with the traditional sequential Monte Carlo simulation method, the proposed method is more efficient, especially in small-scale or highly reliable power systems.
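As a rough illustration of the time-discretization step, the sketch below advances the state-probability vector of a toy three-state CTMC with the first-order rule P ≈ I + QΔt. The generator matrix, state labels, and rates are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical 3-state composite system: 0 = merged normal state,
# 1 = first-order contingency, 2 = merged high-order contingencies.
# Rates are per hour and purely illustrative.
Q = np.array([
    [-0.011,  0.010,  0.001],   # transitions out of the normal state
    [ 0.100, -0.101,  0.001],   # repair / further failure
    [ 0.050,  0.000, -0.050],   # repair from the high-order state
])

dt = 0.1                          # discretization step (hours)
P = np.eye(3) + Q * dt            # first-order transition matrix, valid for small dt

p = np.array([1.0, 0.0, 0.0])     # start in the normal state
hours = 8760
time_in_failure = 0.0
for _ in range(int(hours / dt)):
    p = p @ P                     # advance the state-probability vector
    time_in_failure += p[2] * dt  # accumulate expected time in the failure state

print("state probabilities after one year:", p)
print("expected hours in merged contingency state:", time_in_failure)
```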
Abstract:
Visual salience is an intriguing phenomenon observed in biological neural systems. Numerous attempts have been made to model visual salience mathematically using various feature contrasts, either locally or globally. However, these algorithmic models tend to ignore the problem's biological solutions, in which visual salience appears to arise during the propagation of visual stimuli along the visual cortex. In this paper, inspired by the conjecture that salience arises from deep propagation along the visual cortex, we present a Deep Salience model, in which a multi-layer model based on successive Markov random fields (sMRF) analyzes the input image successively through deep belief propagation. As a result, the foreground object can be automatically separated from the background in a fully unsupervised way. Experimental evaluation on the benchmark dataset validated that our Deep Salience model consistently outperforms eleven state-of-the-art salience models, yielding higher rates in the precision-recall tests and attaining the best F-measure and mean-square error in the experiments.
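A conceptual sketch of one such successive propagation layer is shown below: each pixel's salience estimate is relaxed toward its neighbors, weighted by appearance similarity, and layers are stacked by iterating the pass. The neighborhood rule, weights, and layer count here are assumptions for illustration only, not the sMRF formulation from the paper.

```python
import numpy as np

def mrf_layer(salience, image, beta=0.5):
    """One illustrative MRF-style pass: each pixel's salience is pulled
    toward the weighted average of its 4-neighbors, where weights decay
    with intensity difference (an appearance-similarity proxy)."""
    h, w = salience.shape
    out = salience.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            wts = [np.exp(-beta * abs(image[y, x] - image[ny, nx]))
                   for ny, nx in neigh]
            msg = sum(wt * salience[ny, nx]
                      for wt, (ny, nx) in zip(wts, neigh)) / sum(wts)
            out[y, x] = 0.5 * salience[y, x] + 0.5 * msg
    return out

# Toy grayscale "image": a bright square on a dark background.
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0
sal = np.abs(img - img.mean())   # crude initial contrast map
for _ in range(5):               # successive layers refine the map
    sal = mrf_layer(sal, img)
```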
Abstract:
Physically Unclonable Functions (PUFs) exploit inherent manufacturing variations and present a promising solution for hardware security. They can be used for key storage, authentication, and ID generation. Low-power cryptographic design is also very important for security applications. However, digital PUF designs reported to date, such as Arbiter PUFs and RO PUFs, are not very efficient: they are difficult to implement on Field Programmable Gate Arrays (FPGAs) or consume many FPGA hardware resources. In previous work, a new and efficient PUF identification generator was presented for FPGAs, designed to fit in a single slice per response bit by using a 1-bit PUF identification generator cell formed as a hard macro. In this work, we propose an ultra-compact PUF identification generator design, implemented on ten low-cost Xilinx Spartan-6 FPGA LX9 microboards. The resource utilization is only 2.23%, which, to the best of the authors' knowledge, makes it the most compact and robust FPGA-based PUF identification generator design reported to date. The design delivers stable uniqueness of around 50% and good reliability between 85% and 100%.
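The uniqueness and reliability figures quoted above are the standard PUF quality metrics: mean pairwise inter-chip Hamming distance (ideally ~50%) and intra-chip agreement across repeated read-outs (ideally ~100%). The sketch below computes both on hypothetical response data; the random bits and the 2% noise rate are stand-ins, not measurements from the boards.

```python
import numpy as np

def hamming_dist(a, b):
    """Number of differing bits between two response vectors."""
    return np.count_nonzero(a != b)

# Hypothetical responses: 10 boards x 128 response bits.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(10, 128))
n, bits = responses.shape

# Uniqueness: mean pairwise inter-chip Hamming distance, ideally ~50%.
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
uniqueness = np.mean([hamming_dist(responses[i], responses[j]) / bits
                      for i, j in pairs]) * 100

# Reliability: agreement between a reference read-out and a noisy
# re-read of the same board, ideally ~100%.
noisy = responses[0] ^ (rng.random(bits) < 0.02)   # 2% bit-flip noise
reliability = (1 - hamming_dist(responses[0], noisy) / bits) * 100

print(f"uniqueness  ~ {uniqueness:.1f}%")
print(f"reliability ~ {reliability:.1f}%")
```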
Abstract:
Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R² increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase.
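As a minimal sketch of the shrinkage idea, the special case with an infinitesimal Gaussian prior and no LD has a closed-form posterior mean: each marginal effect estimate is scaled by h²/(h² + M/N). The code below implements only that special case with made-up inputs; LDpred's point-normal prior and the LD modeling from the reference panel are omitted.

```python
import numpy as np

def posterior_mean_inf(beta_hat, h2, M, N):
    """Posterior mean effect sizes under an infinitesimal prior, no LD.
    beta_hat: marginal GWAS effect estimates
    h2: trait heritability, M: number of markers, N: GWAS sample size."""
    shrink = h2 / (h2 + M / N)
    return shrink * beta_hat

# Made-up marginal estimates for 1,000 markers.
beta_hat = np.random.default_rng(1).normal(0.0, 0.01, size=1000)
beta_post = posterior_mean_inf(beta_hat, h2=0.5, M=1000, N=50_000)
print(beta_post[:5])   # uniformly shrunk toward zero
```

The shrinkage factor grows toward 1 as N increases relative to M, which matches the abstract's point that the method's advantage grows with sample size.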
Abstract:
Observations of the Rossiter–McLaughlin (RM) effect provide information on star–planet alignments, which can inform planetary migration and evolution theories. Here, we go beyond classical RM modeling and explore the impact of a convective blueshift that varies across the stellar disk and of non-Gaussian stellar photospheric profiles. We simulated an aligned hot Jupiter on a four-day orbit about a Sun-like star and injected center-to-limb velocity (and profile shape) variations based on radiative 3D magnetohydrodynamic simulations of solar surface convection. The residuals between our modeling and classical RM modeling depended on the intrinsic profile width and v sin i; the amplitude of the residuals increased with increasing v sin i and with decreasing intrinsic profile width. For slowly rotating stars the center-to-limb convective variation dominated the residuals (with amplitudes of tens of cm s−1 to ~1 m s−1); however, for faster rotating stars the dominant residual signature was due to a non-Gaussian intrinsic profile (with amplitudes from 0.5 to 9 m s−1). When the impact parameter was 0, neglecting to account for the convective center-to-limb variation led to an uncertainty in the obliquity of ~10°–20°, even though the true v sin i was known. Additionally, neglecting to properly model an asymmetric intrinsic profile had a greater impact for more rapidly rotating stars (e.g., v sin i = 6 km s−1) and caused systematic errors on the order of ~20° in the measured obliquities. Hence, neglecting the impact of stellar surface convection may bias star–planet alignment measurements and, consequently, theories of planetary migration and evolution.
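For scale, a standard back-of-the-envelope estimate puts the classical RM anomaly amplitude at roughly (Rp/Rs)² √(1−b²) v sin i, so residuals at the cm s−1 to m s−1 level are a small but non-negligible fraction of the signal. The sketch below evaluates that approximation with illustrative numbers, not values taken from the paper.

```python
import math

def rm_amplitude(rp_over_rs, b, vsini_ms):
    """Approximate peak RM velocity anomaly (m/s):
    transit depth * sqrt(1 - b^2) * projected rotation speed."""
    depth = rp_over_rs ** 2
    return depth * math.sqrt(1.0 - b ** 2) * vsini_ms

# Aligned hot Jupiter (Rp/Rs ~ 0.1, b = 0) around a slowly rotating
# Sun-like star with v sin i = 2 km/s.
print(rm_amplitude(0.1, b=0.0, vsini_ms=2000.0))   # ~20 m/s anomaly
```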
Abstract:
Emerging web applications such as cloud computing, Big Data, and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Current data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel coarse-grain and fine-grain programmable hardware accelerators. It will also build a high-level programming framework that allows end-users to seamlessly utilize these accelerators in heterogeneous computing systems through typical data-centre programming frameworks (e.g., MapReduce, Storm, Spark). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.
Abstract:
We present a large data set of high-cadence dMe flare light curves obtained with custom continuum filters on the triple-beam, high-speed camera system ULTRACAM. The measurements provide constraints for models of the near-ultraviolet (NUV) and optical continuum spectral evolution on timescales of ≈1 s. We provide a robust interpretation of the flare emission in the ULTRACAM filters using simultaneously obtained low-resolution spectra during two moderate-sized flares in the dM4.5e star YZ CMi. By avoiding the spectral complexity within the broadband Johnson filters, the ULTRACAM filters are shown to characterize bona fide continuum emission in the NUV, blue, and red wavelength regimes. The NUV/blue flux ratio in flares is equivalent to a Balmer jump ratio, and the blue/red flux ratio provides an estimate for the color temperature of the optical continuum emission. We present a new “color-color” relationship for these continuum flux ratios at the peaks of the flares. Using the RADYN and RH codes, we interpret the ULTRACAM filter emission using the dominant emission processes from a radiative-hydrodynamic flare model with a high nonthermal electron beam flux, which explains a hot, T ≈ 10⁴ K, color temperature at blue-to-red optical wavelengths and a small Balmer jump ratio as observed in moderate-sized and large flares alike. We also discuss the high-time-resolution, high signal-to-noise continuum color variations observed in YZ CMi during a giant flare, which increased the NUV flux from this star by over a factor of 100. Based on observations obtained with the Apache Point Observatory 3.5 m telescope, which is owned and operated by the Astrophysical Research Consortium; on observations made with the William Herschel Telescope, operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias; and on observations made with the ESO Telescopes at the La Silla Paranal Observatory under programme ID 085.D-0501(A).
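The mapping from a blue/red continuum flux ratio to a color temperature can be illustrated with a blackbody: solve B_λ(blue)/B_λ(red) = observed ratio for T. The filter wavelengths and the example ratio below are illustrative assumptions, not the actual ULTRACAM passbands or measurements.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wav_m, T):
    """Planck spectral radiance B_lambda(T) in W sr^-1 m^-3."""
    return (2 * H * C**2 / wav_m**5) / np.expm1(H * C / (wav_m * KB * T))

def color_temperature(ratio, wav_blue=4500e-10, wav_red=6500e-10):
    """Grid-search the T whose blackbody blue/red ratio matches the data."""
    temps = np.linspace(3000.0, 30000.0, 27001)   # 1 K steps
    model = planck(wav_blue, temps) / planck(wav_red, temps)
    return temps[np.argmin(np.abs(model - ratio))]

# For these example bands, a blue/red ratio near ~2.2 maps to T ~ 1e4 K,
# the hot flare continuum temperature discussed above.
print(color_temperature(2.2))
```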