916 results for refined multiscale entropy
Abstract:
There has been much recent research into extracting useful diagnostic features from the electrocardiogram, with numerous studies claiming impressive results. However, the robustness and consistency of the methods employed in these studies are rarely, if ever, mentioned. Hence, we propose two new methods: a biologically motivated time series derived from consecutive P-wave durations, and a mathematically motivated regularity measure. We investigate the robustness of these two methods compared with the current corresponding methods. We find that the new time series performs admirably as a complement to the current method, and the new regularity measure consistently outperforms the current measure in numerous tests on real and synthetic data.
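Regularity measures of the kind compared in this line of work are typically entropy statistics such as sample entropy, which assign lower values to more regular series. A minimal sketch (a generic illustration, not the authors' measure) showing that a periodic signal scores lower than noise:

```python
import math
import random

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of matching templates of
    length m and A of length m+1, under a Chebyshev-distance tolerance r
    (self-matches excluded). Lower values indicate a more regular series."""
    n = len(series)
    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(series[i + k] - series[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(1)
periodic = [math.sin(0.5 * i) for i in range(200)]          # highly regular
noise = [random.uniform(-1.0, 1.0) for _ in range(200)]     # irregular
print(sample_entropy(periodic), sample_entropy(noise))      # periodic < noise
```

The tolerance `r` is usually expressed as a fraction of the series standard deviation; the absolute value used here is a simplification for unit-amplitude signals.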
Abstract:
High velocity oxyfuel (HVOF) thermal spraying is one of the most significant developments in the thermal spray industry since the development of the original plasma spray technique. The first investigation deals with the combustion and discrete particle models within the general-purpose commercial CFD code FLUENT, used to solve the combustion of kerosene and couple the motion of fuel droplets with the gas flow dynamics in a Lagrangian fashion. The effects of liquid fuel droplets on the thermodynamics of the combusting gas flow are examined thoroughly, showing that the combustion process of kerosene is independent of the initial fuel droplet sizes. The second analysis deals with the full water-cooling numerical model, which can assist in thermal performance optimisation or in determining the best method for heat removal without the cost of building physical prototypes. The numerical results indicate that the water flow rate and direction have a noticeable influence on the cooling efficiency but no noticeable effect on the gas flow dynamics within the thermal spraying gun. The third investigation deals with the development and implementation of discrete phase particle models. The results indicate that most powder particles are not melted upon hitting the substrate to be coated. The oxidation model confirms that HVOF guns can produce metallic coatings with low oxidation within the typical stand-off distance of about 30 cm. Physical properties such as porosity, microstructure, surface roughness and adhesion strength of coatings produced by droplet deposition in a thermal spray process are determined to a large extent by the dynamics of deformation and solidification of the particles impinging on the substrate. Therefore, one of the objectives of this study is to present a complete numerical model of droplet impact and solidification. The modelling results show that solidification of droplets is significantly affected by the thermal contact resistance and substrate surface roughness.
Abstract:
The preparation and characterization of two new neutral ferric complexes with desolvation-induced discontinuous spin-state transformation above room temperature are reported. The compounds, Fe(Hthpy)(thpy)·CH3OH·3H2O (1) and Fe(Hmthpy)(mthpy)·2H2O (2), are low-spin (LS) at room temperature and below, whereas their nonsolvated forms are high-spin (HS), exhibiting zero-field splitting. In these complexes, Hthpy, Hmthpy and thpy, mthpy are the deprotonated forms of pyridoxal thiosemicarbazone and pyridoxal methylthiosemicarbazone, respectively; each is an O,N,S-tridentate ligand. The molecular structures have been determined at 100(1) K using single-crystal X-ray diffraction techniques, yielding a triclinic system (space group P1) for 1 and a monoclinic unit cell (space group P21/c) for 2. The structures were refined to final error indices of RF = 0.0560 for 1 and RF = 0.0522 for 2. The chemical inequivalence of the ligands was clearly established, for the "extra" hydrogen atom on the monodeprotonated ligands (Hthpy, Hmthpy) was found to be bound to the nitrogen of the pyridine ring. The ligands are all of the thiol form; the doubly deprotonated chelates (thpy, mthpy) have C-S bond lengths slightly longer than those of the singly deprotonated forms. There is a three-dimensional network of hydrogen bonds in both compounds. The discontinuous spin-state transformation is accompanied by liberation of the solvate molecules, as is also evidenced by DSC analysis. Heat capacity data for the LS and HS phases are tabulated at selected temperatures; the enthalpy and entropy changes connected with the change of spin state were determined to be ΔH = 12.5 ± 0.3 kJ mol⁻¹ and ΔS = 33.3 ± 0.8 J mol⁻¹ K⁻¹ for 1, and ΔH = 6.5 ± 0.3 kJ mol⁻¹ and ΔS = 17.6 ± 0.8 J mol⁻¹ K⁻¹ for 2.
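For a spin-crossover equilibrium, the transition temperature can be estimated as T1/2 = ΔH/ΔS. A quick check with the tabulated values (a sketch for orientation, not part of the original analysis) confirms that both transitions lie above room temperature, consistent with the abstract:

```python
# Estimate spin-transition temperatures T_1/2 = dH / dS from the reported
# thermodynamic data (dH in kJ/mol, converted to J/mol; dS in J/(mol K)).
def transition_temperature(dH_kJ_per_mol, dS_J_per_mol_K):
    return dH_kJ_per_mol * 1000.0 / dS_J_per_mol_K

t1 = transition_temperature(12.5, 33.3)  # compound 1
t2 = transition_temperature(6.5, 17.6)   # compound 2
print(round(t1), round(t2))  # both well above ~295 K (room temperature)
```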
Abstract:
In this study, a new entropy measure known as kernel entropy (KerEnt), which quantifies the irregularity of a series, was applied to nocturnal oxygen saturation (SaO2) recordings. A total of 96 subjects suspected of suffering from sleep apnea-hypopnea syndrome (SAHS) took part in the study: 32 SAHS-negative and 64 SAHS-positive subjects. Their SaO2 signals were separately processed by means of KerEnt. Our results show that a higher degree of irregularity is associated with SAHS-positive subjects. Statistical analysis revealed significant differences between the KerEnt values of the SAHS-negative and SAHS-positive groups. The diagnostic utility of this parameter was studied by means of receiver operating characteristic (ROC) analysis. A classification accuracy of 81.25% (81.25% sensitivity and 81.25% specificity) was achieved. Repeated apneas during sleep increase irregularity in SaO2 data. This effect can be measured by KerEnt in order to detect SAHS. This non-linear measure can provide useful information for the development of alternative diagnostic techniques that reduce the demand for conventional polysomnography (PSG). © 2011 IEEE.
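The reported figures are internally consistent: with 64 SAHS-positive and 32 SAHS-negative subjects, 81.25% sensitivity and 81.25% specificity imply 52 true positives and 26 true negatives, which together give exactly the reported accuracy. A small sketch verifying the arithmetic (variable names are ours, not the paper's):

```python
# Verify the reported diagnostic statistics from the study's group sizes.
n_pos, n_neg = 64, 32            # SAHS-positive / SAHS-negative subjects
sens = spec = 0.8125             # reported sensitivity and specificity

tp = sens * n_pos                # true positives
tn = spec * n_neg                # true negatives
accuracy = (tp + tn) / (n_pos + n_neg)
print(tp, tn, accuracy)  # 52.0 26.0 0.8125
```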
Abstract:
Ernst Mach observed that light or dark bands could be seen at abrupt changes of luminance gradient in the absence of peaks or troughs in luminance. Many models of feature detection share the idea that bars, lines, and Mach bands are found at peaks and troughs in the output of even-symmetric spatial filters. Our experiments assessed the appearance of Mach bands (position and width) and the probability of seeing them on a novel set of generalized Gaussian edges. Mach band probability was mainly determined by the shape of the luminance profile and increased with the sharpness of its corners, controlled by a single parameter (n). Doubling or halving the size of the images had no significant effect. Variations in contrast (20%-80%) and duration (50-300 ms) had relatively minor effects. These results rule out the idea that Mach bands depend simply on the amplitude of the second derivative, but a multiscale model, based on Gaussian-smoothed first- and second-derivative filtering, can account accurately for the probability and perceived spatial layout of the bands. A key idea is that Mach band visibility depends on the ratio of second- to first-derivative responses at peaks in the second-derivative scale-space map. This ratio is approximately scale-invariant and increases with the sharpness of the corners of the luminance ramp, as observed. The edges of Mach bands pose a surprisingly difficult challenge for models of edge detection, but a nonlinear third-derivative operation is shown to predict the locations of Mach band edges strikingly well. Mach bands thus shed new light on the role of multiscale filtering systems in feature coding. © 2012 ARVO.
Abstract:
Fluctuations of liquids at the scales where the hydrodynamic and atomistic descriptions overlap are considered. The importance of these fluctuations for atomistic motions is discussed and examples of their accurate modelling with a multi-space-time-scale fluctuating hydrodynamics scheme are provided. To resolve microscopic details of liquid systems, including biomolecular solutions, together with macroscopic fluctuations in space-time, a novel hybrid atomistic-fluctuating hydrodynamics approach is introduced. For a smooth transition between the atomistic and continuum representations, an analogy with two-phase hydrodynamics is used that leads to a strict preservation of macroscopic mass and momentum conservation laws. Examples of numerical implementation of the new hybrid approach for the multiscale simulation of liquid argon in equilibrium conditions are provided. © 2014 The Author(s). Published by the Royal Society.
Abstract:
Biodiesel is fast becoming one of the key transport fuels as the world endeavours to reduce its carbon footprint and find viable alternatives to oil-derived fuels. Research in the field is currently focusing on more efficient ways to produce biodiesel, with the most promising avenue of research looking into the use of heterogeneous catalysis. This article presents a framework for kinetic reaction and diffusive transport modelling of the heterogeneously catalysed transesterification of triglycerides into fatty acid methyl esters (FAMEs), demonstrated on a model system of tributyrin transesterification in the presence of MgO catalysts. In particular, the paper makes recommendations on multicomponent diffusion calculations, such as obtaining diffusion coefficients and molar fluxes from infinite-dilution diffusion coefficients using the Wilke and Chang correlation; on intrinsic reaction kinetic studies using the Eley-Rideal kinetic mechanism with methanol adsorption as the rate-determining step; and on multiscale reaction-diffusion process simulation between the catalytic porous and bulk reactor scales. © 2013 The Royal Society of Chemistry.
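As a hedged illustration of the Wilke and Chang correlation mentioned above (the standard textbook form, not the paper's own code): D°_AB = 7.4e-8 (φ M_B)^0.5 T / (μ_B V_A^0.6), with D° in cm²/s, T in K, μ_B the solvent viscosity in cP, V_A the solute molar volume at its normal boiling point in cm³/mol, and φ the solvent association factor (about 1.9 for methanol). The specific numbers below for tributyrin in methanol are illustrative assumptions, not values from the paper:

```python
import math

def wilke_chang(T, phi, M_solvent, mu_cP, V_solute):
    """Infinite-dilution diffusion coefficient (cm^2/s), Wilke-Chang correlation."""
    return 7.4e-8 * math.sqrt(phi * M_solvent) * T / (mu_cP * V_solute ** 0.6)

# Illustrative inputs (assumptions): tributyrin in methanol at 333 K,
# phi = 1.9 (methanol), viscosity ~0.33 cP, solute molar volume ~345 cm^3/mol.
D = wilke_chang(T=333.0, phi=1.9, M_solvent=32.04, mu_cP=0.33, V_solute=345.0)
print(f"{D:.2e} cm^2/s")  # on the order of 1e-5 cm^2/s, typical for small-molecule liquids
```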
Abstract:
Multiscale systems that are characterized by a great range of spatial–temporal scales arise widely in many scientific domains. These range from the study of protein conformational dynamics to multiphase processes in, for example, granular media or haemodynamics, and from nuclear reactor physics to astrophysics. Despite the diversity in subject areas and terminology, there are many common challenges in multiscale modelling, including validation and design of tools for programming and executing multiscale simulations. This Theme Issue seeks to establish common frameworks for theoretical modelling, computing and validation, and to help practical applications benefit from the modelling results. This Theme Issue has been inspired by discussions held during two recent workshops in 2013: ‘Multiscale modelling and simulation’ at the Lorentz Center, Leiden (http://www.lorentzcenter.nl/lc/web/2013/569/info.php3?wsid=569&venue=Snellius), and ‘Multiscale systems: linking quantum chemistry, molecular dynamics and microfluidic hydrodynamics’ at the Royal Society Kavli Centre. The objective of both meetings was to identify common approaches for dealing with multiscale problems across different applications in fluid and soft matter systems. This was achieved by bringing together experts from several diverse communities.
Abstract:
A multiscale Molecular Dynamics/Hydrodynamics implementation of the 2D Mercedes Benz (MB or BN2D) [1] water model is developed and investigated. The concept and the governing equations of multiscale coupling together with the results of the two-way coupling implementation are reported. The sensitivity of the multiscale model for obtaining macroscopic and microscopic parameters of the system, such as macroscopic density and velocity fluctuations, radial distribution and velocity autocorrelation functions of MB particles, is evaluated. Critical issues for extending the current model to large systems are discussed.
Abstract:
2000 Mathematics Subject Classification: 62P10, 92D10, 92D30, 94A17, 62L10.
Abstract:
2010 Mathematics Subject Classification: 94A17.
Abstract:
In this paper, we focus on the design of bivariate EDAs for discrete optimization problems and propose a new approach named HSMIEC. Whereas current EDAs require much time in the statistical learning process because the relationships among the variables are too complicated, we employ the Selfish Gene theory (SG) in this approach, together with a Mutual Information and Entropy based Cluster (MIEC) model, to optimize the probability distribution of the virtual population. This model uses a hybrid sampling method that considers both clustering accuracy and clustering diversity, and an incremental learning and resampling scheme is also used to optimize the parameters of the correlations of the variables. On several benchmark problems, our experimental results demonstrate that HSMIEC often performs better than other EDAs, such as BMDA, COMIT, MIMIC and ECGA. © 2009 Elsevier B.V. All rights reserved.
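Pairwise mutual information of the kind an MIEC-style model clusters on can be estimated directly from sample counts. A minimal sketch for two discrete variables (a generic illustration, not the paper's implementation):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired samples via empirical frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts of X
    py = Counter(ys)             # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), written with raw counts
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

xs = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(xs, xs))                 # 1.0 bit: fully dependent
print(mutual_information(xs, [0, 0, 1, 1] * 2))   # 0.0: independent in this sample
```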
Abstract:
In this paper we apply a new approach for determining a priority vector for the pairwise comparison matrix, which plays an important role in Decision Theory. The divergence between the pairwise comparison matrix A and the consistent matrix B defined by the priority vector is measured with the help of the Kullback-Leibler relative entropy function. The minimization of this divergence leads to a convex program in the case of a complete matrix, and to a fixed-point problem in the case of an incomplete matrix. The priority vector minimizing the divergence also has the property that the difference between the sum of the elements of A and the sum of the elements of B is n times the minimum of the divergence function, where n is the dimension of the problem. Thus, for two reasons, the value of the minimum of the divergence is a suitable measure of the inconsistency of the matrix A.
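The divergence in question can be sketched as follows. The generalized Kullback-Leibler form used below is our assumption for illustration; the key sanity check holds for any relative-entropy form: when A is fully consistent, i.e. a_ij = w_i/w_j exactly, the divergence at the true priority vector is zero, and it becomes positive once any entry is perturbed:

```python
import math

def kl_divergence(A, w):
    """Generalized KL divergence between a pairwise comparison matrix A and the
    consistent matrix B with b_ij = w_i / w_j (exact functional form assumed)."""
    n = len(A)
    d = 0.0
    for i in range(n):
        for j in range(n):
            b = w[i] / w[j]
            d += A[i][j] * math.log(A[i][j] / b) - A[i][j] + b
    return d

# A fully consistent 3x3 pairwise comparison matrix built from w = (1, 2, 4):
w = [1.0, 2.0, 4.0]
A = [[wi / wj for wj in w] for wi in w]
print(kl_divergence(A, w))  # 0.0 for a consistent matrix
```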
Abstract:
The contributions of this dissertation are the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in estimating motion, which stems from the shift-variance property of the wavelet transform.

The enormous data generated by digital videos call for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reasonably, hierarchical motion estimation by coarse-to-fine resolution refinement using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability.

Because most of the energy is concentrated in the low-resolution subbands while it decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It realizes possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of motion estimation of the level-refined method, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals for more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift-variance property of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains the motion consistency between the original frame and the decomposed subframes, improving as a consequence wavelet-domain video compression through shift-invariant motion estimation and compensation.
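The energy-concentration premise behind LRSC can be illustrated with a one-level Haar decomposition of a smooth frame (a generic NumPy sketch, not the dissertation's code): for smooth content, the LL subband carries almost all of the energy, which is why intrablocks and coarse motion search concentrate there.

```python
import numpy as np

def haar_1level(frame):
    """One-level 2D Haar decomposition into LL, LH, HL, HH subbands
    (orthonormal scaling, so total energy is preserved)."""
    a = frame[0::2, 0::2]; b = frame[0::2, 1::2]
    c = frame[1::2, 0::2]; d = frame[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

# A smooth synthetic "frame": a gentle 2D gradient.
x = np.linspace(0.0, 1.0, 64)
frame = np.outer(x, x)
ll, lh, hl, hh = haar_1level(frame)
energies = [float(np.sum(s ** 2)) for s in (ll, lh, hl, hh)]
print(energies)  # the LL subband dominates by orders of magnitude
```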
Abstract:
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel.

This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system. We explore this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels.

We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone. These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
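The cascade bound can be checked numerically. Min-entropy leakage under prior π is log2 of the ratio of posterior to prior vulnerability, V(π, C) = Σ_y max_x π_x C[x][y] and V(π) = max_x π_x, and the cascade of B and R is the matrix product BR. A small sketch with generic matrices chosen purely for illustration:

```python
import math

def min_entropy_leakage(prior, C):
    """Min-entropy leakage: log2(posterior vulnerability / prior vulnerability)."""
    n_out = len(C[0])
    post_v = sum(max(prior[x] * C[x][y] for x in range(len(prior)))
                 for y in range(n_out))
    prior_v = max(prior)
    return math.log2(post_v / prior_v)

prior = [0.5, 0.3, 0.2]
B = [[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]]       # first channel (3 inputs, 2 outputs)
R = [[0.9, 0.1], [0.4, 0.6]]                   # second channel (2 inputs, 2 outputs)
cascade = [[sum(B[x][z] * R[z][y] for z in range(2)) for y in range(2)]
           for x in range(3)]                  # A = B R, output of B fed into R

# The thesis's cascade bound: leakage of A = BR never exceeds leakage of B.
print(min_entropy_leakage(prior, cascade) <= min_entropy_leakage(prior, B))  # True
```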