932 results for multiscale entropy
Abstract:
Concept evaluation in the early phase of product development plays a crucial role in new product development, since it determines the direction of the subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. Managing this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method that combines information entropy theory and rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle vagueness in a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria. Composite performance values based on rough numbers are finally calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity throughout the decision-making process.
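As an illustration of the entropy-weighting step, the hedged sketch below implements the classic crisp entropy-weight method for criteria weights and a plain weighted-sum ranking; the paper's rough-number aggregation is not reproduced here, and all scores are invented.

```python
import numpy as np

def entropy_weights(X):
    """Classic (crisp) entropy-weight method for criteria weights.

    X: (m alternatives) x (n criteria) matrix of benefit-type scores.
    Criteria whose scores vary more across alternatives (lower entropy)
    receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                      # column-normalise to proportions
    k = 1.0 / np.log(m)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -k * plogp.sum(axis=0)                 # entropy of each criterion
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()

# Toy example: 4 design concepts scored on 3 criteria (invented numbers).
scores = np.array([
    [7.0, 6.5, 8.0],
    [8.0, 6.0, 6.5],
    [6.5, 7.5, 7.0],
    [7.5, 7.0, 6.0],
])
w = entropy_weights(scores)
ranking = scores @ w                            # simple weighted-sum ranking
print("criteria weights:", np.round(w, 3))
print("concept scores:  ", np.round(ranking, 3))
```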
Abstract:
Photoacoustic tomography (PAT) of genetically encoded probes allows imaging of targeted biological processes deep in tissues with high spatial resolution; however, high background signals from blood can limit the achievable detection sensitivity. Here we describe BphP1, a reversibly switchable nonfluorescent bacterial phytochrome with the most red-shifted absorption among genetically encoded probes, for use in multiscale photoacoustic imaging. BphP1 binds a heme-derived biliverdin chromophore and is reversibly photoconvertible between red and near-infrared light-absorption states. We combined single-wavelength PAT with efficient BphP1 photoswitching, which enabled differential imaging with substantially decreased background signals, enhanced detection sensitivity, increased penetration depth and improved spatial resolution. We monitored tumor growth and metastasis with ∼100-μm resolution at depths approaching 10 mm using photoacoustic computed tomography, and we imaged individual cancer cells with a suboptical-diffraction resolution of ∼140 nm using photoacoustic microscopy. This technology is promising for biomedical studies at several scales.
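As a hedged illustration of the differential-imaging idea (ON-state minus OFF-state frames cancel the non-switching blood background), the short sketch below assumes co-registered photoacoustic image stacks; it is not the authors' reconstruction pipeline.

```python
import numpy as np

def differential_pa_image(frames_on, frames_off):
    """Suppress non-switching background by subtracting averaged OFF-state
    frames from averaged ON-state frames (both stacks: n_frames x H x W)."""
    on = np.mean(np.asarray(frames_on, dtype=float), axis=0)
    off = np.mean(np.asarray(frames_off, dtype=float), axis=0)
    diff = on - off          # non-photoswitching absorbers (e.g. blood) cancel
    return np.clip(diff, 0, None)
```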
Abstract:
Gate-tunable two-dimensional (2D) material-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations and Poisson–Boltzmann equation methods to model penetration of the field effect through graphene in a metal–oxide–graphene–semiconductor (MOGS) QC, including quantifying the degree of "transparency" of the graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of the graphene quantum capacitance and the semiconductor capacitance, which allows us to predict the ranking for a variety of monolayer 2D materials according to their transparency to an electric displacement field as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carriers are electrons. Our findings reveal a general picture of operation modes and design rules for 2D-material-based QCs.
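A hedged, lumped-capacitor sketch of the transparency idea follows: it combines the textbook zero-temperature graphene quantum capacitance with an assumed semiconductor capacitance, and is far simpler than the paper's first-principles plus Poisson–Boltzmann treatment; the Fermi level and depletion width are invented for illustration.

```python
import numpy as np

E = 1.602176634e-19        # elementary charge, C
HBAR = 1.054571817e-34     # reduced Planck constant, J*s
V_F = 1.0e6                # graphene Fermi velocity, m/s (typical value)

def graphene_quantum_capacitance(e_fermi_ev):
    """Zero-temperature graphene quantum capacitance per unit area (F/m^2):
    C_q = e^2 * D(E_F), with D(E) = 2|E| / (pi * hbar^2 * v_F^2)."""
    e_f = abs(e_fermi_ev) * E
    dos = 2.0 * e_f / (np.pi * HBAR**2 * V_F**2)
    return E**2 * dos

def transparency(c_q, c_semi):
    """Crude charge-partition estimate: fraction of the gate-induced charge
    that penetrates to the semiconductor when graphene (C_q) and the
    semiconductor (C_semi) share the induced charge."""
    return c_semi / (c_q + c_semi)

# Example: E_F = 0.2 eV graphene over a ~100 nm Si depletion layer (invented values).
c_q = graphene_quantum_capacitance(0.2)
c_si = 11.7 * 8.854e-12 / 100e-9       # eps_Si / depletion width
print(f"C_q = {c_q * 1e2:.2f} uF/cm^2, transparency ~ {transparency(c_q, c_si):.3f}")
```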
Abstract:
The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
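For the first-order (i.i.d. letter) model with small alphabets and short words, the expected number of guesses can be computed exactly by enumerating all words; the hedged sketch below does that, and the paper's approximations are what make the large cases tractable.

```python
import numpy as np
from itertools import product

def average_guesses(letter_probs, word_length):
    """Exact expected number of guesses for i.i.d. letters (first-order model),
    guessing words in decreasing order of probability.  Exponential in word
    length, so only practical for small alphabets/lengths."""
    word_probs = [np.prod(p) for p in product(letter_probs, repeat=word_length)]
    word_probs = np.sort(word_probs)[::-1]          # most probable guessed first
    ranks = np.arange(1, len(word_probs) + 1)
    return float(np.sum(ranks * word_probs))

# Toy example: 3-letter alphabet with skewed probabilities, words of length 4.
print(average_guesses([0.6, 0.3, 0.1], 4))
```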
Abstract:
This Note aims at presenting a simple and efficient procedure to derive the structure of high-order corrector estimates for the homogenization limit applied to a semi-linear elliptic equation posed in perforated domains. Our working technique relies on monotone iterations combined with formal two-scale homogenization asymptotics. It can be adapted to handle more complex scenarios including for instance nonlinearities posed at the boundary of perforations and the vectorial case, when the model equations are coupled only through the nonlinear production terms.
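For orientation, the generic shape of such two-scale expansions and the corresponding corrector estimates is sketched below in illustrative notation; the norms and exponents used in the Note itself may differ.

```latex
% Generic two-scale ansatz and corrector estimate (illustrative notation):
\begin{align*}
  u^{\varepsilon}(x) &\approx u_0\!\left(x,\tfrac{x}{\varepsilon}\right)
    + \varepsilon\, u_1\!\left(x,\tfrac{x}{\varepsilon}\right)
    + \varepsilon^{2} u_2\!\left(x,\tfrac{x}{\varepsilon}\right) + \cdots,\\
  \Big\| u^{\varepsilon} - \sum_{k=0}^{K} \varepsilon^{k}
    u_k\!\left(\cdot,\tfrac{\cdot}{\varepsilon}\right)
    \Big\|_{H^{1}(\Omega^{\varepsilon})} &\le C\, \varepsilon^{\alpha(K)},
\end{align*}
% where the exponent \alpha(K) improves as more correctors K are retained.
```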
Abstract:
The use of human brain electroencephalography (EEG) signals for automatic person identification has been investigated for a decade. It has been found that the performance of an EEG-based person identification system depends strongly on which features are extracted from multi-channel EEG signals. Linear methods such as Power Spectral Density and Autoregressive Model have been used to extract EEG features. However, these methods assume that EEG signals are stationary. In fact, EEG signals are complex, non-linear, non-stationary, and random in nature. In addition, other factors such as brain condition or human characteristics may affect the performance; however, these factors have not been investigated and evaluated in previous studies. It has been found in the literature that entropy is used to measure the randomness of non-linear time series data. Entropy is also used to measure the level of chaos of brain-computer interface systems. Therefore, this thesis proposes to study the role of entropy in non-linear analysis of EEG signals to discover new features for EEG-based person identification. Five different entropy methods, including Shannon Entropy, Approximate Entropy, Sample Entropy, Spectral Entropy, and Conditional Entropy, are proposed to extract entropy features that are used to evaluate the performance of EEG-based person identification systems and the impacts of epilepsy, alcohol, age and gender characteristics on these systems. Experiments were performed on the Australian EEG and Alcoholism datasets. The experimental results show that, in most cases, the proposed entropy features yield very fast person identification with comparable accuracy because the feature dimension is low. In real-life security operations, timely response is critical. The experimental results also show that epilepsy, alcohol, age and gender characteristics have impacts on EEG-based person identification systems.
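As a hedged sketch of two of the proposed feature types, the snippet below computes the Shannon entropy of the amplitude histogram and a plain sample entropy for a 1-D trace; the parameter choices (m = 2, r = 0.2 x std) are conventional defaults, not values taken from the thesis.

```python
import numpy as np

def shannon_entropy(signal, bins=64):
    """Shannon entropy (bits) of the amplitude histogram of a 1-D signal."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def sample_entropy(signal, m=2, r=None):
    """Plain O(N^2) sample entropy: -ln(A/B), where B and A count template
    pairs of length m and m+1 whose Chebyshev distance is below r."""
    x = np.asarray(signal, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n_templates = len(x) - m            # same template count for m and m+1
    def matches(dim):
        t = np.array([x[i:i + dim] for i in range(n_templates)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d <= r) - len(t)  # exclude self-matches
    return float(-np.log(matches(m + 1) / matches(m)))

# Toy usage on a synthetic "EEG-like" trace; one scalar feature per channel.
rng = np.random.default_rng(0)
trace = np.sin(np.linspace(0, 30, 1000)) + 0.5 * rng.standard_normal(1000)
print(np.round([shannon_entropy(trace), sample_entropy(trace)], 3))
```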
Abstract:
Hybrid halide perovskites have emerged as promising active constituents of next-generation solution-processable optoelectronic devices. During their assembly, perovskite components undergo very complex dynamic equilibria starting in solution and progressing throughout film formation. Finding a methodology to control and affect these equilibria, which are responsible for the unique morphological diversity observed in perovskite films, constitutes a fundamental step towards reproducible material processability. Here we propose the exploitation of polymer matrices as cooperative assembling components of novel perovskite CH3NH3PbI3 : polymer composites, in which the control of the chemical interactions in solution allows a predictable tuning of the final film morphology. We reveal that the nature of the interactions between perovskite precursors and polymer functional groups, probed by Nuclear Magnetic Resonance (NMR) spectroscopy and Dynamic Light Scattering (DLS) techniques, allows the control of aggregates in solution whose characteristics are strictly maintained in the solid film, and permits the formation of nanostructures that are inaccessible to conventional perovskite depositions. These results demonstrate how the fundamental chemistry of perovskite precursors in solution has a paramount influence on controlling and monitoring the final morphology of CH3NH3PbI3 (MAPbI3) thin films, foreseeing the possibility of designing perovskite : polymer composites targeting diverse optoelectronic applications.
Abstract:
Multiscale reinforcement of polymer matrix composites manufactured by twin-screw extrusion, using carbon microfibers and multi-walled carbon nanotubes, is investigated for enhanced mechanical and thermal properties, with an emphasis on the use of a diverging flow in the die for fluid-mechanical fiber manipulation. Using fillers at different length scales (microscale and nanoscale), synergistic combinations have been identified that produce distinct mechanical and thermal behavior. Fiber manipulation has been demonstrated experimentally and computationally, and has been shown to enhance thermal conductivity significantly. Finally, a new physics-driven predictive model for thermal conductivity has been developed based on fiber orientation during flow, which is shown to successfully capture composite thermal conductivity.
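As a loosely related illustration only, the sketch below uses a generic orientation-weighted rule-of-mixtures estimate of axial conductivity; it is not the physics-driven model developed in this work, and all material values are invented.

```python
def axial_conductivity_estimate(k_matrix, k_fiber, vol_frac, cos2_theta):
    """Simple orientation-weighted mixing estimate of composite conductivity
    along the flow axis: aligned fibre content follows a parallel rule of
    mixtures, misaligned content a series rule.  cos2_theta is the
    orientation factor <cos^2(theta)> of the fibres w.r.t. that axis."""
    k_parallel = (1 - vol_frac) * k_matrix + vol_frac * k_fiber
    k_series = 1.0 / ((1 - vol_frac) / k_matrix + vol_frac / k_fiber)
    return cos2_theta * k_parallel + (1 - cos2_theta) * k_series

# Illustrative values: polymer matrix ~0.2 W/mK, carbon fibre ~100 W/mK, 10 vol%.
for a in (0.33, 0.6, 0.9):        # random -> increasingly flow-aligned fibres
    print(a, round(axial_conductivity_estimate(0.2, 100.0, 0.10, a), 2))
```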
Abstract:
The main purpose of this study is to present an alternative benchmarking approach that can be used by national regulators of utilities. It is widely known that the lack of sizeable data sets limits the choice of the benchmarking method and the specification of the model used to set price controls within incentive-based regulation. Ill-posed frontier models are a problem that some national regulators have been facing. Maximum entropy estimators are useful in the estimation of such ill-posed models, in particular models exhibiting small sample sizes, collinearity and non-normal errors, as well as models where the number of parameters to be estimated exceeds the number of observations available. The empirical study uses the sample data employed by the Portuguese regulator of the electricity sector to set the parameters for the electricity distribution companies in the 2012-2014 regulatory period. DEA and maximum entropy methods are applied and the efficiency results are compared.
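A minimal sketch of the DEA side of the comparison is given below: the input-oriented CCR envelopment LP solved per company with scipy. The maximum entropy estimator itself is not reproduced here, and the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR (constant returns to scale) efficiency scores.
    X: inputs (n_dmus x n_inputs), Y: outputs (n_dmus x n_outputs).
    For each DMU o: min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
    sum_j lam_j y_j >= y_o, lam >= 0."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, n_in = X.shape
    n_out = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lam_1..lam_n]
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])     # sum_j lam_j x_ij - theta x_io <= 0
        A_out = np.hstack([np.zeros((n_out, 1)), -Y.T])   # -sum_j lam_j y_rj <= -y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(n_in), -Y[o]]
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Toy example: 5 companies, 2 inputs (opex, network km), 1 output (energy delivered).
X = np.array([[100, 50], [120, 60], [90, 45], [150, 80], [110, 52]], dtype=float)
Y = np.array([[500], [520], [480], [600], [530]], dtype=float)
print(np.round(dea_ccr_input(X, Y), 3))
```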
Abstract:
This paper deals with the development and analysis of asymptotically stable and consistent schemes in the joint quasi-neutral and fluid limits for the collisional Vlasov-Poisson system. In these limits, classical explicit schemes suffer from time step restrictions due to the small plasma period and Knudsen number. To solve this problem, we propose a new scheme that is stable for time steps independent of the small-scale dynamics and has a computational cost comparable to that of standard explicit schemes. In addition, this scheme reduces automatically to consistent discretizations of the underlying asymptotic systems. In this first work on the subject, we propose a first-order in time scheme and perform a linear stability analysis for such problems. The proposed framework permits extending this approach to high-order schemes in future work. We finally show the capability of the method to deal with small scales through numerical experiments.
Abstract:
This paper presents a semi-parametric algorithm for parsing football video structures. The approach works as two interleaved processes that closely collaborate towards a common goal. The core part of the proposed method performs fast automatic football video annotation by examining the enhanced entropy variance within a series of shot frames. The entropy is extracted from the Hue component of the HSV color space, not as a global feature but in the spatial domain, to identify regions within a shot that characterize a certain activity during the shot period. The second part of the algorithm identifies dominant color regions that could represent players and the playfield for further activity recognition. Experimental results show that the proposed football video segmentation algorithm performs with high accuracy.
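A hedged sketch of a per-block hue-entropy feature is shown below, assuming OpenCV's 0-179 hue encoding; the paper's exact entropy-variance measure may differ.

```python
import numpy as np
import cv2

def blockwise_hue_entropy(frame_bgr, grid=(8, 8), bins=32):
    """Shannon entropy of the hue histogram computed per spatial block, so
    regions with different content (crowd, playfield, close-ups) produce
    distinct local entropy values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    h, w = hue.shape
    bh, bw = h // grid[0], w // grid[1]
    ent = np.zeros(grid)
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = hue[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, 180))  # OpenCV hue: 0..179
            p = hist / hist.sum()
            p = p[p > 0]
            ent[r, c] = -np.sum(p * np.log2(p))
    return ent

# Frames whose block-entropy map changes sharply can mark shot or activity boundaries.
```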