962 results for Maximum independent set
Abstract:
Mass balance between the metal and the electrolytic solution, separated by a moving interface, during stable pit growth results in a set of governing equations that are solved for the concentration field and the interface position (pit boundary evolution). The interface experiences a jump discontinuity in metal concentration. The extended finite-element method (XFEM) handles this jump discontinuity through a discontinuous-derivative enrichment formulation, eliminating the need for a front-conforming mesh and for re-meshing after each time step, as required in the conventional finite-element method. However, the interface location must be known before the governing equations for the concentration field can be solved, so a numerical technique, the level set method, is used to track the interface explicitly and update it over time. The level set method is chosen because it is independent of the shape and location of the interface. Thus, a combined XFEM and level set method is developed in this paper. A numerical analysis of pitting corrosion of stainless steel 304 is presented. The proposed model is validated by comparing the numerical results with experimental results, exact solutions and other approximate solutions. An empirical model for pitting potential is also derived from the finite-element results. Studies show that the pitting profile depends to a large extent on factors such as ion concentration, solution pH and temperature. Studying the individual and combined effects of these factors on pitting potential is worthwhile, as pitting potential directly influences the corrosion rate.
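The level set machinery described above can be illustrated with a minimal one-dimensional sketch (not the paper's corrosion model): the interface is the zero crossing of a field phi, advanced with an upwind scheme for phi_t + F|grad phi| = 0. The grid, the constant front speed, and all names are illustrative assumptions.

```python
import numpy as np

def advance_interface(phi, speed, dx, dt):
    """One explicit upwind step of the level set equation
    phi_t + F |grad phi| = 0 for a front moving with normal speed F >= 0.
    The interface is the zero crossing of phi."""
    dminus = np.diff(phi, prepend=phi[0]) / dx   # backward difference
    dplus = np.diff(phi, append=phi[-1]) / dx    # forward difference
    # Godunov upwind gradient magnitude for F >= 0.
    grad = np.sqrt(np.maximum(dminus, 0.0)**2 + np.minimum(dplus, 0.0)**2)
    return phi - dt * speed * grad

# Hypothetical 1D "pit boundary" at x = 0.3 advancing with unit speed.
x = np.linspace(0.0, 1.0, 101)
phi = x - 0.3                      # signed distance to the interface
for _ in range(20):                # advance to t = 0.1
    phi = advance_interface(phi, speed=1.0, dx=x[1] - x[0], dt=0.005)
front = x[np.argmin(np.abs(phi))]  # zero crossing, expected near x = 0.4
```

Because the field stays signed-distance-like, the interface never needs an explicit parametrization, which is the property the combined XFEM/level-set scheme exploits.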
On Precoding for Constant K-User MIMO Gaussian Interference Channel With Finite Constellation Inputs
Abstract:
This paper considers linear precoding for the constant channel-coefficient K-user MIMO Gaussian interference channel (MIMO GIC), where each transmitter-i (Tx-i) must send d(i) independent complex symbols per channel use, drawn from fixed finite constellations with uniform distribution, to receiver-i (Rx-i) for i = 1, 2, ..., K. We define the maximum rate achieved by Tx-i using any linear precoder, as the signal-to-noise ratio (SNR) tends to infinity with the interference channel coefficients set to zero, to be the constellation constrained saturation capacity (CCSC) for Tx-i. We derive a high-SNR approximation for the rate achieved by Tx-i when interference is treated as noise; this rate is given by the mutual information between Tx-i and Rx-i, denoted I[X_i; Y_i]. A set of necessary and sufficient conditions on the precoders under which I[X_i; Y_i] tends to the CCSC for Tx-i is derived. Interestingly, the precoders designed for interference alignment (IA) satisfy these necessary and sufficient conditions. Furthermore, we propose gradient-ascent-based algorithms to optimize the sum rate achieved by precoding with finite constellation inputs and treating interference as noise. A simulation study using the proposed algorithms for a three-user MIMO GIC with two antennas at each node, d(i) = 1 for all i, and BPSK and QPSK inputs shows more than a 0.1-b/s/Hz gain in the ergodic sum rate over that yielded by precoders obtained from some known IA algorithms at moderate SNRs.
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by bootstrap, which is also used to draw inferences about component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing a real as well as a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that in this model, application of the bootstrap yields significant improvement over the simple maximum likelihood estimates.
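The idea of refining maximum likelihood estimates by bootstrap can be sketched generically. The toy below applies a parametric-bootstrap bias correction to the (biased) MLE of a log-normal scale parameter; it is a stand-in illustration, not the article's EM-based procedure, and the sample size and parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_mle(sample):
    """MLEs of (mu, sigma) for log-normal data: the mean and the 1/n
    standard deviation of the log-lifetimes (the sigma MLE is biased low)."""
    logs = np.log(sample)
    return logs.mean(), logs.std()

def bootstrap_bias_correct(sample, n_boot=2000):
    """Parametric bootstrap bias correction: theta_bc = theta_hat - bias,
    where the bias is estimated from resamples drawn at the fitted values."""
    mu_hat, sig_hat = lognormal_mle(sample)
    n = len(sample)
    boots = np.array([lognormal_mle(rng.lognormal(mu_hat, sig_hat, n))
                      for _ in range(n_boot)])
    bias = boots.mean(axis=0) - np.array([mu_hat, sig_hat])
    return np.array([mu_hat, sig_hat]) - bias

data = rng.lognormal(mean=1.0, sigma=0.5, size=30)   # made-up lifetimes
mu_bc, sig_bc = bootstrap_bias_correct(data)
```

The correction pushes the downward-biased sigma estimate back up, which is the same mechanism by which the bootstrap improves on the raw MLEs in the article's small-sample setting.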
Abstract:
Facial expressions are the most expressive way to display emotions. Many algorithms have been proposed that employ a particular set of people (usually a database) to both train and test their model. This paper focuses on the challenging task of database-independent emotion recognition, which is a generalized case of subject-independent emotion recognition. The emotion recognition system employed in this work is a Meta-Cognitive Neuro-Fuzzy Inference System (McFIS). McFIS has two components: a neuro-fuzzy inference system, which is the cognitive component, and a self-regulatory learning mechanism, which is the meta-cognitive component. The meta-cognitive component monitors the knowledge in the neuro-fuzzy inference system and efficiently decides what to learn, when to learn, and how to learn from the training samples. For each sample, McFIS decides whether to delete the sample without learning it, use it to add or prune rules or update the network parameters, or reserve it for future use. This helps the network avoid over-training and, as a result, improves its generalization performance on untrained databases. In this study, we extract pixel-based emotion features from the well-known JAFFE (Japanese Female Facial Expression) and TFEID (Taiwanese Facial Expression Image) databases. Two sets of experiments are conducted. First, we study the individual performance of McFIS on each database using 5-fold cross-validation. Next, in order to study generalization performance, McFIS trained on the JAFFE database is tested on TFEID and vice versa. The performance comparison in both experiments against an SVM classifier gives promising results.
Abstract:
In this paper, we consider spatial modulation (SM) operating in a frequency-selective single-carrier (SC) communication scenario and propose zero-padding instead of the cyclic prefix considered in the existing literature. We show that the zero-padded single-carrier (ZP-SC) SM system offers full multipath diversity under maximum-likelihood (ML) detection, unlike the cyclic-prefix-based SM system. Furthermore, we show that the order of ML detection complexity in our proposed ZP-SC SM system is independent of the frame length and depends only on the number of multipath links between the transmitter and the receiver. Thus, the zero-padding applied in the SC SM system has two advantages over the cyclic prefix: 1) it achieves full multipath diversity, and 2) it imposes a relatively low ML detection complexity. Furthermore, we extend the partial interference cancellation receiver (PIC-R) proposed by Guo and Xia for the detection of space-time block codes (STBCs) in order to convert the ZP-SC system into a set of narrowband subsystems experiencing flat fading. We show that full-rank STBC transmissions over these subsystems achieve full transmit, receive and multipath diversity under the PIC-R. Furthermore, we show that the ZP-SC SM system achieves receive and multipath diversity under the PIC-R at a detection complexity order that is the same as that of an SM system in a flat-fading scenario. Our simulation results demonstrate that the symbol error ratio performance of the proposed linear receiver for the ZP-SC SM system is significantly better than that of SM in cyclic-prefix-based orthogonal frequency division multiplexing, as well as that of SM in cyclic-prefixed and zero-padded single-carrier systems relying on zero-forcing/minimum mean-squared error equalizer based receivers.
Abstract:
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging, and it becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is approximately 200-fold faster (for large datasets) than existing CPU-based systems. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
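As a rough CPU-only illustration of MAP deconvolution with a quadratic potential (the paper's GPU/CUDA implementation and exact functional are not reproduced here), one can minimize a least-squares data term plus a quadratic (Tikhonov-style) penalty by gradient descent in the Fourier domain. The PSF, penalty weight, and 1D toy scene are assumptions for the sketch.

```python
import numpy as np

def map_deconvolve(blurred, psf, beta=0.01, step=0.5, n_iter=200):
    """Gradient descent on ||h * f - g||^2 + beta ||f||^2, i.e. a MAP
    estimate with a quadratic potential acting as the prior.
    Convolutions are circular, done in the Fourier domain."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    f = blurred.copy()
    for _ in range(n_iter):
        residual = np.fft.ifft(H * np.fft.fft(f)).real - blurred
        grad = np.fft.ifft(np.conj(H) * np.fft.fft(residual)).real + beta * f
        f = f - step * grad
    return f

# Toy 1D scene: a point source blurred by a small box PSF.
truth = np.zeros(64)
truth[32] = 1.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.fft.ifft(np.fft.fft(psf, 64) * np.fft.fft(truth)).real
restored = map_deconvolve(blurred, psf)   # peak sharpened back near index 32
```

Each iteration is a handful of independent element-wise and FFT operations, which is exactly the structure that maps well onto the many-thread GPU execution model the abstract describes.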
Abstract:
Homogeneous temperature regions are necessary for use in hydrometeorological studies. The regions are often delineated by analysing statistics derived from time series of maximum, minimum or mean temperature, rather than the attributes influencing temperature. This practice cannot yield meaningful regions in data-sparse areas. Further, independent validation of the delineated regions for homogeneity in temperature is not possible, as the temperature records themselves form the basis for arriving at the regions. To address these issues, a two-stage clustering approach is proposed in this study to delineate homogeneous temperature regions. The first stage of the approach involves (1) determining the correlation structure between observed temperature over the study area and possible predictors (large-scale atmospheric variables) influencing the temperature and (2) using the correlation structure as the basis for delineating sites in the study area into clusters. The second stage involves analysis of each cluster to (1) identify potential predictors (large-scale atmospheric variables) influencing temperature at sites in the cluster and (2) partition the cluster into homogeneous fuzzy temperature regions using the identified potential predictors. Application of the proposed approach to India yielded 28 homogeneous regions, which were demonstrated to be effective when compared to an alternative set of 6 regions previously delineated over the study area. Intersite cross-correlations of monthly maximum and minimum temperatures in the existing regions were found to be weak and negative for several months, which is undesirable. This problem was not found in the regions delineated using the proposed approach. The utility of the proposed regions in estimating potential evapotranspiration for ungauged locations in the study area is demonstrated.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix into a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for the corresponding receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate to the subchannels.
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for the orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays and the number of identifiable paths is up to O(M^2), theoretically. With the delay information, a MMSE estimator for frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
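The difference co-array idea mentioned above, obtaining on the order of M^2 virtual lags from M physical pilots, can be sketched in a few lines. The pilot positions below are a hypothetical perfect difference basis chosen for illustration, not taken from the thesis.

```python
def difference_coarray(positions):
    """All pairwise differences p - q of pilot positions: M physical pilots
    yield up to M*(M-1) + 1 distinct virtual lags (the difference co-array)."""
    return sorted({p - q for p in positions for q in positions})

# Hypothetical pilot placement: {0, 1, 4, 6} is a perfect difference basis,
# so 4 pilots cover every lag from -6 to 6 without gaps.
pilots = [0, 1, 4, 6]
lags = difference_coarray(pilots)   # [-6, -5, ..., 5, 6]
```

The contiguous virtual aperture is what lets subspace methods such as MUSIC and ESPRIT resolve up to O(M^2) paths from only M physical pilot tones.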
Abstract:
The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?
We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information that can be gleaned about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean-sea ice-ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice-ocean interactions over the Antarctic continental shelves, and show that a large contribution of the LGM salinity stratification can be explained through lower ocean temperature. In order to extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM, but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, in comparison to traditional squeezing methods, and show that despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
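The Markov chain Monte Carlo recovery of parameters from pore fluid profiles can be illustrated with a toy random-walk Metropolis sampler. The linear "profile" forward model, the noise level, and the flat priors below are placeholders for the real diffusion physics, chosen only to show the sampling mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(logp, theta0, n_samples=5000, step=0.02):
    """Random-walk Metropolis-Hastings sampler."""
    theta = np.array(theta0, dtype=float)
    lp = logp(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy forward model: pore fluid value varies linearly with depth,
# value = a + b * depth (a stand-in for the real diffusion model).
depths = np.linspace(0.0, 1.0, 20)
obs = 1.0 - 0.5 * depths + 0.05 * rng.normal(size=20)   # synthetic profile

def log_posterior(theta, sigma=0.05):
    a, b = theta
    return -0.5 * np.sum((a + b * depths - obs)**2) / sigma**2  # flat priors

chain = metropolis(log_posterior, [obs[0], 0.0])
a_est, b_est = chain[2000:].mean(axis=0)   # posterior means after burn-in
```

The retained chain characterizes the full posterior rather than a single best fit, which is why the approach can map out the whole solution space of bottom water histories for an ill-posed inversion.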
Abstract:
This thesis consists of two independent chapters. The first chapter deals with universal algebra. It is shown, in von Neumann-Bernays-Gödel set theory, that free images of partial algebras exist in arbitrary varieties. It follows from this, as set-complete Boolean algebras form a variety, that there exist free set-complete Boolean algebras on any class of generators. This appears to contradict a well-known result of A. Hales and H. Gaifman, stating that there is no free complete Boolean algebra on any infinite set of generators. However, it does not, as the algebras constructed in this chapter are allowed to be proper classes. The second chapter deals with positive elementary inductions. It is shown that, in any reasonable structure M, the inductive closure ordinal of M is admissible, by showing that it is equal to an ordinal measuring the saturation of M. This is also used to show that non-recursively saturated models of the theories ACF, RCF, and DCF have inductive closure ordinals greater than ω.
Abstract:
An efficient one-step digit-set-restricted modified signed-digit (MSD) adder based on symbolic substitution is presented. In this technique, carry propagation is avoided by introducing reference digits that restrict the intermediate carry and sum digits to {-1, 0} and {0, 1}, respectively. The proposed technique requires significantly fewer minterms and reduces system complexity compared to previously reported one-step MSD addition techniques. An incoherent correlator based on an optoelectronic shared content-addressable memory processor is suggested to perform the addition operation. In this technique, only one set of minterms needs to be stored, independent of the operand length. (C) 2002 Society of Photo-Optical Instrumentation Engineers.
Abstract:
We study the change in the degree of coherence of a partially coherent electromagnetic beam (the so-called electromagnetic Gaussian Schell-model beam). It is shown analytically that, for a fixed set of source parameters and under a particular atmospheric turbulence model, an electromagnetic Gaussian Schell-model beam propagating through atmospheric turbulence reaches its maximum value of coherence after the beam propagates a particular distance, and the effective width of the spectral degree of coherence also attains its maximum value. This phenomenon is independent of the turbulence model used. The results are illustrated by numerical curves. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
The present work deals with the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite, homogeneous, isotropic, nonmagnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e. interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ and the linear dimensions of a particle a and of the region occupied by the particles D. It is found that for constant λ/a, D is proportional to λ, and that |Δχ|, where Δχ is the difference in dielectric susceptibility between particle and medium, has to lie within a certain range.
The total scattered field is obtained as a series whose terms represent the corresponding multiple scattering orders, the first term being the single scattering term. The ensemble average of the total scattered intensity is then obtained as a series that does not involve terms due to products between terms of different orders. Thus the waves corresponding to different orders are independent and their Stokes parameters add.
The second and third order intensity terms are computed explicitly. The method used suggests a general approach for computing any order. It is found that, in general, the first order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20). For large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.
The first order polarization of the scattered wave is determined. The ensemble average of the Stokes parameters of the scattered wave is computed explicitly for the second order; a similar method can be applied to any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized, then the first order scattered wave is elliptically polarized, but in the Θ = π/2 direction it is linearly polarized. If the incident wave is circularly polarized, the first order scattered wave is elliptically polarized except in the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized, the first order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, no matter what the polarization of the incident wave. However, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to those of the second order.
If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_im·D, where k_im is the imaginary part of the wave vector k and D is a linear characteristic dimension of the region occupied by the particles. Thus, for moderately extended regions and small losses, (k_im·D)^2 ≪ 1 and the lossy character of the medium does not alter the results of the lossless case. In general, the presence of losses tends to reduce the forward scattering.
Abstract:
This study evaluates the quality of pulp tissue removal after chemomechanical preparation performed with the single-file technique described by Ghassan Yared. No research on the results of this technique has yet been published. This study compares the percentage of remaining pulp tissue in oval and round root canals of freshly extracted mandibular incisors that had vital pulp and were stored in 10% formalin. Two techniques were compared: ProTaper Universal and the F2 single-file technique. After rigorous selection, forty-eight teeth with vital pulp and an indication for extraction were prepared, classified into oval and round canals, randomly divided into 4 groups, and instrumented with the two techniques. The control group, with 12 specimens, received no intervention. G1 (n=12): oval canals instrumented with the ProTaper Universal technique; G2 (n=12): oval canals instrumented with the F2 single-file technique; G3 (n=12): round canals instrumented with the ProTaper Universal technique; G4 (n=12): round canals instrumented with the F2 single-file technique. Cross sections were then prepared for histological evaluation. The amount of remaining pulp tissue was assessed digitally. Preliminary analysis of the pooled raw data from all experimental groups revealed a normal distribution pattern according to the Kolmogorov-Smirnov test. The analysis was then performed, and the raw data were evaluated using non-parametric methods: the Kruskal-Wallis H test. The minimum percentage of remaining tissue was 0% and the maximum was 37.78% across all groups. Values for the amount of remaining pulp tissue ranged from 0 to 43.47% m2. The Kruskal-Wallis H test revealed no differences among the most apical sections (p > 0.05). However, a significant difference was found between the most apical sections and the middle-third section (p < 0.05).
Significant differences were also found when round canals were compared with oval canals, regardless of the instrumentation technique used (p < 0.05). However, between the two instrumentation techniques studied, there was no statistically significant difference for either oval or round canals (p > 0.05). The purpose of this study is to prompt reflection on the actual need for a large number of instruments for complete root canal preparation, given that neither technique was able to completely debride the root canal space.
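The Kruskal-Wallis H test used above can be sketched from first principles: the raw measurements are replaced by their ranks, and the statistic compares mean ranks across groups. The toy data are invented; this minimal version omits the tie correction.

```python
import numpy as np

def kruskal_h(groups):
    """Kruskal-Wallis H statistic for k independent groups
    (plain ranks; no tie correction in this toy version)."""
    data = np.concatenate(groups)
    n = len(data)
    ranks = np.empty(n)
    ranks[np.argsort(data)] = np.arange(1, n + 1)   # rank 1 = smallest value
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * r.mean()**2
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# Two clearly shifted groups: H = 16/3 ~ 5.33, above the chi-square
# critical value of 3.84 (df = 1, alpha = 0.05), so the shift is detected.
h = kruskal_h([np.array([1.0, 2, 3, 4]), np.array([10.0, 11, 12, 13])])
```

Because only ranks enter the statistic, the test needs no normality assumption, which is why it suits percentage-of-remaining-tissue data like those analyzed here.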
Abstract:
Seasonal trawling was conducted randomly in coastal (depths of 4.6–17 m) waters from St. Augustine, Florida, (29.9°N) to Winyah Bay, South Carolina (33.1°N), during 2000–03, 2008–09, and 2011 to assess annual trends in the relative abundance of sea turtles. A total of 1262 loggerhead sea turtles (Caretta caretta) were captured in 23% (951) of 4207 sampling events. Capture rates (overall and among prevalent 5-cm size classes) were analyzed through the use of a generalized linear model with log link function for the 4097 events that had complete observations for all 25 model parameters. Final models explained 6.6% (70.1–75.0 cm minimum straight-line carapace length [SCLmin]) to 14.9% (75.1–80.0 cm SCLmin) of deviance in the data set. Sampling year, geographic subregion, and distance from shore were retained as significant terms in all final models, and these terms collectively accounted for 6.2% of overall model deviance (range: 4.5–11.7% of variance among 5-cm size classes). We retained 18 parameters only in a subset of final models: 4 as exclusively significant terms, 5 as a mixture of significant or nonsignificant terms, and 9 as exclusively nonsignificant terms. Four parameters also were dropped completely from all final models. The generalized linear model proved appropriate for monitoring trends for this data set that was laden with zero values for catches and was compiled for a globally protected species. Because we could not account for much model deviance, metrics other than those examined in our study may better explain catch variability and, once elucidated, their inclusion in the generalized linear model should improve model fits.
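A generalized linear model with log link of the kind described, a count response with many zeros regressed on covariates such as distance from shore, can be sketched with a few lines of iteratively reweighted least squares. The Poisson family, the single covariate, and the coefficient values below are illustrative assumptions; the study's exact model terms are not reproduced.

```python
import numpy as np

def poisson_glm(X, y, n_iter=50):
    """Poisson GLM with log link, E[y] = exp(X @ beta), fitted by
    iteratively reweighted least squares (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # mean under the log link
        z = eta + (y - mu) / mu          # working response
        W = mu                           # Poisson: variance = mean
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Made-up catch data: counts fall off with distance from shore, many zeros.
rng = np.random.default_rng(3)
n = 500
dist = rng.uniform(0.0, 10.0, n)
X = np.column_stack([np.ones(n), dist])
true_beta = np.array([0.5, -0.2])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_glm(X, y)   # should land near (0.5, -0.2)
```

The log link keeps fitted catch rates positive even when most observed catches are zero, which is the property that makes this model family suitable for the zero-laden survey data described in the abstract.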