884 results for Matrix geometric


Relevância:

70.00%

Publicador:

Resumo:

This thesis describes an application developed for solving Quasi Birth Death processes. The program is so far unique of its kind; it can solve a range of problems and is needed for the analysis of communication systems. The application is described and specified. A brief description of another application for solving Quasi Birth Death process problems is also given.
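The core computation in such a QBD solver can be sketched briefly. For a level-independent QBD with level-up, local, and level-down generator blocks A0, A1, A2, the rate matrix R is the minimal solution of A0 + R·A1 + R²·A2 = 0, usually found by fixed-point iteration. The sketch below is illustrative, not taken from the thesis; the scalar M/M/1 check (arrival rate 1, service rate 2) is chosen because there R reduces to the utilization ρ = 0.5.

```python
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Minimal solution of A0 + R A1 + R^2 A2 = 0 by fixed-point iteration."""
    R = np.zeros_like(A0, dtype=float)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_next = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("R iteration did not converge")

# Scalar sanity check: M/M/1 viewed as a QBD (lam = 1, mu = 2), where R = rho
A0 = np.array([[1.0]])   # level up: arrival
A1 = np.array([[-3.0]])  # local: -(lam + mu)
A2 = np.array([[2.0]])   # level down: service
R = solve_R(A0, A1, A2)
```

The stationary level vectors then follow the matrix-geometric form π_n = π_1 R^(n-1).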

Relevância:

60.00%

Publicador:

Resumo:

This thesis presents the extension of a modified matrix-geometric technique to a more general queue. The queueing system consists of several queues with finite capacities. The splitting of PH-type distributions is also studied. The structure corresponding to the resulting Markov chain consists of independent matrices with a QBD structure, and some finite-state cases are treated as well. The main result of this thesis is the representation of the system in matrix-geometric form, obtained by modifying the matrix-geometric solution.

Relevância:

60.00%

Publicador:

Resumo:

This study analyses some queueing models related to the N-policy: the server is turned on when the queue size reaches a certain number N and turned off when the system is empty, and the optimal value of N is determined. In the first model the operating policy is the usual N-policy but with random N; model 2 considers a system similar to the one described here. The study also analyses a tandem queue with two servers, where the first server is assumed to be a specialized one. In a queueing system under the N-policy, the server remains on vacation until N units accumulate for the first time after it becomes idle. A modified version of the N-policy for an M|M|1 queueing system is considered here, whose novel feature is that a busy service unit prevents the access of new customers to servers further down the line. Finally, a queueing model consisting of two servers connected in series with a finite intermediate waiting room of capacity k is dealt with, again assuming that server I is a specialized server. For this model, the steady-state probability vector and the stability condition are obtained using the matrix-geometric method.
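For the plain M/M/1 queue under the classical (deterministic-N) policy, the mean number in the system is known to be ρ/(1−ρ) + (N−1)/2. The sketch below checks this textbook baseline numerically by solving a truncated version of the CTMC; it is only the standard model, not the random-N or tandem variants analysed in the study, and all rate values are illustrative.

```python
import numpy as np

lam, mu, N, K = 1.0, 2.0, 5, 200   # arrival/service rates, threshold, truncation
n_states = N + K                   # off states 0..N-1, on states 1..K
Q = np.zeros((n_states, n_states))

def idx_off(j): return j           # j customers waiting, server off (0 <= j < N)
def idx_on(n): return N + n - 1    # n customers, server on (1 <= n <= K)

for j in range(N - 1):             # accumulation phase: arrivals only
    Q[idx_off(j), idx_off(j + 1)] = lam
Q[idx_off(N - 1), idx_on(N)] = lam # the N-th arrival switches the server on
for n in range(1, K + 1):
    if n < K:
        Q[idx_on(n), idx_on(n + 1)] = lam
    dest = idx_on(n - 1) if n > 1 else idx_off(0)
    Q[idx_on(n), dest] = mu        # service; the last departure switches it off
np.fill_diagonal(Q, -Q.sum(axis=1))

# stationary distribution: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(n_states)])
b = np.zeros(n_states + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

counts = np.array(list(range(N)) + list(range(1, K + 1)))
L_num = float(pi @ counts)                  # numerical mean number in system
rho = lam / mu
L_formula = rho / (1 - rho) + (N - 1) / 2   # textbook result, = 3.0 here
```

With ρ = 0.5 the truncation at K = 200 contributes negligible error, so the numerical mean matches the closed form.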

Relevância:

60.00%

Publicador:

Resumo:

In this thesis we study the effect of rest periods in queueing systems without exhaustive service and in inventory systems where the server takes rest periods. Most of the work on vacation models deals with exhaustive service; only recently have results appeared for systems without exhaustive service.

Relevância:

60.00%

Publicador:

Resumo:

In this thesis the queueing-inventory models considered are analyzed as continuous time Markov chains using tools such as matrix analytic methods. We obtain the steady-state distributions of various queueing-inventory models in product form under the assumption that no customer joins the system when the inventory level is zero. This is despite the strong correlation between the number of customers joining the system and the inventory level during lead time. The resulting quasi-birth-and-death (QBD) processes are solved explicitly by matrix-geometric methods.
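The matrix-geometric structure of such chains can be seen directly on a toy queueing-inventory QBD. In the invented model below (all rates and the single-item replenishment rule are illustrative, not the models of the thesis), arrivals are blocked when stock is empty and each service consumes one item; away from the boundary the stationary level vectors satisfy π_{n+1} = π_n R exactly, which the sketch verifies against a brute-force truncated solution.

```python
import numpy as np

# level n = waiting customers, phase k = items in stock (0..S)
lam, mu, beta, S, K = 1.0, 2.0, 3.0, 2, 80   # K = truncation level

# level-independent generator blocks (valid for levels n >= 1)
A0 = np.diag([0.0, lam, lam])                                # up: arrival needs stock
A2 = np.zeros((S + 1, S + 1)); A2[1, 0] = mu; A2[2, 1] = mu  # down: service uses an item
A1 = np.zeros((S + 1, S + 1)); A1[0, 1] = beta; A1[1, 2] = beta
for k in range(S + 1):                                       # diagonal balances the rows
    A1[k, k] = -(A0[k].sum() + A2[k].sum() + A1[k].sum())

# minimal solution of A0 + R A1 + R^2 A2 = 0 by fixed-point iteration
R = np.zeros((S + 1, S + 1)); A1_inv = np.linalg.inv(A1)
for _ in range(500):
    R = -(A0 + R @ R @ A2) @ A1_inv

# brute-force stationary distribution of the truncated chain for comparison
ns = (K + 1) * (S + 1)
idx = lambda n, k: n * (S + 1) + k
Q = np.zeros((ns, ns))
for n in range(K + 1):
    for k in range(S + 1):
        if k > 0 and n < K: Q[idx(n, k), idx(n + 1, k)] = lam
        if k > 0 and n > 0: Q[idx(n, k), idx(n - 1, k - 1)] = mu
        if k < S:           Q[idx(n, k), idx(n, k + 1)] = beta
np.fill_diagonal(Q, -Q.sum(axis=1))
A = np.vstack([Q.T, np.ones(ns)]); b = np.zeros(ns + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
P = pi.reshape(K + 1, S + 1)      # P[n] = stationary vector of level n
```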

Relevância:

30.00%

Publicador:

Resumo:

This work studies the turbo decoding of Reed-Solomon codes in QAM modulation schemes for additive white Gaussian noise (AWGN) channels by using a geometric approach. Considering the relations between the Galois field elements of the Reed-Solomon code and the symbols, combined with their geometric dispositions in the QAM constellation, a turbo decoding algorithm based on the work of Chase and Pyndiah is developed. Simulation results show that the performance achieved is similar to that obtained with the pragmatic approach with binary decomposition and analysis.

Relevância:

30.00%

Publicador:

Resumo:

Objective. The objective of this study was to determine the expression of matrix metalloproteinase-9 (MMP-9) in apical periodontitis lesions. Study design. Nineteen epithelialized and 18 nonepithelialized apical periodontitis lesions were collected after periapical surgery. After histological processing, serial sectioning, H&E staining, and microscopic analysis, 10 epithelialized and 10 nonepithelialized lesions were selected for immunohistochemical analysis for MMP-9 and CD68. At least one third of each specimen collected was frozen at -70 degrees C for further mRNA isolation and reverse transcription into cDNA for real-time PCR procedures. Geometric averaging of multiple housekeeping genes normalized the MMP-9 mRNA expression level. Results. Polymorphonuclear neutrophils, macrophages, and lymphocytes presented MMP-9-positive immunostaining in both types of lesions. When present, epithelial cells were also stained. The number and the ratio of MMP-9(+)/total cells were greater in nonepithelialized than epithelialized lesions (P = .0001), presenting a positive correlation to CD68(+)/total cells (P = .045). Both types of lesions presented increased MMP-9 expression (P < .0001) when compared to healthy periapical ligaments. However, no significant differences were observed for MMP-9 mRNA expression between epithelialized and nonepithelialized lesions. Conclusion. The present data suggest the participation of several inflammatory cells, mainly CD68(+) cells, in the MMP-9 expression in apical periodontitis lesions. MMP-9 could be actively enrolled in the extracellular matrix degradation in apical periodontitis lesions. (Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2009; 107: 127-132)
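The "geometric averaging of multiple housekeeping genes" mentioned above is the usual geNorm-style normalization: the normalization factor for a sample is the geometric mean of its reference-gene expression values, and the target gene is divided by that factor. A minimal sketch (the expression values below are made up for illustration, not data from the study):

```python
import math

def geomean(values):
    """Geometric mean, computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical relative expression values of three reference (housekeeping)
# genes in one sample -- illustrative numbers only.
ref = [2.0, 4.0, 8.0]
nf = geomean(ref)            # normalization factor: (2*4*8)^(1/3) = 4.0
mmp9_raw = 12.0              # hypothetical raw MMP-9 expression value
mmp9_norm = mmp9_raw / nf    # normalized expression
```

Using the geometric rather than arithmetic mean keeps the factor insensitive to one reference gene being on a very different scale.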

Relevância:

30.00%

Publicador:

Resumo:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the highest integer lower than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-square sense [48, 49]; we note, however, that VCA works with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.

The algorithm iteratively projects data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
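The projection-and-extreme iteration just described can be sketched compactly. The code below is a simplified illustration of the VCA idea only: the SNR-dependent dimensionality reduction and the subspace estimation steps are omitted, and pure pixels are assumed to be present, so on clean synthetic mixtures the extremes recovered are exactly the pure pixels.

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Extract p endmembers from Y (bands x pixels) by repeatedly projecting
    the data onto a direction orthogonal to the endmembers found so far and
    keeping the extreme pixel. Simplified sketch of the VCA idea."""
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    E = np.zeros((bands, p))
    f = rng.standard_normal(bands)                 # initial random direction
    for i in range(p):
        v = f / np.linalg.norm(f)
        E[:, i] = Y[:, np.argmax(np.abs(v @ Y))]   # extreme of the projection
        A = E[:, :i + 1]                           # next direction: orthogonal
        f = (np.eye(bands) - A @ np.linalg.pinv(A)) @ rng.standard_normal(bands)
    return E

# synthetic check: 3 endmembers in 5 bands; pure pixels first, then mixtures
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
W = np.array([[0.6, 0.2, 0.1, 0.3],               # columns: strict convex weights
              [0.3, 0.5, 0.2, 0.4],
              [0.1, 0.3, 0.7, 0.3]])
Y = np.hstack([M, M @ W])                          # pure pixels plus mixtures
E = vca_sketch(Y, 3)
```

Because a linear functional over a simplex attains its extreme at a vertex, each iteration picks a pure pixel not yet selected.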

Relevância:

30.00%

Publicador:

Resumo:

Health assessment and medical surveillance of workers exposed to combustion nanoparticles are challenging. The aim was to evaluate the feasibility of using exhaled breath condensate (EBC) from healthy volunteers for (1) assessing the lung deposited dose of combustion nanoparticles and (2) determining the resulting oxidative stress by measuring hydrogen peroxide (H2O2) and malondialdehyde (MDA). Methods: Fifteen healthy nonsmoker volunteers were exposed to three different levels of sidestream cigarette smoke under controlled conditions. EBC was repeatedly collected before, during, and 1 and 2 hr after exposure. Exposure variables were measured by direct reading instruments and by active sampling. The different EBC samples were analyzed for particle number concentration (light-scattering-based method) and for selected compounds considered oxidative stress markers. Results: Subjects were exposed to an average airborne concentration up to 4.3×10^5 particles/cm^3 (average geometric size ∼60–80 nm). Up to 10×10^8 particles/mL could be measured in the collected EBC with a broad size distribution (50th percentile ∼160 nm), but these biological concentrations were not related to the exposure level of cigarette smoke particles. Although H2O2 and MDA concentrations in EBC increased during exposure, only H2O2 showed a transient normalization 1 hr after exposure and increased afterward. In contrast, MDA levels stayed elevated during the 2 hr post exposure. Conclusions: The use of diffusion light scattering for particle counting proved to be sufficiently sensitive to detect objects in EBC, but lacked the specificity for carbonaceous tobacco smoke particles. Our results suggest two phases of oxidation markers in EBC: first, the initial deposition of particles and gases in the lung lining liquid, and later the start of oxidative stress with associated cell membrane damage.
Future studies should extend the follow-up time and should remove gases or particles from the air to allow differentiation between the different sources of H2O2 and MDA.

Relevância:

30.00%

Publicador:

Resumo:

The future of high technology welded constructions will be characterised by higher strength materials and improved weld quality with respect to fatigue resistance. The expected implementation of high quality high strength steel welds will require that more attention be given to the issues of crack initiation and mechanical mismatching. Experiments and finite element analyses were performed within the framework of continuum damage mechanics to investigate the effect of mismatching of welded joints on void nucleation and coalescence during monotonic loading. It was found that the damage of undermatched joints mainly occurred in the sandwich layer and the damage resistance of the joints decreases with the decrease of the sandwich layer width. The damage of over-matched joints mainly occurred in the base metal adjacent to the sandwich layer and the damage resistance of the joints increases with the decrease of the sandwich layer width. The mechanisms of the initiation of the micro voids/cracks were found to be cracking of the inclusions or the embrittled second phase, and the debonding of the inclusions from the matrix. Experimental fatigue crack growth rate testing showed that the fatigue life of under-matched central crack panel specimens is longer than that of over-matched and even-matched specimens. Further investigation by the elastic-plastic finite element analysis indicated that fatigue crack closure, which originated from the inhomogeneous yielding adjacent to the crack tip, played an important role in the fatigue crack propagation. The applicability of the J integral concept to the mismatched specimens with crack extension under cyclic loading was assessed. The concept of fatigue class used by the International Institute of Welding was introduced in the parametric numerical analysis of several welded joints.
The effect of weld geometry and load condition on fatigue strength of ferrite-pearlite steel joints was systematically evaluated based on linear elastic fracture mechanics. Joint types included lap joints, angle joints and butt joints. Various combinations of the tensile and bending loads were considered during the evaluation, with the emphasis focused on the existence of both root and toe cracks. For a lap joint with a small lack-of-penetration, a reasonably large weld leg and smaller flank angle were recommended for engineering practice in order to achieve higher fatigue strength. It was found that the fatigue strength of the angle joint depended strongly on the location and orientation of the preexisting crack-like welding defects, even if the joint was welded with full penetration. It is commonly believed that double-sided butt welds can have significantly higher fatigue strength than single-sided welds, but fatigue crack initiation and propagation can originate from the weld root if the welding procedure results in a partial penetration. It is clearly shown that the fatigue strength of the butt joint could be improved remarkably by ensuring full penetration. Nevertheless, increasing the fatigue strength of a butt joint by increasing the size of the weld is an uneconomical alternative.

Relevância:

30.00%

Publicador:

Resumo:

A cryptosystem using linear codes was developed in 1978 by McEliece. Later, in 1985, Niederreiter and others developed modified versions of cryptosystems using concepts of linear codes. However, these systems were rarely used in practice because of their large key sizes. In this study we design a cryptosystem using the concepts of algebraic geometric codes with a smaller key size. Error detection and correction can be done efficiently by simple decoding methods using the cryptosystem developed. Approach: Algebraic geometric codes are codes generated using curves. The cryptosystem uses basic concepts of elliptic curve cryptography and a generator matrix. Decrypted information takes the form of a repetition code, and due to this the complexity of the decoding procedure is reduced. Error detection and correction can be carried out efficiently by solving a simple system of linear equations, thereby imposing security along with error detection and correction. Results: Implementation of the algorithm is done in MATLAB and a comparative analysis is also done on various parameters of the system. Attacks are common to all cryptosystems, but by securely choosing the curve, the field and the representation of elements in the field, we can overcome the attacks and a stable system can be generated. Conclusion: The algorithm defined here protects the information from an intruder and also from errors in the communication channel by efficient error correction methods.
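The McEliece construction the study builds on can be illustrated with a toy code. Below, the tiny Hamming(7,4) code (corrects a single error) stands in for the algebraic-geometric codes of the study purely for illustration; it is of course far too small to be secure, and the scrambler S, the permutation and all values are arbitrary choices. The public key is G' = S·G·P; encryption adds an intentional error, and the private holder undoes P, syndrome-decodes, and undoes S.

```python
import numpy as np

A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])      # systematic generator, 4x7
H = np.hstack([A.T, np.eye(3, dtype=int)])    # parity-check matrix, 3x7

S = np.array([[1, 1, 0, 0], [0, 1, 0, 0],     # invertible scrambler; this one
              [0, 0, 1, 1], [0, 0, 0, 1]])    # happens to be self-inverse mod 2
perm = np.array([3, 0, 6, 1, 4, 2, 5])        # secret column permutation P
G_pub = (S @ G % 2)[:, perm]                  # public key G' = S G P

def encrypt(m, err_pos):
    """c = m G' + e, with a single intentional error at err_pos."""
    e = np.zeros(7, dtype=int); e[err_pos] = 1
    return (m @ G_pub + e) % 2

def decrypt(c):
    y = np.zeros(7, dtype=int)
    y[perm] = c                               # undo the permutation
    s = (H @ y) % 2                           # syndrome of the noisy codeword
    if s.any():                               # the syndrome equals the H column
        y[int(np.argmax((H.T == s).all(axis=1)))] ^= 1   # at the error position
    return (y[:4] @ S) % 2                    # unscramble (S is its own inverse)
```

Since G is systematic, the first four decoded bits equal m·S, and multiplying by S⁻¹ (here S itself) recovers the message.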

Relevância:

30.00%

Publicador:

Resumo:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevância:

30.00%

Publicador:

Resumo:

Continuation methods have been shown to be efficient tools for solving ill-conditioned cases, with close-to-singular Jacobian matrices, such as the maximum loading point of power systems. Some parameterization techniques have been proposed to avoid matrix singularity and successfully solve those cases. This paper presents a new geometric parameterization scheme that allows the complete tracing of the P-V curves without ill-conditioning problems. The proposed technique combines robustness with simplicity and is easy to understand. The Jacobian matrix singularity is avoided by the addition of a line equation, which passes through a point in the plane determined by the total real power losses and the loading factor. These two parameters have clear physical meaning. The application of this new technique to the IEEE systems (14, 30, 57, 118 and 300 buses) shows that the best characteristics of the conventional Newton's method are not only preserved but also improved. (C) 2006 Elsevier B.V. All rights reserved.
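The idea of restoring Jacobian regularity with an added line equation can be shown on a scalar analogue. Below, the power-flow equations are replaced by the fold F(x, λ) = x − x² − λ, whose "P-V curve" λ = x − x² has a nose at (0.5, 0.25) where Newton's method with λ held fixed fails (∂F/∂x → 0); the line is taken through the origin of the (x, λ) plane rather than the (losses, loading factor) plane of the paper — an assumed simplification, not the paper's formulation.

```python
import numpy as np

def F(x, lam):
    return x - x * x - lam        # scalar stand-in for the power-flow mismatch

def trace(alphas, x0=1.0, lam0=0.0, tol=1e-12):
    """Trace the fold curve by sweeping the slope of the line lam = alpha * x,
    solving the augmented 2x2 system [F; lam - alpha*x] = 0 by Newton."""
    pts, x, lam = [], x0, lam0
    for a in alphas:
        for _ in range(100):
            r = np.array([F(x, lam), lam - a * x])
            if np.max(np.abs(r)) < tol:
                break
            J = np.array([[1.0 - 2.0 * x, -1.0],    # augmented Jacobian stays
                          [-a,             1.0]])   # nonsingular at the nose
            dx, dlam = np.linalg.solve(J, -r)
            x, lam = x + dx, lam + dlam
        pts.append((x, lam))
    return pts

pts = trace(np.linspace(0.05, 0.95, 19))   # sweeps right through the nose
xs = [p[0] for p in pts]; lams = [p[1] for p in pts]
```

At the nose (α = 0.5) the augmented Jacobian determinant is −0.5, so Newton converges there just as on the rest of the curve, and both the upper and lower branches are traced.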

Relevância:

30.00%

Publicador:

Resumo:

Tensor3D is a geometric modeling program with the capacity to simulate and visualize in real time the deformation, specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shears. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make this data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities. (C) 2007 Elsevier Ltd. All rights reserved.
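The core operation described — applying a deformation tensor to the vertices of a triangulated model, plus deriving the strain ellipsoid — reduces to a few lines of linear algebra. The sketch below uses a unit cube's corners and an illustrative simple-shear tensor, not Tensor3D's own code or data formats.

```python
import numpy as np

gamma = 0.5
F = np.array([[1.0, gamma, 0.0],   # simple shear in the x-y plane;
              [0.0, 1.0,   0.0],   # det F = 1, so volume is preserved
              [0.0, 0.0,   1.0]])

# vertices of a unit cube standing in for a triangulated model
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
deformed = corners @ F.T           # apply the deformation tensor to every vertex

# strain ellipsoid: semi-axes are the square roots of the eigenvalues of F F^T,
# and the principal axes of strain are the corresponding eigenvectors
evals, axes = np.linalg.eigh(F @ F.T)
semi_axes = np.sqrt(evals)
```

The same vertex transform applied to a unit sphere's points yields the strain ellipsoid the program draws alongside the model.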

Relevância:

30.00%

Publicador:

Resumo:

In this article we introduce a three-parameter extension of the bivariate exponential-geometric (BEG) law (Kozubowski and Panorska, 2005) [4]. We refer to this new distribution as the bivariate gamma-geometric (BGG) law. A bivariate random vector (X, N) follows the BGG law if N has a geometric distribution and X may be represented (in law) as a sum of N independent and identically distributed gamma variables, where these variables are independent of N. Statistical properties such as moment generating and characteristic functions, moments and a variance-covariance matrix are provided. The marginal and conditional laws are also studied. We show that the BGG distribution is infinitely divisible, just as the BEG model is. Further, we provide alternative representations for the BGG distribution and show that it enjoys a geometric stability property. Maximum likelihood estimation and inference are discussed and a reparametrization is proposed in order to obtain orthogonality of the parameters. We present an application to a real data set where our model provides a better fit than the BEG model. Our bivariate distribution induces a bivariate Lévy process with correlated gamma and negative binomial processes, which extends the bivariate Lévy motion proposed by Kozubowski et al. (2008) [6]. The marginals of our Lévy motion are a mixture of gamma and negative binomial processes and we name it the BMixGNB motion. Basic properties such as stochastic self-similarity and the covariance matrix of the process are presented. The bivariate distribution at fixed time of our BMixGNB process is also studied and some results are derived, including a discussion about maximum likelihood estimation and inference. (C) 2012 Elsevier Inc. All rights reserved.
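The defining construction — N geometric on {1, 2, …} and, given N = n, X a sum of n iid Gamma(a, s) variables, i.e. X | N = n ~ Gamma(n·a, s) — is easy to simulate, and the first moments E[N] = 1/p and E[X] = a·s/p provide a quick check. The parameter values below are illustrative, not estimates from the article's data.

```python
import numpy as np

rng = np.random.default_rng(42)
p, a, s, size = 0.4, 2.0, 1.5, 200_000

N = rng.geometric(p, size=size)      # support {1, 2, ...}, matching the model
X = rng.gamma(shape=a * N, scale=s)  # sum of N iid gammas collapses to one gamma

mean_N, mean_X = N.mean(), X.mean()  # theory: E[N] = 1/p = 2.5, E[X] = a*s/p = 7.5
```

Collapsing the sum of N gammas into a single Gamma(N·a, s) draw uses the closure of the gamma family under convolution with a common scale.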