937 results for Optimal matching analysis.


Relevance: 40.00%

Abstract:

This paper presents a new relative measure of signal complexity, referred to here as relative structural complexity, which is based on the matching pursuit (MP) decomposition. "Relative" refers to the fact that the new measure is highly dependent on the decomposition dictionary used by MP; "structural" indicates that the measure is related to the structure, or composition, of the signal under analysis. After a formal definition, the proposed relative structural complexity measure is used in the analysis of newborn EEG. To do this, a time-frequency (TF) decomposition dictionary is first designed specifically to compactly represent the newborn EEG seizure state using MP. We then show, through the analysis of synthetic and real newborn EEG data, that the relative structural complexity measure can indicate changes in EEG structure as the signal transitions between the two EEG states, namely seizure and background (non-seizure).
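To make the MP-based idea concrete, here is a minimal sketch of greedy matching pursuit over a generic dictionary, together with one plausible complexity proxy (the number of atoms needed to capture a fixed fraction of signal energy). The paper's exact definition and its purpose-built newborn-EEG TF dictionary are not reproduced here; the dictionary and function names below are illustrative only.

```python
import numpy as np

def matching_pursuit_complexity(x, D, max_atoms=100, energy_frac=0.95):
    """Greedy MP decomposition of x over dictionary D (unit-norm columns).

    Returns the number of atoms needed to capture `energy_frac` of the
    signal energy -- one plausible proxy for structural complexity
    relative to D (the paper's exact definition may differ).
    """
    residual = x.astype(float).copy()
    target = (1.0 - energy_frac) * np.dot(x, x)
    for k in range(1, max_atoms + 1):
        corr = D.T @ residual                 # correlation with every atom
        best = np.argmax(np.abs(corr))        # best-matching atom
        residual -= corr[best] * D[:, best]   # subtract its contribution
        if np.dot(residual, residual) <= target:
            return k
    return max_atoms

# Toy usage: a random overcomplete dictionary of unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((256, 1024))
D /= np.linalg.norm(D, axis=0)
x = D[:, 7] + 0.5 * D[:, 42]                  # a "structured" signal
print(matching_pursuit_complexity(x, D))      # few atoms -> low complexity
```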

Relevance: 40.00%

Abstract:

The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located in that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
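As background for the constrained least-squares route mentioned above, here is a minimal sketch of unmixing a single pixel under the linear mixing model, assuming the endmember signatures are known. The fully constrained solution (nonnegative abundances summing to one) uses the standard trick of appending a weighted sum-to-one row to a nonnegative least-squares system; the function name and toy data are ours, not the chapter's.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel, M, delta=1e3):
    """Fully constrained least squares: solve pixel ~ M @ a with a >= 0
    and sum(a) = 1, by appending a heavily weighted sum-to-one row to
    the nonnegative least-squares system."""
    L, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])
    x_aug = np.append(pixel, delta)
    a, _ = nnls(M_aug, x_aug)
    return a

# Toy usage: 3 endmembers over 50 bands, one mixed pixel plus noise.
rng = np.random.default_rng(1)
M = np.abs(rng.standard_normal((50, 3)))      # endmember signatures
a_true = np.array([0.6, 0.3, 0.1])            # true abundance fractions
pixel = M @ a_true + 0.01 * rng.standard_normal(50)
print(fcls_unmix(pixel, M))                   # approximately [0.6, 0.3, 0.1]
```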
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers, and several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular; a newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data, with the MOG parameters (number of components, means, covariances, and weights) inferred using a minimum description length (MDL) based algorithm [55].
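To make the dimensionality-reduction step concrete, below is a minimal SVD-based sketch in the spirit of the PCA/MNF/SVD preprocessing just described; the toy data and function name are illustrative only.

```python
import numpy as np

def reduce_dim(X, k):
    """Project spectral vectors (rows of X, one per pixel) onto the
    subspace spanned by the top-k right singular vectors -- the SVD
    flavour of the PCA/MNF/SVD preprocessing step described above."""
    Xc = X - X.mean(axis=0)            # remove the mean spectrum
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T               # k-dimensional representation

# Toy usage: 1000 pixels, 200 bands, intrinsic dimension of 3 endmembers.
rng = np.random.default_rng(2)
A = rng.dirichlet(np.ones(3), size=1000)      # abundance fractions
M = np.abs(rng.standard_normal((200, 3)))     # endmember signatures
X = A @ M.T + 0.01 * rng.standard_normal((1000, 200))
print(reduce_dim(X, 3).shape)                 # (1000, 3)
```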
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one; nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.

Relevance: 40.00%

Abstract:

OBJECTIVE: To compare the outcome of balloon PTCA with a final coronary stenosis diameter (SD) <=30% against elective coronary stenting. METHODS: We performed a comparative analysis of the 6-month outcomes of patients in the STENT PAMI randomized trial who were treated within the first 12 hours of AMI onset, comparing primary stenting with an optimal balloon PTCA result. RESULTS: The results were analysed in 3 groups: primary stenting (441 patients, SD = 22±6%), optimal PTCA (245 patients, SD <= 30%), and nonoptimal PTCA (182 patients, SD = 37±5%). At 6 months, the primary stent group showed the lowest restenosis rate (23 vs. 31 vs. 45%, p = 0.001, respectively) and the lowest ischemia-driven target vessel revascularization (TVR) rate (7 vs. 15.5 vs. 19%, p = 0.001, respectively). CONCLUSION: At the 6-month follow-up, primary stenting offered the lowest restenosis and ischemia-driven TVR rates, compared with optimal balloon PTCA. Patients with a nonoptimal primary balloon PTCA result (SD = 31-50%) had the worst late angiographic outcomes and should be treated more actively with coronary stent implantation.

Relevance: 40.00%

Abstract:

Magdeburg, University, Faculty of Mathematics, dissertation, 2010

Relevance: 40.00%

Abstract:

Scheduling, job shop, uncertainty, mixed (disjunctive) graph, stability analysis

Relevance: 40.00%

Abstract:

Neurally adjusted ventilatory assist (NAVA) is a ventilation assist mode that delivers pressure in proportion to the electrical activity of the diaphragm (Eadi). Compared to pressure support ventilation (PS), it improves patient-ventilator synchrony and should allow better expression of the patient's intrinsic respiratory variability. We hypothesized that NAVA provides better matching of ventilator tidal volume (Vt) to the patient's inspiratory demand. Twenty-two patients with acute respiratory failure ventilated with PS were included in the study. A comparative study was carried out between PS and NAVA, with the NAVA gain set to ensure the same peak airway pressure as PS. Robust coefficients of variation (CVR) for Eadi and Vt were compared for each mode. The integral of Eadi (∫Eadi) was used to represent the patient's inspiratory demand. To evaluate the matching between tidal volume and patient demand, Range90, the 5-95% range of the Vt/∫Eadi ratio, was calculated to normalize and compare differences in demand within and between patients and modes. In this study, peak Eadi and ∫Eadi were correlated, with median correlation coefficients R > 0.95. Median ∫Eadi, Vt, neural inspiratory time (Ti_neural), inspiratory time (Ti), and peak inspiratory pressure (PIP) were similar in PS and NAVA, although individual patients had higher or lower values of each. CVR analysis showed greater Vt variability for NAVA (p < 0.005). Range90 was lower for NAVA than for PS in 21 of 22 patients. NAVA thus provided better matching of Vt to ∫Eadi in 21 of 22 patients, as well as greater Vt variability. These results were achieved regardless of differences in ventilatory demand (Eadi) between patients and modes.
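A minimal sketch of the two variability statistics named above, on simulated breath-by-breath data. The paper's exact CVR definition is not given here, so a common robust variant (interquartile range over median) is assumed, and all data below are synthetic.

```python
import numpy as np

def robust_cv(x):
    """A common robust coefficient of variation: interquartile range
    over median (the paper's exact CVR definition may differ)."""
    q25, q50, q75 = np.percentile(x, [25, 50, 75])
    return (q75 - q25) / q50

def range90(vt, eadi_integral):
    """5-95% range of the breath-by-breath Vt / integral-of-Eadi ratio."""
    ratio = np.asarray(vt) / np.asarray(eadi_integral)
    p5, p95 = np.percentile(ratio, [5, 95])
    return p95 - p5

# Toy usage with simulated breath-by-breath data.
rng = np.random.default_rng(3)
vt = rng.normal(450, 60, 200)        # tidal volumes, mL
eadi = rng.normal(12, 2, 200)        # integral of Eadi per breath, a.u.
print(robust_cv(vt), range90(vt, eadi))
```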

Relevance: 40.00%

Abstract:

Accurate diagnosis of orthopedic device-associated infections can be challenging. Culture of tissue biopsy specimens is often considered the gold standard; however, there is currently no consensus on the ideal incubation time for specimens. The aim of our study was to assess the yield of a 14-day incubation protocol for tissue biopsy specimens from revision surgery (joint replacements and internal fixation devices) in a general orthopedic and trauma surgery setting. Medical records were reviewed retrospectively to identify cases of infection according to predefined diagnostic criteria. From August 2009 to March 2012, 499 tissue biopsy specimens were sampled from 117 cases. In 70 cases (59.8%), at least one sample showed microbiological growth; among them, 58 cases (82.9%) were considered infections and 12 (17.1%) were classified as contaminations. The median time to positivity was 1 day (range, 1 to 10 days) in the cases of infection, compared to 6 days (range, 1 to 11 days) in the cases of contamination (P < 0.001). Fifty-six (96.6%) of the infection cases were diagnosed within 7 days of incubation. In conclusion, the results of our study show that incubation of tissue biopsy specimens beyond 7 days is not productive in a general orthopedic and trauma surgery setting. However, prolonged 14-day incubation might be of interest in particular situations in which the prevalence of slow-growing microorganisms and anaerobes is higher.

Relevance: 40.00%

Abstract:

The stop-loss contract is one of the most important reinsurance contracts in the insurance market. From the insurer's point of view, it presents an interesting property: it is optimal under the criterion of minimizing the variance of the insurer's cost. The aim of this paper is to contribute to the analysis of the one-period stop-loss contract from the points of view of both the insurer and the reinsurer. Firstly, the influence of the parameters of the reinsurance contract on the correlation coefficient between the cost of the insurer and the cost of the reinsurer is studied. Secondly, the optimal stop-loss contract is obtained when the criterion used is the maximization of the joint survival probability of the insurer and the reinsurer over one period.
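For orientation, a standard formalization of the one-period stop-loss contract (our notation; the paper's own parametrization may differ):

```latex
% One-period stop-loss with retention (priority) M and aggregate claims X:
% the insurer retains the capped loss, the reinsurer pays the excess.
\[
  C_{\text{ins}} = \min(X, M), \qquad
  C_{\text{re}}  = (X - M)_+ = \max(X - M, 0), \qquad
  C_{\text{ins}} + C_{\text{re}} = X .
\]
% Classical result alluded to in the abstract: among all treaties with the
% same expected reinsurer payment E[C_re], the stop-loss form minimizes
% the variance of the insurer's retained cost Var(C_ins).
```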

Relevance: 40.00%

Abstract:

BACKGROUND: Recently, it has been suggested that the type of stent used in primary percutaneous coronary interventions (pPCI) might affect the outcomes of patients with acute myocardial infarction (AMI). Indeed, drug-eluting stents (DES) reduce neointimal hyperplasia compared to bare-metal stents (BMS). Moreover, later-generation DES, owing to their biocompatible polymer coatings and stent design, allow for greater deliverability and improved endothelial healing, and therefore less restenosis and thrombus generation. However, data on the safety and performance of DES in large AMI cohorts are still limited. AIM: To compare the early outcomes of DES vs. BMS in AMI patients. METHODS: This was a prospective, multicentre analysis of patients from 64 hospitals in Switzerland with AMI undergoing pPCI between 2005 and 2013. The primary endpoint was in-hospital all-cause death; the secondary endpoint was a composite of major adverse cardiac and cerebrovascular events (MACCE): death, reinfarction, and cerebrovascular event. RESULTS: Of 20,464 patients with a primary diagnosis of AMI enrolled in the AMIS Plus registry, 15,026 were referred for pPCI and 13,442 received stent implantation; 10,094 patients were implanted with DES and 2,260 with BMS. Overall in-hospital mortality was significantly lower in patients with DES than in those with BMS implantation (2.6% vs. 7.1%, p < 0.001). Overall in-hospital MACCE after DES was similarly lower compared to BMS (3.5% vs. 7.6%, p < 0.001). After adjusting for all confounding covariables, DES remained an independent predictor of lower in-hospital mortality (OR 0.51, 95% CI 0.40-0.67, p < 0.001). Because the groups differed with respect to baseline characteristics and pharmacological treatment, we performed propensity score matching (PSM) to limit potential biases. Even after PSM, DES implantation remained independently associated with a reduced risk of in-hospital mortality (adjusted OR 0.54, 95% CI 0.39-0.76, p < 0.001). CONCLUSIONS: In unselected patients from a nationwide, real-world cohort, we found that DES, compared to BMS, was associated with lower in-hospital mortality and MACCE. The identification of optimal treatment strategies for patients with AMI needs further randomised evaluation; however, our findings suggest a potential benefit with DES.
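To make the propensity-score-matching step concrete, here is a minimal sketch of 1:1 nearest-neighbour matching on an estimated propensity score, using simulated covariates. The registry's actual covariates, caliper choice, and matching algorithm are not specified here, so everything below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated):
    """1:1 nearest-neighbour propensity score matching (without
    replacement). X holds baseline covariates; `treated` marks
    DES (1) vs. BMS (0) patients in this toy setup."""
    model = LogisticRegression(max_iter=1000).fit(X, treated)
    ps = model.predict_proba(X)[:, 1]          # propensity scores
    t_idx = np.flatnonzero(treated == 1)
    c_idx = list(np.flatnonzero(treated == 0))
    pairs = []
    for i in t_idx:
        if not c_idx:
            break
        j = min(c_idx, key=lambda j: abs(ps[i] - ps[j]))  # closest control
        pairs.append((i, j))
        c_idx.remove(j)                        # match without replacement
    return pairs, ps

# Toy usage: 200 patients, 5 baseline covariates, ~70% treated.
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 5))
treated = (rng.random(200) < 0.7).astype(int)
pairs, _ = propensity_match(X, treated)
print(len(pairs), "matched pairs")
```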

Relevance: 40.00%

Abstract:

Background: Polyphenols may lower the risk of cardiovascular disease (CVD) and other chronic diseases due to their antioxidant and anti-inflammatory properties, as well as their beneficial effects on blood pressure, lipids, and insulin resistance. However, no previous epidemiological study has evaluated the relationship between the intake of total polyphenols and polyphenol subclasses and overall mortality. Our aim was to evaluate whether polyphenol intake is associated with all-cause mortality in subjects at high cardiovascular risk. Methods: We used data from the PREDIMED study, a 7,447-participant, parallel-group, randomized, multicenter, controlled five-year feeding trial aimed at assessing the effects of the Mediterranean Diet in the primary prevention of cardiovascular disease. Polyphenol intake was calculated by matching food consumption data from repeated food frequency questionnaires (FFQ) with the Phenol-Explorer database on the polyphenol content of each reported food. Hazard ratios (HR) and 95% confidence intervals (CI) relating polyphenol intake to mortality were estimated using time-dependent Cox proportional hazards models. Results: Over an average of 4.8 years of follow-up, we observed 327 deaths. After multivariate adjustment, we found a 37% relative reduction in all-cause mortality comparing the highest versus the lowest quintile of total polyphenol intake (HR = 0.63; 95% CI 0.41 to 0.97; P for trend = 0.12). Among the polyphenol subclasses, stilbenes and lignans were significantly associated with reduced all-cause mortality (HR = 0.48; 95% CI 0.25 to 0.91; P for trend = 0.04 and HR = 0.60; 95% CI 0.37 to 0.97; P for trend = 0.03, respectively), with no significant associations apparent for the rest (flavonoids or phenolic acids). Conclusions: Among high-risk subjects, those who reported a high polyphenol intake, especially of stilbenes and lignans, showed a reduced risk of overall mortality compared to those with lower intakes. These results may be useful for determining the optimal polyphenol intake, or the specific food sources of polyphenols, that may reduce the risk of all-cause mortality.
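As a rough illustration of the modelling step, here is a minimal Cox proportional hazards fit on simulated data using the lifelines library. The paper uses time-dependent covariates and extensive adjustment, which this sketch omits; all column names and the toy data are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy survival data: follow-up time in years, a death indicator, and a
# polyphenol-intake quintile coded 0-4 (all column names hypothetical).
rng = np.random.default_rng(5)
n = 500
quintile = rng.integers(0, 5, n)
years = rng.exponential(5 * (1 + 0.1 * quintile), n).clip(max=7)
death = (rng.random(n) < 0.1).astype(int)
df = pd.DataFrame({"years": years, "death": death, "quintile": quintile})

# Fit a plain (not time-dependent) Cox model; exp(coef) for `quintile`
# is the hazard ratio per one-quintile increase in intake.
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="death")
cph.print_summary()
```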

Relevance: 40.00%

Abstract:

Using the case of an economically declining neighbourhood in the post-industrial German Ruhr Area (sometimes characterized as Germany's "Rust Belt"), we analyse, describe, and draw conclusions about how urban agriculture can be used as a catalyst to stimulate and support urban renewal and regeneration, especially from a socio-cultural perspective. Using the methodological framework of participatory action research, and linking bottom-up and top-down planning approaches, a project path was developed to include the affected population and foster individual responsibility for their district, as well as to strengthen inhabitants and stakeholder groups in a permanent collective stewardship of the individual forms of urban agriculture developed and implemented. On a more abstract level, the research carried out can be characterized as a form of action research with an intended transgression of the boundaries between research, planning, design, and implementation. We conclude that by synchronously combining those four domains with intense feedback loops, it is possible to achieve synergies for academic knowledge on the potential performance of urban agriculture in sustainable development, as well as benefits for the case-study area and the interests of individual urban gardeners.

Relevance: 40.00%

Abstract:

The frequency responses of two 50 Hz induction machines and one 400 Hz induction machine have been measured experimentally over a frequency range of 1 kHz to 400 kHz. This study has shown that the stator impedances of the machines behave in a similar manner to a parallel resonant circuit and hence have a resonant point at which the input impedance of the machine is at a maximum. This maximum-impedance point was found experimentally to be as low as 33 kHz, which is well within the switching frequency range of modern inverter drives. This paper investigates the possibility of exploiting the maximum-impedance point of the machine by taking it into consideration when designing an inverter, in order to minimize ripple currents due to the switching frequency. Minimizing the ripple currents would reduce torque pulsation and losses, increasing overall performance. A modified machine model was developed to take the resonant point into account, and this model was then simulated with an inverter to demonstrate the possible advantages of matching the inverter switching frequency to the resonant point. Finally, in order to verify the simulated results experimentally, a real inverter with a variable switching frequency was used to drive an induction machine. Experimental results are presented.
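A minimal numerical sketch of the idea: model the stator's high-frequency behaviour as a parallel RLC network, sweep the measured band, and locate the maximum-impedance (resonant) point that an inverter designer would target. The component values are illustrative, not measured machine parameters.

```python
import numpy as np

def parallel_rlc_impedance(f, R=2e3, L=10e-3, C=2e-9):
    """Impedance magnitude of a parallel RLC network, a stand-in for
    the stator's high-frequency behaviour (values illustrative)."""
    w = 2 * np.pi * f
    Y = 1 / R + 1 / (1j * w * L) + 1j * w * C   # admittances add in parallel
    return np.abs(1 / Y)

f = np.logspace(3, np.log10(400e3), 2000)       # 1 kHz .. 400 kHz sweep
Z = parallel_rlc_impedance(f)
f_res = f[np.argmax(Z)]                         # maximum-impedance point
print(f"resonance ~ {f_res/1e3:.1f} kHz")       # ~ 1/(2*pi*sqrt(L*C)), ~36 kHz here
```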

Relevance: 40.00%

Abstract:

J.A. Ferreira Neto, E.C. Santos Junior, U. Fra Paleo, D. Miranda Barros, and M.C.O. Moreira. 2011. Optimal subdivision of land in agrarian reform projects: an analysis using genetic algorithms. Cien. Inv. Agr. 38(2): 169-178. The objective of this manuscript is to develop a new procedure for achieving optimal land subdivision using genetic algorithms (GA). The genetic algorithm was tested in the rural settlement of Veredas, located in Minas Gerais, Brazil. The implementation was based on land aptitude and its productivity index. The sequence of tests was carried out in two areas with eight different agricultural aptitude classes: one area of 391.88 ha subdivided into 12 lots and another of 404.1763 ha subdivided into 14 lots. The effectiveness of the method was measured using the standard deviation of the productivity indices of the lots in a parceled area. To evaluate each parameter, a sequence of 15 runs was performed, recording the average fitness of the best individual (MMI) found for each parameter variation. The best parameter combination found in testing, and used to generate the new parceling with the GA, was the following: 320 generations, a population of 40 individuals, a mutation rate of 0.8, and a renewal rate of 0.3. The solution generated rather homogeneous lots in terms of productive capacity.
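For illustration, a minimal GA in the spirit of the procedure described, using the reported parameter values (320 generations, population 40, mutation rate 0.8, renewal rate 0.3). The representation (cells assigned to lots), operators, and fitness below are our simplifications, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy problem: assign 120 land cells (each with a productivity score) to
# 12 lots so that lot productivities are as homogeneous as possible.
N_CELLS, N_LOTS = 120, 12
productivity = rng.uniform(0.2, 1.0, N_CELLS)   # per-cell productivity index

def fitness(ind):
    lot_totals = np.bincount(ind, weights=productivity, minlength=N_LOTS)
    return -np.std(lot_totals)                  # homogeneous lots -> high fitness

def mutate(ind, rate=0.8):
    child = ind.copy()
    if rng.random() < rate:                     # per-individual mutation rate
        cell = rng.integers(N_CELLS)
        child[cell] = rng.integers(N_LOTS)      # move one cell to another lot
    return child

pop = [rng.integers(N_LOTS, size=N_CELLS) for _ in range(40)]
for gen in range(320):
    pop.sort(key=fitness, reverse=True)
    n_new = int(0.3 * len(pop))                 # renewal: replace the worst 30%
    pop[-n_new:] = [mutate(pop[rng.integers(10)]) for _ in range(n_new)]

best = max(pop, key=fitness)
print("std of lot productivity:", -fitness(best))
```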