984 results for non-orthogonal sparsifying transform


Relevance:

30.00%

Publisher:

Abstract:

In this work, an algorithm to compute the envelope of non-destructive testing (NDT) signals is proposed. The method increases the processing speed and reduces the memory required in extensive data processing. It also has the advantage of preserving the data information for physical modeling applications of time-dependent measurements. The algorithm is conceived to be applied to the analysis of data from non-destructive testing. A comparison between different envelope methods and the proposed method, applied to magnetic Barkhausen noise (MBN) signals, is presented. (C) 2010 Elsevier Ltd. All rights reserved.
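The abstract does not detail the proposed envelope algorithm; as a hedged point of reference for the "different envelope methods" it is compared against, a common baseline computes the envelope as the magnitude of the analytic signal obtained with the Hilbert transform (the signal below is a synthetic stand-in, not MBN data):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_envelope(x):
    """Baseline envelope estimate: magnitude of the analytic signal."""
    return np.abs(hilbert(x))

# Synthetic burst standing in for a time-dependent NDT measurement.
t = np.linspace(0.0, 1.0, 2000)
signal = np.exp(-((t - 0.5) ** 2) / 0.01) * np.sin(2 * np.pi * 150.0 * t)
envelope = hilbert_envelope(signal)
```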

Relevance:

30.00%

Publisher:

Abstract:

An approximate analytical technique employing a finite integral transform is developed to solve the reaction diffusion problem with Michaelis-Menten kinetics in a solid of general shape. A simple infinite series solution for the substrate concentration is obtained as a function of the Thiele modulus, modified Sherwood number, and Michaelis constant. An iteration scheme is developed to bring the approximate solution closer to the exact solution. Comparison with the known exact solutions for slab geometry (quadrature) and numerically exact solutions for spherical geometry (orthogonal collocation) shows excellent agreement for all values of the Thiele modulus and Michaelis constant.
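For orientation only (the notation here is assumed, not taken from the paper), the dimensionless steady-state problem being solved can be written, for the slab case, as a reaction-diffusion balance with Michaelis-Menten kinetics, a symmetry condition at the centre, and an external mass-transfer condition governed by the modified Sherwood number:

```latex
% u: dimensionless substrate concentration, \xi: dimensionless position,
% \phi: Thiele modulus, \beta: dimensionless Michaelis constant,
% \mathrm{Sh}: modified Sherwood number.
\frac{d^{2}u}{d\xi^{2}} = \phi^{2}\,\frac{u}{1+\beta u},
\qquad
\left.\frac{du}{d\xi}\right|_{\xi=0} = 0,
\qquad
\left.\frac{du}{d\xi}\right|_{\xi=1} = \mathrm{Sh}\,\bigl(1 - u(1)\bigr)
```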

Relevance:

30.00%

Publisher:

Abstract:

We derive analytical solutions for the three-dimensional time-dependent buckling of a non-Newtonian viscous plate in a less viscous medium. For the plate we assume a power-law rheology. The principal axes of the stretching D_ij in the homogeneously deformed ground state are parallel and orthogonal to the bounding surfaces of the plate in the flat state. In the model formulation the action of the less viscous medium is replaced by equivalent reaction forces. The reaction forces are assumed to be parallel to the normal vector of the deformed plate surfaces. As a consequence, the buckling process is driven by the differences between the in-plane stresses and the out-of-plane stress, and not by the in-plane stresses alone as assumed in previous models. The governing differential equation is essentially an orthotropic plate equation for a rate-dependent material, under biaxial pre-stress, supported by a viscous medium. The differential problem is solved by means of Fourier transformation, and the largest growth coefficients and corresponding wavenumbers are evaluated. We discuss in detail fold evolutions for isotropic in-plane stretching (D_11 = D_22), uniaxial plane straining (D_22 = 0), and in-plane flattening (D_11 = -2 D_22). Three-dimensional plots illustrate the stages of fold evolution for random initial perturbations or initial embryonic folds with axes non-parallel to the maximum compression axis. For all situations, one dominant set of folds develops normal to D_11, although the dominant wavelength differs from the Biot dominant wavelength except when the plate has a purely Newtonian viscosity. However, in the direction parallel to D_22, there exist infinitely many modes in the vicinity of the dominant wavelength which grow only marginally slower than the one corresponding to the dominant wavelength. This means that, except for very special initial conditions, the appearance of a three-dimensional fold will always be governed by at least two wavelengths. The wavelength in the direction parallel to D_11 is the dominant wavelength, and the wavelength(s) in the direction parallel to D_22 are determined essentially by the statistics of the initial state. A comparable sensitivity to the initial geometry does not exist in the classic two-dimensional folding models. In conformity with tradition we have applied Kirchhoff's hypothesis to constrain the cross-sectional rotations of the plate. We investigate the validity of this hypothesis within the framework of Reissner's plate theory. We also include a discussion of the effects of adding elasticity to the constitutive relations and show that there exist critical ratios of the relaxation times of the plate and the embedding medium for which two dominant wavelengths develop, one at ca. 2.5 times the classical Biot dominant wavelength and the other at ca. 0.45 times this wavelength. We propose that herein lies the origin of the parasitic folds well known in natural examples.
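For reference, the classical Biot dominant wavelength against which the results above are compared (a Newtonian layer of thickness h and viscosity \mu_l embedded in a less viscous Newtonian medium of viscosity \mu_m; symbols here are a standard choice, not necessarily the paper's notation) is:

```latex
\lambda_{d} \;=\; 2\pi h \left( \frac{\mu_{l}}{6\,\mu_{m}} \right)^{1/3}
```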

Relevance:

30.00%

Publisher:

Abstract:

Objective The purpose of this study was to evaluate the retention force of an O-ring attachment system at different inclinations to the ideal path of insertion, using devices to compensate for the angulations. Material and methods Two implants were inserted into an aluminum base, and ball attachments were screwed onto the implants. Cylinders with O-rings were placed on the ball attachments and connected to the test device using positioners to compensate for implant angulations (0 degrees, 7 degrees, and 14 degrees). Plexiglass bases were used to simulate the implant angulations. The base and the test device were positioned in a testing apparatus, which simulated insertion/removal of an overdenture. A total of 2900 cycles, simulating 2 years of overdenture use, were performed, and 36 O-rings were tested. The force required for each cycle was recorded with computer software. Longitudinal sections of the ball attachment-positioner-cylinder with O-rings for each angulation were obtained to analyze the relationship among them, and O-ring sections tested at each angulation were compared with an unused counterpart. A mixed linear model was used to analyze the data, and the comparison was performed by orthogonal contrasts (alpha = 0.05). Results At 0 degrees, the retention force decreased significantly over time and was significantly different in all comparisons, except from 12 to 18 months. When the implants were positioned at 7 degrees, the retention force was statistically different at 0 and 24 months. At 14 degrees, significant differences were found from 6 and 12 to 24 months. Conclusions Within the limitations of this study, it was concluded that O-rings for implants/attachments perpendicular to the occlusal plane were adequately retentive over the first year and that the retentive capacity of the O-ring was affected by implant inclination despite the proposed positioners. To cite this article: Rodrigues RCS, Faria ACL, Macedo AP, Sartori IAM, de Mattos MGC, Ribeiro RF. An in vitro study of non-axial forces upon the retention of an O-ring attachment. Clin. Oral Impl. Res. 20, 2009; 1314-1319. doi: 10.1111/j.1600-0501.2009.01742.x.

Relevance:

30.00%

Publisher:

Abstract:

This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. In order to acquire and study the signals, an experimental setup is implemented. The signals are processed with signal-processing tools such as the fast Fourier transform and the short-time Fourier transform. The results show that the Fourier spectrum of several signals presents a non-integer behavior. The experimental study provides valuable results that can assist in the design of a control system to deal with the unwanted effects of vibrations.
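As a minimal sketch of the two tools named above (the sampling rate and the synthetic signal are assumptions standing in for the experimental vibration records):

```python
import numpy as np
from scipy.fft import rfft, rfftfreq
from scipy.signal import stft

fs = 1000.0                               # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic stand-in for a vibration record: a tone plus a short impact burst.
x = np.sin(2 * np.pi * 50.0 * t)
x[1000:1020] += 5.0 * np.random.randn(20)

# Fast Fourier transform of the whole record.
spectrum = np.abs(rfft(x))
freqs = rfftfreq(x.size, d=1.0 / fs)

# Short-time Fourier transform to localize the impact in time and frequency.
f_stft, t_stft, Zxx = stft(x, fs=fs, nperseg=256)
```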

Relevance:

30.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow, contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
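To make the linear mixing model and the abundance constraints discussed above concrete, the sketch below generates a mixed pixel and estimates its abundances with a fully constrained least-squares step (nonnegativity via NNLS plus a heavily weighted sum-to-one row, a standard trick); the endmember matrix and noise level are illustrative assumptions, and this is not the Dirichlet/EM method proposed in the chapter:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(pixel, endmembers, delta=1e3):
    """Fully constrained least-squares abundances for a single pixel.

    pixel:      (L,) observed spectrum
    endmembers: (L, p) matrix whose columns are the endmember signatures
    Nonnegativity comes from the NNLS solver; the sum-to-one constraint is
    enforced approximately by appending a heavily weighted row of ones.
    """
    L, p = endmembers.shape
    M_aug = np.vstack([endmembers, delta * np.ones((1, p))])
    y_aug = np.append(pixel, delta)
    abundances, _ = nnls(M_aug, y_aug)
    return abundances

# Illustrative linear mixture y = M a + n with abundances on the simplex.
rng = np.random.default_rng(0)
M = rng.random((200, 3))                     # 3 endmembers, 200 spectral bands
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(200)
a_hat = unmix_fcls(y, M)
```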

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a predictive analytical and numerical modeling approach for the orthogonal cutting process is proposed to calculate temperature distributions and, subsequently, forces and stress distributions. The models proposed include a constitutive model for the material being cut based on the work of Weber, a model for the shear plane based on Merchant's model, a model describing the contribution of friction based on Zorev's approach, a model for the effect of wear on the tool based on the work of Waldorf, and a thermal model based on the works of Komanduri and Hou, with a fractional heat partition for a non-uniform distribution of the heat at the interfaces, extended to encompass a set of contributions to the global temperature rise of the chip, tool, and workpiece. The models proposed in this work try to avoid experimentally based values or expressions, and simplifying assumptions or suppositions, as much as possible. From a thermo-physical point of view, the results were affected not only by the mechanical or cutting parameters chosen, but also by their coupling effects, rather than only by the direct effect of varying a single parameter, as in simpler modeling approaches. The implementation of these models was performed in the MATLAB environment. Since it was possible to find in the literature all the parameters for AISI 1045 and AISI O2, these materials were used to run the simulations in order to avoid arbitrary assumptions.
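As a minimal, hedged illustration of just the Merchant-type shear-plane component named above (the function name, the numerical values, and the use of the classical Merchant shear-angle relation are illustrative assumptions, not the thesis's coupled model):

```python
import numpy as np

def merchant_shear_plane(rake_deg, mu, tau_s, h, b):
    """Classical Merchant-circle estimates for orthogonal cutting.

    rake_deg: tool rake angle (degrees), mu: mean friction coefficient,
    tau_s: shear flow stress of the work material (Pa),
    h: uncut chip thickness (m), b: width of cut (m).
    """
    alpha = np.radians(rake_deg)
    beta = np.arctan(mu)                      # friction angle
    phi = np.pi / 4.0 - (beta - alpha) / 2.0  # Merchant shear angle
    A_s = b * h / np.sin(phi)                 # shear-plane area
    F_s = tau_s * A_s                         # shear-plane force
    F_c = F_s * np.cos(beta - alpha) / np.cos(phi + beta - alpha)  # cutting force
    F_t = F_s * np.sin(beta - alpha) / np.cos(phi + beta - alpha)  # thrust force
    return np.degrees(phi), F_c, F_t

# Illustrative values only (not taken from the thesis).
phi_deg, F_c, F_t = merchant_shear_plane(rake_deg=6.0, mu=0.5,
                                         tau_s=450e6, h=0.1e-3, b=3e-3)
```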

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE: Accurate placement of an external ventricular drain (EVD) for the treatment of hydrocephalus is of paramount importance for its functionality and in order to minimize morbidity and complications. The aim of this study was to compare two different drain insertion assistance tools with the traditional free-hand anatomical landmark method, and to measure efficacy, safety, and precision. METHODS: Ten cadaver heads were prepared by opening large bone windows centered on Kocher's points on both sides. Nineteen physicians, divided into two groups (trainees and board-certified neurosurgeons), performed EVD insertions. The target for the ventricular drain tip was the ipsilateral foramen of Monro. Each participant inserted the external ventricular catheter in three different ways: 1) free-hand by anatomical landmarks, 2) neuronavigation-assisted (NN), and 3) XperCT-guided (XCT). The number of ventricular hits and dangerous trajectories, the procedure time, the radiation exposure of patients and physicians, the distance of the catheter tip to the target, and the size of deviations projected in the orthogonal planes were measured and compared. RESULTS: Insertion using XCT increased the probability of ventricular puncture from 69.2 to 90.2 % (p = 0.02). Non-assisted placements were significantly less precise (catheter tip to target distance 14.3 ± 7.4 mm versus 9.6 ± 7.2 mm, p = 0.0003). The procedure time increased from 3.04 ± 2.06 min to 7.3 ± 3.6 min (p < 0.001). The X-ray exposure for XCT was 32.23 mSv, but could be reduced to 13.9 mSv if patients were initially imaged in the hybrid operating suite. No supplementary radiation exposure is needed for NN if patients are imaged according to a navigation protocol initially. CONCLUSION: This ex vivo study demonstrates significantly improved accuracy and safety using either the NN- or XCT-assisted method. Therefore, efforts should be undertaken to implement these new technologies into daily clinical practice. However, the accuracy versus the urgency of an EVD placement has to be balanced, as the image-guided insertion technique implies a longer preparation time due to the specific image acquisition and trajectory planning.

Relevance:

30.00%

Publisher:

Abstract:

This paper explores the extent and limits of non-state authority in international affairs. While a number of studies have emphasised the role of state support and the ability of strategically situated actors to capture regulatory processes, they often fail to unpack the conditions under which this takes place. In order to probe the assumption that structural market power, backed by political support, equates to regulatory capture, the article examines the interplay of political and economic considerations in the negotiations to establish the worldwide interoperability standards needed for the development of Galileo as a genuinely European global navigation satellite system under civil control. It argues that industries supported and identified as strategic by public actors are more likely to capture standardisation processes than those with the largest market share expected to be created by the standards. This suggests that the influence of the space, air, and maritime traffic control industries closely related to the military-industrial complex remains disproportionate in comparison to the prospective market of location-based services, which is expected to vastly transform business practices, labour relations, and many aspects of our daily life.

Relevance:

30.00%

Publisher:

Abstract:

Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.

Relevance:

30.00%

Publisher:

Abstract:

A discussion on the expression proposed in [1]–[3] for deconvolving the wideband density function is presented. We prove here that such an expression reduces to be proportional to the wideband correlation receiver output, or continuous wavelet transform of the received signal with respect to the transmitted one. Moreover, we show that the same result has been implicitly assumed in [1] when the deconvolution equation is derived. We stress the fact that the analyzed approach is just the orthogonal projection of the density function onto the image of the wavelet transform with respect to the transmitted signal. Consequently, the approach can be considered a good representation of the density function only under the prior knowledge that the density function belongs to such a subspace. The choice of the transmitted signal is thus crucial to this approach.
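For reference (the notation here is assumed), the wideband correlation receiver output referred to above is the continuous wavelet transform of the received signal r with respect to the transmitted signal s:

```latex
W_{r}(\tau, a) \;=\; \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty}
    r(t)\, s^{*}\!\left(\frac{t-\tau}{a}\right) dt,
\qquad a > 0
```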

Relevance:

30.00%

Publisher:

Abstract:

A detailed mathematical analysis of the q = 1/2 non-extensive maximum entropy distribution of Tsallis is undertaken. The analysis is based upon the splitting of such a distribution into two orthogonal components. One of the components corresponds to the minimum norm solution of the problem posed by the fulfillment of the a priori conditions on the given expectation values. The remaining component takes care of the normalization constraint and is the projection of a constant onto the null space of the "expectation-values transformation".
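A hedged sketch of that orthogonal splitting, in notation assumed here rather than taken from the paper: if the a priori expectation-value conditions are written as a linear system A x = b, the distribution decomposes as

```latex
x \;=\; \underbrace{A^{\mathsf{T}}\!\left(A A^{\mathsf{T}}\right)^{-1} b}_{\text{minimum norm solution in } \mathcal{R}(A^{\mathsf{T}})}
\;+\; \underbrace{P_{\mathcal{N}(A)}\, c}_{\text{projection of a constant } c \text{ onto } \mathcal{N}(A)}
```

with the second component scaled so that the normalization constraint is satisfied.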

Relevance:

30.00%

Publisher:

Abstract:

The autonomic nervous system plays an important role in physiological and pathological conditions, and has been extensively evaluated by parametric and non-parametric spectral analysis. To compare the results obtained with fast Fourier transform (FFT) and the autoregressive (AR) method, we performed a comprehensive comparative study using data from humans and rats during pharmacological blockade (in rats), a postural test (in humans), and in the hypertensive state (in both humans and rats). Although postural hypotension in humans induced an increase in normalized low-frequency (LFnu) of systolic blood pressure, the increase in the ratio was detected only by AR. In rats, AR and FFT analysis did not agree for LFnu and high frequency (HFnu) under basal conditions and after vagal blockade. The increase in the LF/HF ratio of the pulse interval, induced by methylatropine, was detected only by FFT. In hypertensive patients, changes in LF and HF for systolic blood pressure were observed only by AR; FFT was able to detect the reduction in both blood pressure variance and total power. In hypertensive rats, AR presented different values of variance and total power for systolic blood pressure. Moreover, AR and FFT presented discordant results for LF, LFnu, HF, LF/HF ratio, and total power for pulse interval. We provide evidence for disagreement in 23% of the indices of blood pressure and heart rate variability in humans and 67% discordance in rats when these variables are evaluated by AR and FFT under physiological and pathological conditions. The overall disagreement between AR and FFT in this study was 43%.
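As a minimal sketch of the FFT-based side of such an analysis only (the AR estimator is omitted, and the resampling rate and the band limits LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz used here are conventional human HRV assumptions, not necessarily the study's settings):

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(pulse_intervals_s, fs_resample=4.0):
    """LF/HF ratio of a pulse-interval series from a Welch (FFT-based) PSD."""
    t = np.cumsum(pulse_intervals_s)
    # Resample the tachogram onto a uniform time grid before the PSD.
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_resample)
    x = np.interp(t_uniform, t, pulse_intervals_s)
    f, psd = welch(x - x.mean(), fs=fs_resample, nperseg=256)
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return lf / hf
```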

Relevance:

30.00%

Publisher:

Abstract:

The text examines Sergej Nikolajevič Bulgakov's description of the philosopheme as thoroughly "immanent" (viz., the immanence of man qua being, such that ontology in Bulgakov becomes a conceptual analogue for immanence) and the corollary that such immanence necessarily excludes the problematic of the "creation of the world." Because of this resolute immanence and the notion that the creation of the world in the form of creatio ex nihilo requires a non-immanent or non-ontological thought and concept, the problematic for Bulgakov is approached only by a theologeme. Appropriating this argument as material for a cursory philosopheme, the text attempts to transform Bulgakov's theologeme into a philosopheme through an elision of God and dogma that overdetermines the theologeme. This philosopheme (nascent within Bulgakov's work itself, in both his hesitation to the overdetermination of immanence and the commitment to the problem of creation) would be a thoroughly non-ontological philosopheme, one that allows for the treatment of the problematic of "creation" or singular ontogenesis, yet with the corollary that this philosopheme must rely on an "ontological zero." Such a philosopheme qua ontologically empty formula nevertheless remains ontologically significant insofar as it is to evince the limit of ontology, in the ontological zero's non-relationality to ontology.