224 results for convolution
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Non-intrusive diagnostic methods, such as spectral analysis of the radiation emitted by the system, have been used as a viable alternative for determining the temperature of combustion systems. Among them, temperature determination by natural emission spectroscopy has the advantage of requiring relatively simple experimental devices. Since chemiluminescent species are formed directly in the excited state, collecting and recording the radiation emission spectrum is enough to determine the temperature (CARINHANA, 2008). In this study we directly compared experimental spectra, obtained in the laboratory from an alcohol plasma, with theoretical spectra plotted by a computer program developed at the IEAv. The objective was to establish a fast and reliable method to measure the rotational temperature of the C2* radical. The results showed that the temperature of the plasma, which in turn can be taken as the rotational temperature of the system, is proportional to the pressure. The temperature values ranged from ca. 2300-2500 K at a pressure of 19 mmHg to 3100-3500 K at a pressure of 46 mmHg. The temperature values are somewhat smaller when the theoretical spectrum is modeled as a Lorentzian curve. The overlap of the spectra was better when using this profile, but the curves were still not exactly superimposed. The way to improve the overlap between the theoretical and experimental spectra is to use a curve that is the convolution of the two profiles analyzed, Lorentzian and Gaussian. This curve is called the Voigt profile, and it will be implemented and studied in future work.
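As the abstract notes, the Voigt profile is the convolution of a Lorentzian and a Gaussian line shape. A minimal numerical sketch of that convolution (the widths are arbitrary illustrations, not fitted spectroscopic parameters; `scipy.special.voigt_profile` is used only as a cross-check):

```python
import numpy as np
from scipy.special import voigt_profile

sigma = 0.5    # Gaussian standard deviation (e.g. Doppler broadening)
gamma = 0.3    # Lorentzian half-width (e.g. collisional broadening)

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
lorentz = (gamma / np.pi) / (x**2 + gamma**2)

# Discrete approximation of the continuous convolution integral
voigt_num = np.convolve(gauss, lorentz, mode="same") * dx

# Cross-check against SciPy's closed-form (Faddeeva-function) evaluation
voigt_ref = voigt_profile(x, sigma, gamma)
```

The numerically convolved curve agrees with the closed-form profile up to discretization and tail-truncation error.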
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The linearity assumption in structural dynamics analysis is a severe practical limitation. Furthermore, in the investigation of mechanisms present in fighter aircraft, such as aeroelastic nonlinearity, friction, or gaps in wing-load-payload mounting interfaces, it is mandatory to use a nonlinear analysis technique. Among the different approaches that can be applied to this problem, the Volterra theory is an interesting strategy, since it is a generalization of the linear convolution: it represents the response of a nonlinear system as a sum of linear and nonlinear components. Thus, this paper aims to use the discrete-time version of the Volterra series, expanded with Kautz filters, to characterize the nonlinear dynamics of an F-16 aircraft. To illustrate the approach, a non-parametric model is identified and characterized using data obtained during a ground vibration test performed on an F-16 wing-to-payload mounting interface. Several input amplitudes applied through two shakers are used to reveal softening nonlinearities present in the acceleration data. The results of the analysis show the capability of the Volterra series to give some insight into the nonlinear dynamics of the F-16 mounting interfaces. The biggest advantage of this approach is the separation of the linear and nonlinear contributions through multiple convolutions with the Volterra kernels.
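The separation into linear and nonlinear convolution terms can be sketched with a toy second-order discrete-time Volterra model. The kernels and input below are illustrative inventions, not the identified F-16 model (which expands the kernels with Kautz filters):

```python
import numpy as np

# Toy discrete-time Volterra model of order two:
#   y[n] = sum_k h1[k] u[n-k] + sum_{k1,k2} h2[k1,k2] u[n-k1] u[n-k2]
M = 8                                  # kernel memory length
h1 = 0.5 ** np.arange(M)               # first-order (linear) kernel
h2 = 0.05 * np.outer(h1, h1)           # second-order kernel (separable toy)

def volterra_response(u, h1, h2):
    N, M = len(u), len(h1)
    # Past-input matrix U[n, k] = u[n - k] (zero initial conditions)
    U = np.column_stack([np.concatenate([np.zeros(k), u[:N - k]])
                         for k in range(M)])
    y_lin = U @ h1                                   # linear convolution
    y_quad = np.einsum('nk,kl,nl->n', U, h2, U)      # double convolution
    return y_lin + y_quad

rng = np.random.default_rng(0)
u = rng.standard_normal(256)
y = volterra_response(u, h1, h2)       # linear + nonlinear contributions
```

Scaling the input amplitude scales the quadratic term quadratically, which is exactly how amplitude-dependent (nonlinear) behavior is exposed in such tests.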
Abstract:
A transmission line is characterized by the fact that its parameters are distributed along its length. This makes the voltages and currents along the line behave like waves, which are described by differential equations. In general, these differential equations are difficult to solve in the time domain because of the convolution integral, but in the frequency domain they become simpler and their solutions are known. The transmission line can also be represented by a cascade of π circuits. This model has the advantage of being developed directly in the time domain, but it requires the application of numerical integration methods. In this work, the model that treats the parameters as distributed along the line (Universal Line Model) is compared with the model that treats them as lumped (π circuit model), using the trapezoidal integration method, Simpson's rule, and the Runge-Kutta method, on a single-phase transmission line 100 km long operating under load. © 2003-2012 IEEE.
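The trapezoidal integration step used for the lumped (π circuit) model can be sketched on a generic state equation dx/dt = Ax + Bu; the R, L, C values below are arbitrary illustrations, not the paper's 100 km line data:

```python
import numpy as np

# Trapezoidal-rule time stepping for a lumped-parameter state equation,
# the kind of system obtained from a pi-circuit representation.
R, L, C = 50.0, 1e-3, 1e-8
A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, 0.0]])        # state: [inductor current, capacitor voltage]
B = np.array([1.0 / L, 0.0])

def trapezoidal_step(x, u_now, u_next, dt):
    # (I - dt/2 A) x_{n+1} = (I + dt/2 A) x_n + dt/2 B (u_n + u_{n+1})
    I = np.eye(len(x))
    rhs = (I + 0.5 * dt * A) @ x + 0.5 * dt * B * (u_now + u_next)
    return np.linalg.solve(I - 0.5 * dt * A, rhs)

dt, steps = 1e-7, 2000
x = np.zeros(2)
for _ in range(steps):
    x = trapezoidal_step(x, 1.0, 1.0, dt)   # unit-step source voltage
# After the transient decays, the capacitor voltage x[1] approaches
# the source value.
```

The implicit trapezoidal rule is A-stable, which is why it is the standard choice for the stiff equations that arise from cascaded π sections.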
Abstract:
Cellular rheology has recently undergone rapid development, with particular attention to the mechanical properties of the cytoskeleton and its main components: actin filaments, intermediate filaments, microtubules, and crosslinking proteins. However, it is not clear which cellular structural changes directly affect the cell's mechanical properties. Thus, in this work, we aimed to quantify the structural rearrangement of these fibers that may result in changes in cell mechanics. We created an image analysis platform to study smooth muscle cells from different arteries (aorta, mammary, renal, carotid, and coronary) and processed, respectively, 31, 29, 31, 30, and 35 cell images obtained by confocal microscopy. The platform was developed in Matlab (MathWorks) and uses the Sobel operator to determine the orientation of the actin fibers, labeled with phalloidin, in the cell image. The Sobel operator is used as a filter capable of calculating the pixel brightness gradient, point by point, in the image. The operator uses vertical and horizontal convolution kernels to calculate the magnitude and the angle of the pixel intensity gradient. The image analysis followed this sequence: (1) open a given set of cell images to be processed; (2) set a fixed threshold to eliminate noise, based on Otsu's method; (3) detect the fiber edges in the image using the Sobel operator; and (4) quantify the actin fiber orientation. Our first result is the probability distribution Π(Δθ) of finding a given fiber angle deviation (Δθ) from the main cell fiber orientation θ0. Π(Δθ) follows an exponential decay Π(Δθ) = A exp(-αΔθ) with respect to θ0. We defined and determined a misalignment index α of the fibers for each kind of artery: coronary αCo = (1.72 ± 0.36) rad⁻¹; renal αRe = (1.43 ± 0.64) rad⁻¹; aorta αAo = (1.42 ± 0.43) rad⁻¹; mammary αMa = (1.12 ± 0.50) rad⁻¹; and carotid αCa = (1.01 ± 0.39) rad⁻¹.
The α values of the coronary and carotid arteries are statistically different (p < 0.05) among all analyzed cells. We discuss our results by correlating the misalignment index data with the experimental cell mechanical properties obtained by Optical Magnetic Twisting Cytometry on the same group of cells.
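Steps (2)-(4) of the pipeline above can be sketched in a few lines; the image here is synthetic (a bright vertical "fiber" on noise), not the confocal data, and a fixed fraction of the peak magnitude stands in for Otsu's threshold:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.random((128, 128)) * 0.1     # low-amplitude background noise
img[:, 60:68] = 1.0                    # bright vertical stripe ("fiber")

gx = ndimage.sobel(img, axis=1)        # horizontal convolution kernel
gy = ndimage.sobel(img, axis=0)        # vertical convolution kernel
magnitude = np.hypot(gx, gy)           # gradient magnitude, point by point
angle = np.arctan2(gy, gx)             # gradient angle in radians

mask = magnitude > 0.5 * magnitude.max()   # keep only strong fiber edges
# angle[mask] holds the edge orientations from which a deviation
# distribution like the one above can be built.
```

For the vertical stripe, the gradient at the fiber edges points horizontally, so the masked angles cluster near 0 and π.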
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required both in the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing its sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
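The convolution-kernel idea of counting shared substructures can be illustrated with a deliberately simplified subtree kernel: count complete subtrees common to two trees. (Production tree kernels, such as the subtree and subset tree kernels above, are computed with dynamic programming over node pairs; this toy multiset version is only a sketch.)

```python
from collections import Counter

# Trees are nested tuples: (label, child, child, ...)

def subtrees(t, bag=None):
    """Collect every complete subtree of t into a multiset."""
    if bag is None:
        bag = Counter()
    bag[t] += 1
    for child in t[1:]:
        subtrees(child, bag)
    return bag

def subtree_kernel(t1, t2):
    """K(t1, t2) = number of shared complete subtrees, with multiplicity."""
    b1, b2 = subtrees(t1), subtrees(t2)
    return sum(b1[s] * b2[s] for s in b1)

t1 = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',)))
t2 = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',), ('NP', ('D',), ('N',))))
print(subtree_kernel(t1, t2))   # prints 7
```

Note how the repeated NP subtree in `t2` contributes with multiplicity, which is exactly the counting behavior that causes sparsity when node labels come from a large domain.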
Abstract:
We investigate the statics and dynamics of a glassy, non-entangled, short bead-spring polymer melt with molecular dynamics simulations. Temperature ranges from slightly above the mode-coupling critical temperature to the liquid regime where features of a glassy liquid are absent. Our aim is to work out the polymer-specific effects on the relaxation and particle correlation. We find the intra-chain static structure unaffected by temperature; it depends only on the distance of monomers along the backbone. In contrast, the distinct inter-chain structure shows pronounced site-dependent effects at the length scales of the chain and the nearest-neighbor distance. There, we also find the strongest temperature dependence, which drives the glass transition. Both the site-averaged coupling of the monomer and center of mass (CM) and the CM-CM coupling are weak and presumably not responsible for a peak in the coherent relaxation time at the chain's length scale. Chains rather emerge as soft, easily interpenetrating objects. Three-particle correlations are well reproduced by the convolution approximation, with the exception of model-dependent deviations. In the spatially heterogeneous dynamics of our system we identify highly mobile monomers which tend to follow each other in one-dimensional paths forming "strings". These strings have an exponential length distribution and are generally short compared to the chain length. Thus, a relaxation mechanism in which neighboring mobile monomers move along the backbone of the chain seems unlikely. However, the correlation of bonded neighbors is enhanced. When liquids are confined between two surfaces in relative sliding motion, kinetic friction is observed. We study a generic model setup by molecular dynamics simulations for a wide range of sliding speeds, temperatures, loads, and lubricant coverings for simple and molecular fluids. Instabilities in the particle trajectories are identified as the origin of kinetic friction.
They lead to high particle velocities of fluid atoms which are gradually dissipated, resulting in a friction force. In commensurate systems, fluid atoms follow continuous trajectories for sub-monolayer coverings and, consequently, friction vanishes at low sliding speeds. For incommensurate systems, the velocity probability distribution exhibits approximately exponential tails. We connect this velocity distribution to the kinetic friction force, which reaches a constant value at low sliding speeds. This approach agrees well with the friction obtained directly from simulations and explains Amontons' law on the microscopic level. Molecular bonds in commensurate systems lead to incommensurate behavior, but do not change the qualitative behavior of incommensurate systems. However, crossed chains form stable load-bearing asperities which strongly increase friction.
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with speeding up the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a sensible reduction in iterations and CPU time. The second part is devoted to the study of the temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers, varying the age of the snow or ice layer, its thickness, the presence or absence of underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
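The "Legendre moments by direct quadrature" step can be sketched generically; the kernel `f` below is an arbitrary forward-peaked stand-in, not the Doppler-broadened elastic scattering kernel of the thesis:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_moments(f, lmax, npts=64):
    """f_l = (2l+1)/2 * integral_{-1}^{1} f(mu) P_l(mu) dmu, l = 0..lmax."""
    mu, w = leggauss(npts)                 # Gauss-Legendre nodes and weights
    fmu = f(mu)
    return np.array([(2 * l + 1) / 2 * np.sum(w * fmu * Legendre.basis(l)(mu))
                     for l in range(lmax + 1)])

f = lambda mu: np.exp(3.0 * (mu - 1.0))    # toy forward-peaked kernel
moments = legendre_moments(f, 5)
```

Gauss-Legendre quadrature with n points integrates polynomials up to degree 2n-1 exactly, so the moments of smooth kernels converge rapidly as `npts` grows, which is what a convergence analysis like the one described would quantify.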
Abstract:
In this thesis we studied a method to model and virtualize, through Matlab algorithms, the harmonic distortions of a nonlinear audio device, i.e. a "tool" that, when driven by an audio signal, modifies it by introducing components that were not present before. The device chosen for this study is the BOSS SD-1 Super OverDrive pedal for electric guitar, and the mathematical tool that provides its model is the Volterra series expansion. The Volterra series expansion is widely used in the study of nonlinear physical systems whenever one wants to model a system that presents itself as a "black box". The Nonlinear Convolution method designed by Angelo Farina successfully applied this expansion to musical acoustics as well: using an easily realizable measurement technique and the model provided by the Diagonal Volterra series, the method characterizes a nonlinear audio device by means of the nonlinear impulse responses that the device produces in response to a suitable test signal (called the Exponential Sine Sweep). The impulse responses of the device are then used to derive the Volterra kernels of the series. The use of this method allowed the University of Bologna to obtain a patent for software that virtualizes the nonlinearities of an audio system in post-processing. This thesis takes up the work that led to the patent and introduces two innovations: the test signal applied to the device was changed (a Synchronized Sine Sweep was used in place of the Exponential Sine Sweep), and a first attempt was made to steer the virtualization toward real-time processing, by implementing a (post-processing) procedure that builds the kernels as a function of the volume given as input to the nonlinear device.
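A minimal sketch of the Exponential Sine Sweep and of Farina's inverse filter, the measurement pair underlying the Nonlinear Convolution method (sample rate, band, and duration are illustrative, not the thesis settings):

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
f1, f2, T = 20.0, 20000.0, 2.0
t = np.arange(int(fs * T)) / fs
R = np.log(f2 / f1)

# Exponential Sine Sweep: instantaneous frequency grows exponentially
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1.0))

# Inverse filter: time-reversed sweep with a decaying amplitude envelope
# that compensates the sweep's 1/f energy distribution
inverse = sweep[::-1] * np.exp(-t * R / T)

# Convolving the sweep with its inverse yields an impulse-like pulse;
# for a nonlinear device, harmonic-distortion responses appear at
# earlier, log-spaced delays and can be windowed out as kernel data.
pulse = fftconvolve(sweep, inverse)
peak = int(np.argmax(np.abs(pulse)))
```

In a real measurement one convolves the device's *recorded output* (not the sweep itself) with `inverse`, then windows the separated impulse responses to build the Diagonal Volterra kernels.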
Abstract:
The Rankin convolution type Dirichlet series D_{F,G}(s) of Siegel modular forms F and G of degree two, which was introduced by Kohnen and the second author, is computed numerically for various F and G. In particular, we prove that the series D_{F,G}(s), which share the same functional equation and analytic behavior as the spinor L-functions of eigenforms of the same weight, are not linear combinations of those. In order to conduct these experiments, a numerical method to compute the Petersson scalar products of Jacobi forms is developed and discussed in detail.
Abstract:
OBJECTIVES: This study sought to evaluate the diagnostic accuracy of 64-slice multislice computed tomography coronary angiography (CTCA) for coronary binary in-stent restenosis (ISR), compared with invasive coronary angiography (ICA). BACKGROUND: Noninvasive detection of ISR would make patient follow-up easier and safer. METHODS: We performed CTCA in 81 patients after stent implantation, and 125 stented lesions were scanned. Two sets of images were reconstructed with different types of convolution kernels. On CTCA, neointimal proliferation was visually evaluated according to luminal contrast attenuation inside the stent. Lesions were graded as follows: grade 1, none or slight neointimal proliferation; grade 2, neointimal proliferation with no significant stenosis (<50%); grade 3, neointimal proliferation with moderate stenosis (≥50%); and grade 4, neointimal proliferation with severe stenosis (≥75%). Grades 3 and 4 were considered binary ISR. The diagnostic accuracy of CTCA compared with ICA was evaluated. RESULTS: By ICA, 24 ISRs were diagnosed. Sensitivity, specificity, positive predictive value, and negative predictive value were 92%, 81%, 54%, and 98% for the overall population, and 91%, 93%, 77%, and 98% when excluding unassessable segments (15 segments, 12%). For assessable segments, CTCA correctly diagnosed 20 of the 22 ISRs detected by ICA. Six lesions without ISR were overestimated as ISR by CTCA. As the grade of neointimal proliferation by CTCA increased, the median percent diameter stenosis increased linearly. CONCLUSIONS: Binary ISR can be excluded with high probability by CTCA, with a moderate rate of false-positive results.
Abstract:
PURPOSE Hodgkin lymphoma (HL) is a highly curable disease. Reducing late complications and second malignancies has become increasingly important. Radiotherapy target paradigms are currently changing and radiotherapy techniques are evolving rapidly. DESIGN This overview reports to what extent target volume reduction in involved-node (IN) radiotherapy and advanced radiotherapy techniques, such as intensity-modulated radiotherapy (IMRT) and proton therapy, compared with involved-field (IF) and 3D radiotherapy (3D-RT), can reduce high doses to organs at risk (OAR), and examines the issues that still remain open. RESULTS Although no comparison of all available techniques on identical patient datasets exists, clear patterns emerge. Advanced dose-calculation algorithms (e.g., convolution-superposition/Monte Carlo) should be used in mediastinal HL. INRT consistently reduces treated volumes when compared with IFRT, with the exact amount depending on the INRT definition. The number of patients that might significantly benefit from highly conformal techniques such as IMRT over 3D-RT regarding high-dose exposure to OAR is smaller with INRT. The impact of the larger volumes treated with low doses in advanced techniques is unclear. The type of IMRT used (static/rotational) is of minor importance. All advanced photon techniques result in similar potential benefits and disadvantages; therefore, only the degree of modulation should be chosen based on individual treatment goals. Treatment in deep-inspiration breath hold is being evaluated. Protons theoretically provide both excellent high-dose conformality and reduced integral dose. CONCLUSION Further reduction of treated volumes most effectively reduces OAR dose, most likely without disadvantages if the excellent control rates achieved currently are maintained. For both IFRT and INRT, the benefits of advanced radiotherapy techniques depend on the individual patient/target geometry.
Their use should therefore be decided case by case with comparative treatment planning.
Abstract:
Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various ways in which these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. As a first result, this approach yields a huge number of accepted answers due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability, and running speed for this detection method. Since all method parameters are based on statistics, the true-alarm as well as the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
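The matched-filter idea can be sketched as follows: convolve the image with line-shaped kernels of varying orientation and accept responses above a threshold derived from the background statistics. The synthetic image, kernel length, and 5-sigma criterion are illustrative, not the paper's CCD data or its full set of acceptance criteria:

```python
import numpy as np
from scipy import ndimage

def line_kernel(length, angle_deg):
    """Normalized kernel with ones along a line through the center."""
    k = np.zeros((length, length))
    c = length // 2
    d = np.arange(-c, c + 1)
    rows = np.round(c - d * np.sin(np.radians(angle_deg))).astype(int)
    cols = np.round(c + d * np.cos(np.radians(angle_deg))).astype(int)
    k[rows, cols] = 1.0
    return k / k.sum()

rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (200, 200))     # unit-variance background
rr = np.arange(60, 140)
img[rr, rr] += 3.0                         # faint diagonal streak

best = -np.inf
for angle in (0, 45, 90, 135):             # vary the filter orientation
    resp = ndimage.convolve(img, line_kernel(31, angle))
    best = max(best, float(resp.max()))

# The filter averages roughly 22-31 pixels, so the filtered background
# noise has a standard deviation of about sigma / sqrt(22).
threshold = 5.0 * img.std() / np.sqrt(22.0)
detected = best > threshold
```

A filter aligned with the streak averages the streak signal coherently while suppressing the background by the square root of the number of averaged pixels, which is why the faint streak rises well above the threshold.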