917 results for Wavelet Packet Decomposition


Relevance:

20.00%

Publisher:

Abstract:

[EN] Background: The paradox of health refers to the simultaneous improvement in objective measures of health and increase in the reported prevalence of chronic conditions. The objective of this paper is to test the paradox of health in Catalonia from 1994 to 2006. Methods: Repeated cross-sectional study using the Catalonia Health Interview Surveys of 1994 and 2006. The approach used was the three-fold Blinder-Oaxaca decomposition, separating the differential in the mean visual analogue scale (VAS) value into a part due to group differences in the predictors (prevalence effect), a part due to differences in the coefficients (severity effect), and an interaction term. Variables included were the VAS value, education level, labour status, marital status, all chronic conditions common to the two cross-sections, and a variable for non-common chronic conditions and other conditions. Sample weights were applied. Results: Results show an increase in mean VAS for men aged 15-44 and a decrease in mean VAS for women aged 65-74 and 75 and over. The increase in mean VAS for men aged 15-44 could be explained by a decrease in the severity effect, which offsets the increase in the prevalence effect. The decrease in mean VAS for women aged 65-74 and 75 and over could be explained by an increase in the prevalence effect, which does not offset the decrease in the severity effect. Conclusions: The results of the present analysis corroborate the paradox of health hypothesis for the population of Catalonia, and highlight the need for caution when measuring population health over time, as well as the usefulness of such measures for detecting changes in the population's perceptions.
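For reference, a compact statement of the three-fold Blinder-Oaxaca decomposition the abstract refers to (standard textbook form; the subscripts 06 and 94 for the two survey waves are illustrative notation, not taken from the paper):

```latex
\overline{VAS}_{06}-\overline{VAS}_{94}
  =\underbrace{(\bar{X}_{06}-\bar{X}_{94})^{\top}\beta_{94}}_{\text{prevalence effect}}
  +\underbrace{\bar{X}_{94}^{\top}(\beta_{06}-\beta_{94})}_{\text{severity effect}}
  +\underbrace{(\bar{X}_{06}-\bar{X}_{94})^{\top}(\beta_{06}-\beta_{94})}_{\text{interaction}}
```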

Relevance:

20.00%

Publisher:

Abstract:

[ES] Scientific interest in meditation has grown significantly over the last decades. Meditation is perhaps the most suitable practice for investigating the intrinsic properties of the autonomic nervous system (ANS), since it involves a state of complete physical stillness and a certain isolation from the outside world (interiorization). During meditation, as there is no physical movement, the breathing pattern is adjusted according to the mental process. Thus, the modulation that respiration exerts on heart rate is related to the quality and focus of attention during the practice. From the results obtained in our research, we can conclude that there are specific patterns of heart rate variability (HRV) that appear to reflect phases or stages of the practice. Accordingly, subjects with similar meditation experience tend to show analogous patterns of heart rate variability. As the meditative practice progresses, the different oscillating systems tend to interact with one another, culminating in the appearance of a resonance effect that establishes a "new order" in the system. This process seems to reflect gradual changes in ANS activity towards a "low-cost mode of operation", in which the various oscillatory mechanisms involved in the control of blood circulation operate at the same frequency. The resonance phenomenon implies a "low-cost mode of operation" that probably favours the practice of meditation. This state of "order" (though not without variability) could therefore be regarded as an attractor towards which the system tends to evolve once an advanced level of mindfulness has been reached. The concept of an attractor, taken from modern theories dealing with the dynamics of complex non-linear systems, appears useful for describing, in a heuristic way, the behaviour of the system in deep meditative states. The results obtained in this thesis support and complement earlier work, and add the idea of a gradual physiological adaptation to the practice of mindfulness meditation, characterized by specific changes in the autonomic regulation of HRV at the different stages of the practice. For the analysis of the physiological series, which are strongly non-linear, techniques based on Wavelet analysis and Symbolic Dynamics have been implemented.
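As a rough illustration of the kind of wavelet-based HRV analysis mentioned at the end of the abstract, here is a minimal sketch (the wavelet 'db4', the decomposition level, and the synthetic tachogram are assumptions, not the thesis code):

```python
# Multilevel discrete wavelet decomposition of an RR-interval (tachogram) series
# with PyWavelets, summarized by the relative energy of each scale.
import numpy as np
import pywt

def wavelet_band_energies(rr_intervals, wavelet="db4", level=5):
    coeffs = pywt.wavedec(np.asarray(rr_intervals, dtype=float), wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])  # [cA_L, cD_L, ..., cD_1]
    return energies / energies.sum()

# Usage: a synthetic, evenly resampled tachogram (beat-to-beat intervals in seconds).
rr = 0.9 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(512) / 4.0)
print(wavelet_band_energies(rr))
```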

Relevance:

20.00%

Publisher:

Abstract:

Use of this material is restricted to students enrolled in the course for the current academic year, as it is protected under international copyright law; the Faculty of Economics of Forlì has already paid the corresponding fees for enrolled students. Students not enrolled in the course who are interested in the material should contact Professor Emanuele Padovani to arrange payment of the copyright fees. For any further information on the use of this material and on copyright, please refer to the disclaimer on the first page of the E-packet.

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, numerical methods for determining the eigenfunctions, their adjoints, and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system are investigated. First, the classical power iteration method is modified so that modes higher than the fundamental mode can be calculated. Thereafter, the Explicitly-Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is considered. Although the modified power iteration method is a computationally expensive algorithm, its main advantage is its robustness, i.e. the method always converges to the desired eigenfunctions without requiring the user to tune any parameter of the algorithm. The Arnoldi method, on the other hand, requires some parameters to be defined by the user, but it is a very efficient method for calculating eigenfunctions of large sparse systems of equations with minimal computational effort. These methods are thereafter used for off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor using, for instance, the Decay Ratio as a stability indicator might be difficult if the contributions of the individual modes are not separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, using the Arnoldi method to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, such oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
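The modal decomposition step can be pictured as projecting the measured flux signal onto the pre-computed eigenmodes, with the adjoint modes acting as weighting functions. A minimal NumPy sketch of that projection (array shapes, names, and the biorthogonal weighting below are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def modal_amplitudes(flux, modes, adjoint_modes):
    """flux: (n_nodes, n_time); modes, adjoint_modes: (n_nodes, n_modes).
    Returns a_m(t) = <phi+_m, flux(t)> / <phi+_m, phi_m> for each mode m."""
    numerators = adjoint_modes.T @ flux            # (n_modes, n_time)
    norms = np.sum(adjoint_modes * modes, axis=0)  # (n_modes,)
    return numerators / norms[:, None]

# Usage with random placeholders for two modes (e.g. fundamental + first azimuthal).
rng = np.random.default_rng(0)
phi = rng.standard_normal((1000, 2))
phi_adj = rng.standard_normal((1000, 2))
signal = phi @ np.array([[1.0], [0.3]]) * np.sin(np.linspace(0, 10, 200))
print(modal_amplitudes(signal, phi, phi_adj).shape)  # (2, 200)
```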

Relevance:

20.00%

Publisher:

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are a fundamental tool for determining the geodynamical state of the Earth, showing clear correlations with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The main disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution devised to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study the signal according to scale. Wavelets are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities, and sharp spikes. Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of applications of wavelets in the geophysical field are the object of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; then we apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities through comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, again starting with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
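The sparse-representation idea behind a wavelet parameterization can be illustrated in a few lines: expand a gridded map in a wavelet basis, keep only the largest coefficients, and reconstruct. A minimal PyWavelets sketch (the 2-D map, the 'haar' wavelet, level, and threshold are illustrative assumptions, not the parameterization used in the thesis):

```python
import numpy as np
import pywt

def wavelet_compress(model, wavelet="haar", level=3, keep=0.05):
    """Keep the largest `keep` fraction of wavelet coefficients of a 2-D model."""
    coeffs = pywt.wavedec2(model, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)
    sparse_coeffs = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
    return pywt.waverec2(sparse_coeffs, wavelet)

# Usage: a smooth synthetic phase-velocity map (km/s) on a 64x64 grid.
model = np.add.outer(np.linspace(3.5, 4.0, 64), np.zeros(64))
print(np.abs(model - wavelet_compress(model)).max())  # small reconstruction error
```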

Relevance:

20.00%

Publisher:

Abstract:

Wavelets are a new family of mathematical functions that allow a given function to be decomposed into its different frequency components. They combine the properties of orthogonality, compact support, localization in time and frequency, and fast algorithms, and are therefore regarded as a versatile tool both for their mathematical content and for their applications. Over the last decade they have spread and established themselves as one of the best tools in signal analysis, alongside, or even as a replacement for, Fourier methods. The account starts from their origin (1807), attributed to J. Fourier, considers the wavelet of A. Haar (1909), and then focuses on the 1980s, when J. Morlet and A. Grossmann gave a complete definition of wavelets in the field of quantum physics. Other mathematicians and scientists contributed to this type of mathematical function over the course of the twentieth century. Among them stands out the work (1987) of the Belgian mathematician and physicist I. Daubechies, who proposed compactly supported wavelets, considered the cornerstone of modern wavelet applications. After a mathematical treatment of wavelets, of the related algorithms, and of the comparison with the Fourier method, the main applications in various fields are reviewed: fingerprint compression, image compression, medicine, finance, astronomy, etc. Particular attention and depth are devoted to the applications of wavelets in the audio field, concerning audio compression, noise removal, and signal representation techniques. In conclusion, possible future developments and uses of wavelets are outlined.

Relevance:

20.00%

Publisher:

Abstract:

A security layer that provides user authentication and authorization and keeps track of all operations performed does not prevent a network from being subject to security incidents, which may arise from attempts to access hosts through illicit privilege escalation or from classic malicious programs such as viruses, trojans, and worms. One remedy for identifying potential threats is to use an IDS (Intrusion Detection System) device, whose task is to analyze the traffic and compare it with a set of signatures referring to known intrusion scenarios. Even with high hardware processing capacity, the resources may not be sufficient to guarantee correct operation of the service on the whole traffic crossing a network. The goal of this thesis is to create an application that performs a preliminary analysis, so as to lighten the amount of data to be submitted to the IDS in the actual traffic-scanning phase. To do this, statistics computed on data provided directly by the network devices are exploited, trying to identify traffic that uses known protocols and can therefore be judged, with good probability, as not dangerous.
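The pre-filtering idea can be sketched very simply: flow records matching a known, presumed-benign service are dropped from the set handed to the IDS, and everything else is forwarded for deep inspection. A hedged illustration (hypothetical field names and port list, not the thesis application):

```python
# Flows whose (protocol, destination port) pair matches a well-known service are
# skipped; the remainder is forwarded to the IDS for signature-based inspection.
WELL_KNOWN = {("tcp", 80), ("tcp", 443), ("tcp", 25), ("udp", 53)}

def prefilter(flows):
    """flows: iterable of dicts with 'proto' and 'dst_port' keys (assumed schema)."""
    flows = list(flows)
    to_ids = [f for f in flows if (f["proto"], f["dst_port"]) not in WELL_KNOWN]
    saved_fraction = 1 - len(to_ids) / max(len(flows), 1)
    return to_ids, saved_fraction

# Usage
sample = [{"proto": "tcp", "dst_port": 443}, {"proto": "tcp", "dst_port": 6667}]
suspect, saved = prefilter(sample)
print(len(suspect), saved)  # 1 0.5
```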

Relevance:

20.00%

Publisher:

Abstract:

The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of a four-year research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from collaboration with European industry in the framework of European-funded research projects. In this dissertation, the classical channel coding techniques that are traditionally applied at the physical layer find their application at upper layers, where the encoding units (symbols) are packets of bits rather than single bits, which is why such upper layer coding techniques are usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity inherent in the need to adopt a physical layer interleaver of reasonable size, so as to avoid increasing the modem complexity and the latency of all services. Packet layer techniques, thanks to the longer codeword duration (each codeword is composed of several packets of bits), provide intrinsically longer protection against long fading events. Furthermore, being implemented at the upper layer, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with the service requirements (e.g. latency constraints). Packet coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, such as DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC, and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer (UL). In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing the performance of the upper layer decoder to be predicted. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulations. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks related to the adoption of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called windowed erasure decoder). We analyze the performance of state-of-the-art LDPCCC when our decoder is adopted.
Finally, we propose a design rule that allows performance and latency to be traded off.
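For ideal (n, k) MDS packet-level codes, the post-decoding failure probability on a memoryless packet erasure channel has a standard closed form: decoding fails only if more than n - k packets are erased. A minimal sketch of that textbook formula (a simplification of the Land Mobile Satellite channel analyzed in the thesis, not a result taken from it):

```python
from math import comb

def mds_failure_probability(n, k, p):
    """P(decoding failure) for an (n, k) MDS packet code, i.i.d. erasure probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1, n + 1))

# Usage: 100 packets per codeword, 80 information packets, 10% packet loss.
print(mds_failure_probability(n=100, k=80, p=0.1))  # small residual failure probability
```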

Relevance:

20.00%

Publisher:

Abstract:

This thesis regards Wireless Sensor Networks (WSNs), one of the most important technologies of the twenty-first century, and the implementation of different packet-level erasure-correcting codes to cope with the "bursty" nature of the transmission channel and the possibility of packet losses during transmission. The limited battery capacity of each sensor node makes the minimization of power consumption one of the primary concerns in WSNs. Since, in each sensor node, communication is considerably more expensive than computation, the core idea is to invest computation within the network whenever possible in order to save on communication costs. The goal of the research was to evaluate a parameter, for example the Packet Erasure Ratio (PER), that makes it possible to verify the functionality and behavior of the created network, to validate the theoretical expectations, and to evaluate the convenience of introducing packet recovery techniques using different types of packet erasure codes in different types of networks. Thus, considering all the constraints on energy consumption in WSNs, the aim of this thesis is to minimize it by introducing encoding/decoding algorithms into the transmission chain, in order to prevent the retransmission of erased packets through the Packet Erasure Channel and save the energy used for each retransmitted packet. In this way it is possible to extend the lifetime of the entire network.
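The trade of computation for retransmissions can be shown in its simplest form with a single XOR parity packet per block, which lets the receiver rebuild one erased data packet without a retransmission. A hedged sketch (illustrative only; the thesis evaluates more general packet erasure codes):

```python
from functools import reduce

def make_parity(packets):
    """Bitwise XOR of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_single_loss(received_packets, parity):
    """XOR of the surviving packets and the parity yields the single missing packet."""
    return make_parity(list(received_packets) + [parity])

# Usage: 4 data packets of equal length, packet 2 is erased in transit.
data = [bytes([i] * 8) for i in range(4)]
parity = make_parity(data)
survivors = [p for i, p in enumerate(data) if i != 2]
print(recover_single_loss(survivors, parity) == data[2])  # True
```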

Relevance:

20.00%

Publisher:

Abstract:

This thesis deals with an investigation of Decomposition and Reformulation to solve Integer Linear Programming problems. This method is often a very successful approach computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median, and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is to develop a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of automatic decomposition and reformulation of the input problem, is applicable as a black-box solution algorithm, and works as a complement and an alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one (or more) subset(s) of constraints (the slaves) and to work on the partially convexified polyhedron(s) thus obtained. For a given MIP, several decompositions can be defined depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation), otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives stronger bounds than the original LP relaxation; in addition, it can be incorporated into a branch-and-bound scheme (Branch-and-Price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in the solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
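For reference, a minimal sketch of the Dantzig-Wolfe reformulation described above, written for the case of a single convexified slave with extreme points $x_p$, $p \in P$ (the notation is illustrative, not taken from the thesis):

```latex
\begin{aligned}
\min_{x,\lambda}\;& c^{\top}x \\
\text{s.t. }& A x \ge b && \text{(master constraints)} \\
            & x = \sum_{p\in P}\lambda_p x_p && \text{(linking constraints)} \\
            & \sum_{p\in P}\lambda_p = 1 && \text{(convexity constraint)} \\
            & \lambda_p \ge 0,\; p\in P, \qquad x \text{ integer.}
\end{aligned}
```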

Relevance:

20.00%

Publisher:

Abstract:

In the present thesis, a new diagnosis methodology based on an advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index that allows individual fault components to be tracked in a single frequency band is defined. A frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each fault frequency component is shifted into a prefixed frequency band. Then, the discrete Wavelet Transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following: - Capability of monitoring the fault evolution continuously over time under any transient operating condition; - Speed/slip measurement or estimation is not required; - Higher accuracy in filtering frequency components around the fundamental in the case of rotor faults; - Reduction in the likelihood of false indications by avoiding confusion with other fault harmonics (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band); - Low memory requirement due to the low sampling frequency; - Reduced processing latency (no need for repeated sampling operations).
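As an illustration of the energy-based index idea, here is a minimal sketch computing the fraction of signal energy falling in one wavelet sub-band (the wavelet 'db8', the level, the chosen band, and the synthetic test signal are assumptions, not the thesis implementation):

```python
import numpy as np
import pywt

def wavelet_band_fault_index(signal, wavelet="db8", level=6, band=0):
    """Energy in one sub-band over total energy; coeffs[0]=cA_L, coeffs[1]=cD_L, ..."""
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    band_energy = float(np.sum(coeffs[band] ** 2))
    total_energy = float(sum(np.sum(c ** 2) for c in coeffs))
    return band_energy / total_energy

# Usage: a synthetic current with a small low-frequency "fault" component that falls
# in the level-6 approximation band (band=0) at this sampling rate.
t = np.arange(0, 1, 1 / 2000)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 5 * t)
print(wavelet_band_fault_index(current))
```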

Relevance:

20.00%

Publisher:

Abstract:

Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid material in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian formulation coupled with a non-linear structure model, considering the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework which solves simultaneously for the fluid and structure unknowns within a single solver. We use the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result using the Backward-Euler time-stepping scheme for both the fluid and the solid parts and the finite element method for the space discretization has been proved. The resulting linear system has been solved by multilevel domain decomposition techniques. Our strategy is to solve several local subproblems over subdomain patches using the Schur-complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present corresponding results for two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in a laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we consider the academic numerical test that consists of simulating pressure wave propagation through a straight compliant vessel. All the tests show the applicability and the numerical efficiency of our approach for both two-dimensional and three-dimensional problems.
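As a very rough illustration of one ingredient named above, the Backward-Euler step used within the method of lines, here is a minimal Newton-based sketch for a generic semi-discrete system du/dt = f(u) (far removed from the monolithic FSI solver itself; the function names and the tiny stiff test system are invented for illustration):

```python
import numpy as np

def backward_euler_step(u_old, f, jac, dt, newton_iters=20, tol=1e-10):
    """Solve u - u_old - dt*f(u) = 0 for u with Newton's method."""
    u = u_old.copy()
    for _ in range(newton_iters):
        residual = u - u_old - dt * f(u)
        if np.linalg.norm(residual) < tol:
            break
        J = np.eye(len(u)) - dt * jac(u)
        u -= np.linalg.solve(J, residual)
    return u

# Usage: one implicit step for the stiff linear system du/dt = A u.
A = np.array([[-100.0, 1.0], [0.0, -1.0]])
print(backward_euler_step(np.array([1.0, 1.0]), lambda v: A @ v, lambda v: A, 0.1))
```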

Relevance:

20.00%

Publisher:

Abstract:

Decomposition-based approaches are recalled from both the primal and the dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulation to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required for solving the master problem. This trade-off is explored for several sets of instances and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is addressed. In this problem, an undirected graph with edge costs is given, together with a discrete set of balance matrices representing different supply/demand scenarios. The goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible for every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
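For reference, a minimal generic Frank-Wolfe iteration of the kind mentioned above for the route assignment problem (an illustrative sketch over the probability simplex, where the linear subproblem has a closed-form vertex solution; not the thesis implementation):

```python
import numpy as np

def frank_wolfe(grad, x0, iters=100):
    """Conditional-gradient minimization of a smooth convex function over the simplex."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # linear minimization oracle: best simplex vertex
        gamma = 2.0 / (k + 2.0)        # standard diminishing step-size rule
        x = (1 - gamma) * x + gamma * s
    return x

# Usage: minimize ||x - t||^2 over the simplex; the iterate approaches t.
t = np.array([0.2, 0.5, 0.3])
print(frank_wolfe(lambda v: 2 * (v - t), np.array([1.0, 0.0, 0.0])))
```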