870 results for Time-frequency distribution
Abstract:
The recession of coastal cliffs is a widespread phenomenon on rocky shores exposed to the combined action of the marine and meteorological processes that occur along the shoreline. The phenomenon manifests itself violently and sporadically as gravitational ground movements, and it can cause material and/or human losses. Although knowledge of these erosion hazards is vital for proper coastal management, the development of predictive cliff-erosion models is limited by the complex interactions between environmental processes and material properties over a range of temporal and spatial scales. Published prediction models are scarce and have important drawbacks: a) extrapolation models, which project historical records into the future; b) empirical models, which use historical records to study the system response to a change in a single parameter; c) stochastic models, which determine the timing and magnitude of future events by extrapolating probability distributions derived from historical catalogues; d) process-response models, whose stability and error propagation remain unexplored; and e) models based on partial differential equations (PDEs), which are computationally expensive and not very accurate. The first part of this thesis describes the main features of the most recent models of each type and, for the most commonly used ones, gives their ranges of application, advantages and disadvantages. Finally, as a synthesis of the most relevant processes considered by the reviewed models, a conceptual diagram of coastal recession is presented. This conceptual model gathers the most influential processes that must be taken into account when using or building a coastal recession model to assess the hazard (timing/frequency) of the phenomenon in the short to medium term. The process-response coastal recession model developed in this thesis incorporates the geomechanical behaviour of materials whose compressive strength does not exceed 5 MPa. The model simulates the spatial and temporal evolution of a 2D cliff profile that may consist of heterogeneous materials. To do so, the marine dynamics (mean sea level, changes in mean lake level, tides and waves) are coupled with the evolution of the ground (erosion, cliff face failure and formation of a protective colluvial wedge). In its different variants, the model can include the geomechanical stability analysis of the materials, the effect of debris at the cliff foot, groundwater effects, beach and run-up effects, changes in mean sea level, and seasonal or inter-annual changes in the mean level of the water body (lakes). The discretization error of the model and its propagation in time have been studied from exact solutions for the first two tidal periods, for different numerical approximations in both time and space. The results justify the choices that minimize the error and identify the most suitable approximation methods for subsequent use in the modelling. The model has been validated against field data from the Holderness Coast, Yorkshire, UK, and from the north shore of Lake Erie, Ontario, Canada.
The results represent an important step forward in coastal recession modelling, in particular in linking material properties to the processes of cliff recession, in considering the influence of groundwater and the oversteepening of rock profiles, and in assessing the response to changing conditions caused by climate change (e.g., mean sea level, changes in lake levels, etc.).
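As a rough illustration of what a process-response recession step of this general kind can look like, the sketch below advances a 2D cliff profile under a tide-modulated hydraulic forcing divided by an assumed material resistance. It is not the thesis's model; every name, parameter and functional form is an assumption made for illustration only.

```python
# Minimal sketch (not the thesis model): a generic process-response recession
# step in which horizontal erosion at each elevation is driven by a
# tide-modulated forcing divided by an assumed material resistance.
import numpy as np

def erode_step(profile_x, z, t, dt, resistance, F0=1.0, tide_amp=2.0, tide_T=12.42 * 3600):
    """Advance the horizontal position of a 2D cliff profile by one time step.

    profile_x : horizontal position of the rock face at each elevation z (m)
    resistance: assumed material strength term at each elevation (arbitrary units)
    """
    water_level = tide_amp * np.sin(2 * np.pi * t / tide_T)   # simple tidal signal
    # Forcing concentrated near the instantaneous water level (Gaussian spread).
    forcing = F0 * np.exp(-((z - water_level) ** 2) / 0.5)
    return profile_x + dt * forcing / resistance               # retreat landward

# Example: uniform material, 1 m vertical resolution from -5 m to +10 m.
z = np.arange(-5.0, 10.0, 1.0)
x = np.zeros_like(z)
for step in range(1000):
    x = erode_step(x, z, t=step * 60.0, dt=60.0, resistance=np.full_like(z, 1e4))
print(x.round(3))  # cumulative retreat (m) after ~17 hours of simulated forcing
```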
Abstract:
Safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the pre-existing state of stress is hard to know, and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis is applicable to this kind of structure, and it has been considered a suitable tool ever since. In cases where no sliding occurs, the theorems of standard limit analysis constitute a formidable tool because of their simplicity and robustness. It is not necessary to know the actual state of stress: it is enough to find any equilibrium solution that satisfies the limit conditions of the material, in the certainty that its load will be less than or equal to the actual load at the onset of collapse. Furthermore, this onset-of-collapse load is unique (uniqueness theorem), and it can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, when onset-of-collapse mechanisms involving sliding may exist, any solution must satisfy both the static and the kinematic constraints, as well as a special kind of disjunctive constraints that link them and can be formulated as complementarity constraints. In this latter case the existence of a unique solution is not guaranteed, so other methods are needed to treat the uncertainty associated with its multiplicity. In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This method is easy to formulate mathematically but computationally intractable, because the complementarity constraints (y ≥ 0, z ≥ 0, y·z = 0) are neither convex nor smooth. The resulting decision problem is NP-complete (Non-deterministic Polynomial complete), and the corresponding global optimization problem is NP-hard. Nevertheless, obtaining a solution (with no guarantee of success) is an affordable problem. This thesis proposes to solve that problem by Sequential Linear Programming, exploiting the special structure of the complementarity constraints, which written in bilinear form are y·z = 0, y ≥ 0, z ≥ 0, and the fact that the complementarity error (in bilinear form) is an exact penalty function. But when it comes to finding the worst solution, the equivalent global optimization problem is intractable (NP-hard). Moreover, as long as no maximum or minimum principle has been demonstrated, it is questionable whether the effort spent on approximating this minimum is justified. In Chapter 5, the frequency distribution of the load factor over all possible onset-of-collapse solutions is obtained for a simple example. For this purpose, solutions are sampled by the Monte Carlo method, using an exact polytope computation method as a benchmark.
The ultimate goal is to assess to what extent the search for the global minimum is justified and to propose an alternative, probability-based approach to safety assessment. The frequency distributions of the load factors of the onset-of-collapse solutions obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, and increasingly so the more perfect and continuous the contact is. These results confirm the interest of developing new probabilistic methods. In Chapter 6, such a method is proposed, based on obtaining multiple solutions from random starting points and qualifying the results by means of Order Statistics. The purpose is to determine, for each solution, the probability of onset of collapse. The method is applied (following the expectation reduction proposed by Ordinal Optimization) to obtain a solution that lies within a given percentage of the worst ones. Finally, in Chapter 7, hybrid methods incorporating metaheuristics are proposed for the cases in which the search for the global minimum is justified.
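For readability, the complementarity condition referred to above can be stated compactly. Here y and z stand for the paired non-negative variable vectors in generic notation; this is a schematic statement, not the thesis's exact formulation.

```latex
% Generic complementarity constraints in bilinear form, and the associated
% exact penalty term (schematic notation, not the thesis's variables).
\[
  y \ge 0, \qquad z \ge 0, \qquad y^{\mathsf{T}} z = 0
  \quad\Longleftrightarrow\quad 0 \le y \perp z \ge 0
\]
\[
  \phi(y,z) = y^{\mathsf{T}} z
  \qquad \text{(bilinear complementarity error, used as an exact penalty).}
\]
```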
Abstract:
Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasiperiodic motion pattern that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this movement by combining independent component analysis (ICA) and image registration. First, we use ICA and a time-frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components except the one related to the motion. The resulting image series therefore does not exhibit motion, and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of the 13 patient data sets (39 distinct slices). We compared linear, non-linear and combined ICA-based registration approaches, as well as previously published motion compensation schemes. Considering run-time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, achieving registration of the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves the Pearson correlation coefficient between manually and automatically obtained time-intensity curves from 0.84 ± 0.19 before registration to 0.96 ± 0.06 after registration.
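A minimal sketch of the ICA-based synthetic-reference idea described above: decompose the frame series with FastICA, flag the component whose temporal course is dominated by an assumed respiratory frequency band, and rebuild the series without it. The component-selection rule, band limits and all names are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: build motion-free synthetic reference frames by dropping the ICA
# component whose temporal course sits in an assumed breathing band.
import numpy as np
from sklearn.decomposition import FastICA

def motion_free_references(frames, frame_rate, n_components=5, resp_band=(0.15, 0.6)):
    """frames: array (T, H, W); returns a synthetic reference series (T, H, W)."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)

    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X)                      # temporal sources, shape (T, k)

    # Score each component by its spectral energy inside the assumed breathing band.
    freqs = np.fft.rfftfreq(T, d=1.0 / frame_rate)
    spectra = np.abs(np.fft.rfft(S, axis=0)) ** 2
    in_band = (freqs >= resp_band[0]) & (freqs <= resp_band[1])
    motion_idx = int(np.argmax(spectra[in_band].sum(axis=0)))

    S_clean = S.copy()
    S_clean[:, motion_idx] = 0.0                  # drop the motion-related source
    X_ref = S_clean @ ica.mixing_.T + ica.mean_   # recombine remaining components
    return X_ref.reshape(T, H, W)

# Tiny synthetic example: 58 frames of 32x32 "perfusion" images.
rng = np.random.default_rng(0)
frames = rng.random((58, 32, 32))
print(motion_free_references(frames, frame_rate=1.0).shape)
```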
Abstract:
By spectral analysis, and using joint time-frequency representations, we present the theoretical basis for designing invariant band-limited Airy pulses with an arbitrary degree of robustness over an arbitrary range of single-mode fiber chromatic dispersion. Numerically simulated examples confirm the theoretically predicted partial invariance of the pulse as it propagates along the fiber.
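The numerical experiment behind such simulations can be reduced to applying the usual quadratic spectral phase of second-order fiber dispersion to a truncated, band-limited Airy pulse. The sketch below does exactly that; the pulse and fiber parameters are generic assumptions, not values taken from the paper.

```python
# Sketch: propagate a truncated, band-limited Airy pulse through second-order
# chromatic dispersion by multiplying its spectrum by exp(i*beta2/2 * w^2 * z).
import numpy as np
from scipy.special import airy

t0 = 1e-12                      # characteristic time scale (1 ps), assumed
a = 0.05                        # exponential truncation of the ideal Airy pulse
beta2 = -21.7e-27               # s^2/m, typical SMF anomalous dispersion, assumed
z = 10e3                        # propagation distance: 10 km

t = np.linspace(-50, 150, 8192) * t0
field0 = airy(t / t0)[0] * np.exp(a * t / t0)         # finite-energy Airy pulse

dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)          # angular frequency grid
spectrum = np.fft.fft(field0)
spectrum *= np.abs(w) < (3.0 / t0)                    # crude band limitation
field_z = np.fft.ifft(spectrum * np.exp(1j * 0.5 * beta2 * w**2 * z))

print(np.max(np.abs(field_z)) / np.max(np.abs(field0)))  # peak change after 10 km
```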
Abstract:
The quality and reliability of the power generated by large grid-connected photovoltaic (PV) plants are negatively affected by the variability of the solar resource. This paper deals with the smoothing of power fluctuations due to the geographical dispersion of PV systems. The fluctuation frequency and the maximum fluctuation registered by a PV plant ensemble are analyzed to study these effects. We propose an empirical expression to compare the fluctuation attenuation due to both the size and the number of PV plants grouped. The convolution of the frequency distribution functions of single PV plants has turned out to be a successful tool to statistically describe the behavior of an ensemble of PV plants and to determine their maximum output fluctuation. Our work is based on experimental 1-s data collected throughout 2009 from seven PV plants, 20 MWp in total, separated by distances between 6 and 360 km.
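A minimal sketch of the distribution-convolution idea described above: if the power fluctuations of several independent plants are described by their individual frequency distributions, the distribution of the ensemble fluctuation can be approximated by convolving them. The synthetic per-plant histograms below are illustrative assumptions, not the measured 1-s data of the paper.

```python
# Sketch: approximate the ensemble fluctuation distribution by convolving the
# per-plant fluctuation pmfs, then read off an extreme quantile from the CDF.
import numpy as np

rng = np.random.default_rng(0)
bin_width = 0.01                              # fluctuation bins, per-unit power
bins = np.arange(-0.5, 0.5 + bin_width, bin_width)

# Assumed per-plant fluctuation samples (stand-ins for measured Delta P / P).
plant_samples = [rng.laplace(0.0, 0.03, size=100_000) for _ in range(7)]
plant_pmfs = [np.histogram(s, bins=bins, density=True)[0] * bin_width
              for s in plant_samples]

# Distribution of the summed ensemble fluctuation: convolve the individual pmfs.
ensemble = plant_pmfs[0]
for pmf in plant_pmfs[1:]:
    ensemble = np.convolve(ensemble, pmf)
ensemble /= ensemble.sum()

# Estimate the 99.9th percentile of the ensemble fluctuation from its CDF.
support = 7 * bins[0] + np.arange(ensemble.size) * bin_width
print(support[np.searchsorted(np.cumsum(ensemble), 0.999)])
```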
Abstract:
In this work we present a new way to mask the data in a one-user communication system when direct-sequence code-division multiple access (DS-CDMA) techniques are used. The code is generated by a digital chaotic generator, originally proposed by us and previously reported for a chaos cryptographic system. It is demonstrated that if the user's data signal is encoded with a bipolar phase-shift keying (BPSK) technique, as is usual in DS-CDMA, it can be easily recovered from a time-frequency domain representation. To avoid this situation, a new system is presented in which a preliminary dispersive stage is applied to the data signal. A time-frequency domain analysis is performed, and the devices required at the transmitter and receiver ends, both user-independent, are presented for the optical domain.
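A toy sketch of the DS-CDMA spreading step referred to above, using a chip sequence derived from a logistic-map chaotic generator as a stand-in for the digital chaotic generator of the paper. The parameter values and the hard-threshold chip mapping are illustrative assumptions.

```python
# Sketch: spread BPSK data with +/-1 chips obtained from a logistic-map orbit,
# then despread by correlation at the receiver.
import numpy as np

def chaotic_chips(n, x0=0.37, r=3.99):
    """Generate +/-1 chips by thresholding a logistic-map orbit."""
    x, chips = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        chips[i] = 1.0 if x >= 0.5 else -1.0
    return chips

rng = np.random.default_rng(1)
spreading_factor = 64
bits = rng.integers(0, 2, size=100)
symbols = 2.0 * bits - 1.0                            # BPSK mapping

chips = chaotic_chips(spreading_factor)
tx = (symbols[:, None] * chips[None, :]).ravel()      # spread baseband signal

# Despreading at the receiver: correlate each symbol period with the same chips.
rx = tx + 0.5 * rng.standard_normal(tx.size)          # additive noise channel
correlations = rx.reshape(-1, spreading_factor) @ chips / spreading_factor
recovered = (correlations > 0).astype(int)
print("bit errors:", int(np.sum(recovered != bits)))
```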
Abstract:
This thesis develops the theoretical foundations of, and designs, an open collection of C++ classes called VBF (Vector Boolean Functions) for analyzing vector Boolean functions (functions that map a Boolean vector to another Boolean vector) from a cryptographic perspective. This new implementation uses Victor Shoup's NTL library, adding new modules that complement the existing NTL functions and make them better suited for cryptographic analysis. The fundamental class representing a vector Boolean function can be initialized in a very flexible way via several alternative data structures, such as the Truth Table, the Trace Representation and the Algebraic Normal Form (ANF), among others. In this way, VBF allows the evaluation of the most relevant cryptographic criteria for block and stream ciphers as well as for hash functions: for instance, it provides the nonlinearity, the linearity distance, the algebraic degree, the linear structures, and the frequency distribution of the absolute values of the Walsh spectrum or the autocorrelation spectrum, among other criteria. In addition, VBF can perform operations between vector Boolean functions such as equality testing, composition, inversion, sum, direct sum, bricklayering (the parallel application of vector Boolean functions, as employed in the Rijndael cipher), and the addition of coordinate functions. The thesis also illustrates the use of the VBF library in two practical applications. On the one hand, the most relevant properties of existing block ciphers have been analysed. On the other hand, by combining VBF with optimization algorithms, Boolean functions have been designed whose cryptographic properties are the best known to date.
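A minimal sketch of the kind of criterion VBF computes: the Walsh spectrum of a single-output Boolean function via a fast Walsh-Hadamard transform, and the nonlinearity derived from it. This is a generic illustration written in Python, not the VBF C++ API.

```python
# Sketch: Walsh spectrum and nonlinearity of a Boolean function from its truth table.
import numpy as np

def walsh_spectrum(truth_table):
    """truth_table: array of 0/1 values of length 2**n; returns the Walsh spectrum."""
    w = 1 - 2 * np.asarray(truth_table, dtype=np.int64)   # map {0,1} -> {+1,-1}
    h = 1
    while h < w.size:                                     # in-place fast WHT
        for i in range(0, w.size, 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return w

def nonlinearity(truth_table):
    w = walsh_spectrum(truth_table)
    return (len(truth_table) - np.max(np.abs(w))) // 2

# Example: the 3-variable majority function.
tt = [0, 0, 0, 1, 0, 1, 1, 1]
print(walsh_spectrum(tt), nonlinearity(tt))   # spectrum and its nonlinearity (2)
```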
Abstract:
Biomedical Engineering emerged in the 1950s as a fascinating interdisciplinary blend in which engineering, biology and medicine pooled efforts to analyze and understand different diseases. The signals arising in this area must be analyzed and interpreted beyond the limited capabilities of the naked eye and human experience. This is where digital signal processing comes in as an indispensable tool for extracting the relevant information hidden in these signals. Electrocardiography was one of the first areas in which digital signal processing was applied, more than 50 years ago. Electrocardiographic signals remain, even today, the subject of close study by cardiologists and engineers. In this area, signal processing techniques have helped to find information hidden from the naked eye that has changed the way certain previously diagnosed diseases are treated. Since then, numerous techniques have been developed for processing electrocardiographic signals; they can be summarized in three broad categories: time-frequency analysis, analysis of spatio-temporal organization, and separation of atrial activity from noise and interference. This project belongs to the first category, time-frequency analysis, and specifically to what is known as dominant frequency analysis, which is applied here to the analysis of atrial fibrillation signals.
The project includes a theoretical part, devoted to the analysis and development of signal processing algorithms, and a practical part, devoted to programming and simulation in Matlab. Matlab is one of the fundamental tools for digital signal processing on a computer, offering important functions and utilities for the development of projects in this field; it has therefore been chosen as the tool for implementing the project.
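A small sketch of dominant frequency analysis of the kind described above, written in Python for illustration (the project itself uses Matlab): estimate the power spectral density of an atrial signal with Welch's method and take the frequency of the largest peak inside an assumed 3-12 Hz band.

```python
# Sketch: dominant frequency of a signal via Welch PSD and a band-limited peak search.
import numpy as np
from scipy.signal import welch

def dominant_frequency(x, fs, band=(3.0, 12.0)):
    """Return the dominant frequency (Hz) of signal x sampled at fs Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 4 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

# Example with a synthetic 6 Hz "atrial" oscillation plus noise.
fs = 250.0
t = np.arange(0, 10.0, 1.0 / fs)
x = np.sin(2 * np.pi * 6.0 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(dominant_frequency(x, fs))   # ~6.0
```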
Abstract:
Underwater Passive Acoustic Monitoring (PAM) refers to the use of underwater listening and recording systems for detecting, monitoring and identifying sound sources through the pressure waves they produce. It is called passive because such systems only listen, without disturbing the existing acoustic environment, unlike active systems such as sonars. Underwater PAM has several areas of application, such as military surveillance systems, port security, environmental monitoring, the development of population density indices for species, species identification, etc. Despite its importance, national technology in this area is practically non-existent. In this context, the present work aims to contribute to the development of national technology on the subject through the design, construction and operation of autonomous PAM equipment and of signal processing methods for the automated detection of underwater acoustic events. A device named OceanPod was developed, which offers low manufacturing cost, flexibility and ease of configuration and use, aimed at scientific and industrial research and at environmental monitoring. Several prototypes of this device were built and used in missions at sea. These monitoring campaigns made it possible to start building an acoustic database, which provided the raw material for testing automated, real-time acoustic event detectors. In addition, a new method for the detection and identification of acoustic events is proposed, based on statistical analysis of the time-frequency representation of the acoustic signals. This new method was tested on the detection of cetaceans present in the database generated by the monitoring missions.
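A schematic example of detection based on statistics of a time-frequency representation, in the spirit of the method summarized above (the actual detector and its statistics are not reproduced here): compute a spectrogram, track the band-limited energy, and flag frames that exceed a robust threshold. Band limits, window sizes and the threshold factor are assumptions.

```python
# Sketch: flag acoustic events as frames whose in-band spectrogram energy is
# anomalously high relative to a median/MAD baseline.
import numpy as np
from scipy.signal import spectrogram

def detect_events(x, fs, band=(2000.0, 20000.0), k=4.0):
    """Return the start times (s) of frames whose in-band energy is anomalous."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)
    in_band = (f >= band[0]) & (f <= band[1])
    energy = sxx[in_band].sum(axis=0)
    med = np.median(energy)
    mad = np.median(np.abs(energy - med)) + 1e-12     # robust spread estimate
    return t[energy > med + k * mad]

# Example: noise with a short synthetic whistle-like tone burst at t = 1 s.
fs = 48000
t = np.arange(0, 2.0, 1.0 / fs)
x = 0.1 * np.random.default_rng(0).standard_normal(t.size)
burst = (t > 1.0) & (t < 1.05)
x[burst] += np.sin(2 * np.pi * 8000.0 * t[burst])
print(detect_events(x, fs))   # times near 1.0 s
```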
Abstract:
Falls are one of the greatest threats to elderly health in daily living routines and activities. It is therefore very important to detect falls of the elderly in a timely and accurate manner, so that immediate response and proper care can be provided by sending fall alarms to caregivers. Radar is an effective non-intrusive sensing modality well suited for this purpose: it can detect human motion in all types of environments, penetrate walls and fabrics, preserve privacy, and is insensitive to lighting conditions. Micro-Doppler features in the radar signal, corresponding to human body motions and gait, are used to detect falls with a narrowband pulse-Doppler radar. Human motions cause time-varying Doppler signatures, which are analyzed using time-frequency representations and matching pursuit decomposition (MPD) for feature extraction and fall detection. The extracted features include MPD features and the principal components of the time-frequency signal representations. To analyze the sequential characteristics of typical falls, the extracted features are used for training and testing hidden Markov models (HMMs) in different falling scenarios. Experimental results demonstrate that the proposed algorithm and method achieve fast and accurate fall detection. The risk of falls increases sharply when the elderly or patients try to exit beds. Thus, if a bed exit can be detected at an early stage of this motion, the related injuries can be prevented with high probability. To detect bed exits for fall prevention, the trajectory of head movements is used to recognize this motion. A head detector is trained using histogram of oriented gradients (HOG) features of the head and shoulder areas from recorded bed exit images. A data association algorithm is applied to the head detection results to eliminate false alarms. Three-dimensional (3D) head trajectories are then constructed by matching scale-invariant feature transform (SIFT) keypoints in the detected head areas from the left and right stereo images. The extracted 3D head trajectories are used for training and testing an HMM-based classifier for recognizing bed exit activities. The results of the classifier, presented and discussed in the thesis, demonstrate the effectiveness of the proposed stereo-vision-based bed exit detection approach.
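An illustrative sketch of one feature-extraction stage mentioned above: principal components of a micro-Doppler time-frequency representation (a spectrogram) of a radar return. The parameters, the simulated return and the choice of 10 components are assumptions; the thesis additionally uses MPD features and HMM classifiers, which are not reproduced here.

```python
# Sketch: per-frame PCA features of a micro-Doppler spectrogram of a radar return.
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA

def micro_doppler_features(iq, fs, n_components=10):
    """iq: complex baseband radar return; returns per-frame PCA features."""
    f, t, sxx = spectrogram(iq, fs=fs, nperseg=256, noverlap=192,
                            return_onesided=False)
    log_tf = np.log10(np.abs(sxx) + 1e-12).T          # frames x Doppler bins
    pca = PCA(n_components=n_components)
    return pca.fit_transform(log_tf)                  # one feature vector per frame

# Example: a toy return whose Doppler shift ramps up, mimicking a fast motion.
fs = 2000.0
t = np.arange(0, 2.0, 1.0 / fs)
iq = np.exp(1j * 2 * np.pi * (50.0 * t + 100.0 * t**2))   # chirp-like signature
print(micro_doppler_features(iq, fs).shape)
```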
Abstract:
Craniofacial anomalies cause aesthetic and functional impairments with a great impact on the child's health and social integration, interfering with global and social development. Among craniofacial anomalies, this study addressed Cleft Lip and Palate (CLP) and Oculo-Auriculo-Vertebral Spectrum (OAVS). CLP comprises malformations resulting from the incomplete closure of the tissues that form the lip and the palate. OAVS, also known as Goldenhar Syndrome, is a congenital anomaly of unknown etiology, with variable genetic manifestation and a rather heterogeneous cause. Knowing the functional abilities of children with OAVS and CLP, and the impact of these abilities on global development, can optimize the development of prevention and intervention programs to promote the health and social integration of these individuals. This study was designed to verify and compare performance in functional abilities in the areas of self-care, mobility and social function, and the level of independence, among children with OAVS, children with CLP, and a comparison group of children without anomalies. The research design was observational, descriptive and cross-sectional, with a sample of 39 parents/guardians of children aged between three years and seven years and six months, of both genders. Parents/guardians of children under treatment at the Hospital for Rehabilitation of Craniofacial Anomalies of the University of São Paulo (HRAC-USP) were invited to participate and were divided into three groups: two experimental groups and one comparison group. The instrument for collecting data on functional abilities was the Pediatric Evaluation of Disability Inventory (PEDI), in its version adapted to Portuguese. The assessment is carried out through an interview with the caregiver, who must be able to report on the child's performance in typical activities and tasks of the daily routine. The data were presented by descriptive analysis with measures of central tendency (arithmetic mean), dispersion (standard deviation) and frequency distribution for the variables age, gender, family socioeconomic level and sample characterization. For the analyses of the raw and normative scores of the PEDI questionnaire regarding functional abilities and caregiver assistance in the three functional areas (self-care, mobility and social function), one-way analysis of variance was used, and the Shapiro-Wilk test was applied to check the normality of the dependent variable. The comparative analysis was performed with the Kruskal-Wallis test, adopting a significance level of p < 0.05. In the comparative analysis of functional abilities in mobility, there was a statistically significant difference between the GC vs GEEOAV groups in the raw score, and between the GC vs GEEOAV and GC vs GEFLP groups in the normative score. For caregiver assistance in self-care, there was a statistically significant difference between the GC vs GEEOAV groups in the normative score. For caregiver assistance in mobility, there was a statistically significant difference between the GC vs GEEOAV groups in both the raw and the normative scores. For caregiver assistance in social function, there was a statistically significant difference between the GC vs GEFLP groups.
Abstract:
Tubular polymerization reactors can exhibit a highly distorted velocity profile. Based on this observation, a stochastic model built on the axial dispersion model was proposed as a mathematical representation of the fluid dynamics of a tubular reactor for polystyrene production. The differential equation was obtained by introducing randomness into the dispersion parameter, which results in the addition of a stochastic term to the model capable of simulating the experimentally observed oscillations. The stochastic differential equation was discretized and solved satisfactorily by the Euler-Maruyama method. An estimator function was developed to obtain the parameter of the stochastic term, and the parameter of the deterministic term was calculated by the least squares method. A convergence analysis was carried out to determine the number of discretization elements, and the model was validated by comparing trajectories and computational confidence intervals with experimental data. The result obtained was satisfactory, which helps in understanding the complex fluid-dynamic behavior of the reactor studied.
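A generic Euler-Maruyama sketch for a scalar stochastic differential equation, dX = a(X) dt + b(X) dW, illustrating the discretization named above. The drift and diffusion functions and all parameter values are placeholders, not the reactor model of this work.

```python
# Sketch: Euler-Maruyama integration of a scalar SDE dX = a(X) dt + b(X) dW.
import numpy as np

def euler_maruyama(a, b, x0, t_end, n_steps, rng):
    """Integrate dX = a(X) dt + b(X) dW on [0, t_end] with n_steps steps."""
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))             # Brownian increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dw
    return x

# Example: Ornstein-Uhlenbeck-type relaxation with additive noise.
rng = np.random.default_rng(42)
path = euler_maruyama(a=lambda x: -2.0 * (x - 1.0),   # drift toward 1.0
                      b=lambda x: 0.3,                # constant diffusion
                      x0=0.0, t_end=5.0, n_steps=5000, rng=rng)
print(path[-1])
```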
Abstract:
Deep brain stimulation (DBS) provides significant therapeutic benefit for movement disorders such as Parkinson's disease (PD). Current DBS devices lack real-time feedback (and are thus open loop), and stimulation parameters are adjusted during scheduled visits with a clinician. A closed-loop DBS system may reduce power consumption and side effects by adjusting stimulation parameters based on the patient's behavior; behavior detection is therefore a major step in designing such systems. Various physiological signals can be used to recognize behaviors. The subthalamic nucleus (STN) local field potential (LFP) is an excellent candidate signal for neural feedback, because it can be recorded from the stimulation lead and does not require additional sensors. This thesis proposes novel detection and classification techniques for behavior recognition based on deep brain LFP. Behavior detection from such signals is a vital step in developing the next generation of closed-loop DBS devices. LFP recordings from 13 subjects are used in this study to design and evaluate our method. Recordings were performed during surgery, and the subjects were asked to perform various behavioral tasks. Several techniques are used to understand how the behaviors modulate the STN. One method studies the time-frequency patterns in the STN LFP during the tasks. Another measures the temporal inter-hemispheric connectivity of the STN as well as the connectivity between the STN and the prefrontal cortex (PFC). Experimental results demonstrate that different behaviors create different modulation patterns in the STN and its connectivity. We use these patterns as features to classify behaviors. A method for single-trial recognition of the patient's current task is proposed. This method uses wavelet coefficients as features and a support vector machine (SVM) as the classifier for recognition of a selection of behaviors: speech, motor, and random. The proposed method is 82.4% accurate for the binary classification and 73.2% accurate for classifying three tasks. As a next step, a practical behavior detection method that detects behaviors asynchronously is proposed. This method does not use any a priori knowledge of behavior onsets and is capable of asynchronously detecting the finger movements of PD patients. Our study indicates that there is a motor-modulated inter-hemispheric connectivity between LFP signals recorded bilaterally from the STN. We use a non-linear regression method to measure this inter-hemispheric connectivity and to detect the finger movements. Our experimental results, using STN LFP recorded from eight patients with PD, demonstrate that this is a promising approach for behavior detection and for developing novel closed-loop DBS systems.
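A bare-bones sketch of the "wavelet coefficients + SVM" classification stage described above, run on synthetic data. The wavelet family, decomposition level, per-band statistics and the toy two-class signals are assumptions for illustration only, not the thesis's feature set.

```python
# Sketch: wavelet sub-band statistics as features, classified with an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wavelet_features(trial, wavelet="db4", level=4):
    """Concatenate simple statistics of each wavelet sub-band of one LFP trial."""
    coeffs = pywt.wavedec(trial, wavelet, level=level)
    return np.array([stat for c in coeffs for stat in (c.mean(), c.std())])

rng = np.random.default_rng(0)
fs, n_trials, n_samples = 1000, 60, 2000
X, y = [], []
for label, freq in [(0, 20.0), (1, 60.0)]:            # two synthetic "tasks"
    for _ in range(n_trials):
        t = np.arange(n_samples) / fs
        trial = np.sin(2 * np.pi * freq * t) + rng.standard_normal(n_samples)
        X.append(wavelet_features(trial))
        y.append(label)

scores = cross_val_score(SVC(kernel="rbf", C=1.0), np.array(X), np.array(y), cv=5)
print(scores.mean())
```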
Abstract:
The power required to operate large gyratory mills often exceeds 10 MW. Hence, optimisation of the power consumption will have a significant impact on the overall economic performance and environmental impact of the mineral processing plant. In most of the published models of tumbling mills (e.g. [Morrell, S., 1996. Power draw of wet tumbling mills and its relationship to charge dynamics, Part 2: An empirical approach to modelling of mill power draw. Trans. Inst. Mining Metall. (Section C: Mineral Processing Ext. Metall.) 105, C54-C62. Austin, L.G., 1990. A mill power equation for SAG mills. Miner. Metall. Process. 57-62]), the effect of lifter design and its interaction with mill speed and filling are not incorporated. Recent experience suggests that there is an opportunity for improving grinding efficiency by choosing the appropriate combination of these variables. However, it is difficult to determine the interactions of these variables experimentally in a full-scale mill. Although some work using DEM simulations has recently been published, it was basically limited to 2D. The discrete element code Particle Flow Code 3D (PFC3D) has been used in this work to model the effects of lifter height (5-25 cm) and mill speed (50-90% of critical) on the power draw and on the frequency distribution of the specific energy (J/kg) of normal impacts in a 5 m diameter autogenous (AG) mill. It was found that the distribution of the impact energy is affected by the number of lifters, lifter height, mill speed and mill filling. Interactions of lifter design, mill speed and mill filling are demonstrated through three-dimensional distinct element method (3D DEM) modelling. The intensity of the induced stresses (shear and normal) on the lifters, and hence the lifter wear, is also simulated. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Objective: To examine the frequency distribution of co-existing conditions for deaths where the underlying cause was infectious and parasitic diseases. Materials and methods: Besides the underlying cause of death, the distributions of co-existing conditions for deaths from infectious and parasitic diseases were examined in total and by various age and sex groups, at the individual and chapter levels, using 1998 Australian mortality data. Results: In addition to the underlying cause of death, the average number of reported co-existing conditions for a single infectious and parasitic death was 1.62. The most common co-existing conditions were respiratory failure, acute renal failure, non-specific causes, ischaemic heart disease, pneumonia and diabetes. When the distribution of co-existing conditions was studied at the ICD-9 chapter level, circulatory system diseases were found to be the most important. There was an increasing trend in the number of reported co-existing conditions from 60 years of age upwards. Gender differences existed in the frequency of some reported co-existing conditions. The most common organism types among co-existing conditions were other bacterial infections and other viruses. Conclusions: The study indicated that the quality of death certificates is less than satisfactory for the 1998 Australian mortality data. The findings may be helpful in clarifying the ICD coding rules and in the development of disease prevention strategies. (C) 2003 International Society for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.