874 results for correlation-based feature selection
Abstract:
Over the past ten years, the cross-correlation of long time series of ambient seismic noise (ASN) has been widely adopted to extract the surface-wave part of the Green's Functions (GF). This stochastic procedure relies on the assumption that the ASN wavefield is diffuse and stationary. At frequencies <1 Hz, the ASN is mainly composed of surface waves, whose origin is attributed to the sea-wave climate. Consequently, marked directional properties may be observed, which call for an accurate investigation of the location and temporal evolution of the ASN sources before attempting any GF retrieval. Within this general context, this thesis aims at a thorough investigation of the feasibility and robustness of noise-based methods for the imaging of complex geological structures at the local (∼10-50 km) scale. The study focused on the analysis of an extended (11-month) seismological data set collected at the Larderello-Travale geothermal field (Italy), an area for which the underground geological structures are well constrained thanks to decades of geothermal exploration. Focusing on the secondary microseism band (SM; f > 0.1 Hz), I first investigate the spectral features and the kinematic properties of the noise wavefield using beamforming analysis, highlighting a marked variability with time and frequency. For the 0.1-0.3 Hz frequency band and during spring-summer time, the SM waves propagate with high apparent velocities and from well-defined directions, likely associated with ocean storms in the southern hemisphere. Conversely, at frequencies >0.3 Hz the distribution of back-azimuths is more scattered, indicating that this frequency band is the most appropriate for the application of stochastic techniques. For this latter frequency interval, I tested two correlation-based methods, acting in the time (NCF) and frequency (modified SPAC) domains, respectively yielding estimates of the group- and phase-velocity dispersions. Velocity data provided by the two methods are markedly discordant; comparison with independent geological and geophysical constraints suggests that the NCF results are more robust and reliable.
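The noise-correlation workflow summarized above can be illustrated with a minimal sketch (not the thesis code): window two continuous traces, one-bit normalize them, cross-correlate each window, and stack. The sampling rate, window length, and lag range below are assumptions.

```python
import numpy as np
from scipy.signal import correlate

def ncf_stack(trace_a, trace_b, fs, win_s=3600.0, max_lag_s=120.0):
    """Stack windowed cross-correlations of two ambient-noise traces.

    trace_a, trace_b: 1-D arrays sampled at fs (Hz), same length.
    Returns lag times (s) and the stacked, normalized cross-correlation.
    """
    win = int(win_s * fs)
    max_lag = int(max_lag_s * fs)
    n_win = min(len(trace_a), len(trace_b)) // win
    stack = np.zeros(2 * max_lag + 1)
    for k in range(n_win):
        a = trace_a[k * win:(k + 1) * win]
        b = trace_b[k * win:(k + 1) * win]
        # one-bit normalization suppresses earthquakes and other transient sources
        a, b = np.sign(a - a.mean()), np.sign(b - b.mean())
        cc = correlate(a, b, mode="full", method="fft")
        mid = len(cc) // 2                      # zero-lag index
        stack += cc[mid - max_lag:mid + max_lag + 1]
    lags = np.arange(-max_lag, max_lag + 1) / fs
    return lags, stack / max(np.abs(stack).max(), 1e-12)
```

The surface-wave part of the Green's function is then read from the causal and acausal lobes of the stacked correlation.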
Abstract:
Research on speciation and adaptive radiation has flourished during the past decades, yet the factors underlying the initiation of reproductive isolation often remain unknown. Parasites represent important selective agents and have received renewed attention in speciation research. We review the literature on parasite-mediated divergent selection in the context of ecological speciation and present empirical evidence for three nonexclusive mechanisms by which parasites might facilitate speciation: reduced viability or fecundity of immigrants and hybrids, assortative mating as a pleiotropic by-product of host adaptation, and ecologically based sexual selection. We emphasise the lack of research on speciation continuums, which is why no study has yet made a convincing case for parasite-driven divergent evolution initiating the emergence of reproductive isolation. We also draw attention to selection imposed by single vs. multiple parasite species, conceptually linking this to the strength and multifariousness of selection. Moreover, we discuss how parasites, by manipulating behaviour or impairing the sensory abilities of hosts, may change the form of selection that underlies speciation. We conclude that future studies should consider host populations at variable stages of the speciation process, and explore recurrent patterns of parasitism and resistance that could pinpoint the role of parasites in imposing the divergent selection that initiates ecological speciation.
Abstract:
Use of microarray technology often leads to high-dimensional, low-sample-size data settings. Over the past several years, a variety of novel approaches have been proposed for variable selection in this context. However, only a small number of these have been adapted for time-to-event data where censoring is present. The elastic net penalization approach is among the standard variable selection methods shown to have both good predictive accuracy and computational efficiency. In this paper, adaptations of the elastic net approach are presented for variable selection both under the Cox proportional hazards model and under an accelerated failure time (AFT) model. Assessment of the two methods is conducted through simulation studies and through analysis of microarray data obtained from a set of patients with diffuse large B-cell lymphoma for whom time to survival is of interest. The approaches are shown to match or exceed the predictive performance of a Cox-based and an AFT-based variable selection method. The methods are moreover shown to be much more computationally efficient than their respective Cox- and AFT-based counterparts.
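As a hedged illustration of the kind of penalized Cox model described above (using the lifelines library rather than the authors' implementation), the sketch below fits a Cox proportional hazards model with an elastic net penalty; the toy data, column names, and penalty values are placeholders.

```python
import pandas as pd
from lifelines import CoxPHFitter

# toy survival data: two "gene expression" covariates plus follow-up time and event indicator
df = pd.DataFrame({
    "gene_1": [0.2, 1.5, -0.3, 0.8, 1.1, -0.9],
    "gene_2": [1.0, -0.2, 0.4, -1.3, 0.6, 0.1],
    "time":   [5.0, 8.0, 3.0, 12.0, 7.0, 2.0],   # time to event or censoring
    "event":  [1, 0, 1, 1, 0, 1],                # 1 = event observed, 0 = censored
})

# elastic net = mix of L1 (sparsity) and L2 (grouping) penalties;
# l1_ratio=0.5 weights them equally, penalizer sets the overall strength
cph = CoxPHFitter(penalizer=0.5, l1_ratio=0.5)
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])
```

In a microarray setting the covariate matrix would contain thousands of genes, and the penalty strength would typically be chosen by cross-validation.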
Abstract:
In this thesis, I study skin lesion detection and its applications to skin cancer diagnosis. A skin lesion detection algorithm is proposed, based on color information and thresholding. For the proposed algorithm, several color spaces are studied and the detection results are compared. Experimental results show that the YUV color space achieves the best performance. In addition, I develop a distance-histogram-based threshold selection method, which is shown to outperform other adaptive threshold selection methods for color detection. Beyond the detection algorithms, I also investigate GPU speed-up techniques for skin lesion extraction; the results show that GPUs have potential for accelerating skin lesion extraction. Based on the proposed skin lesion detection algorithms, I develop a mobile skin cancer diagnosis application. With this application installed on an iPhone, a user can employ the phone as a diagnostic tool to find potential skin lesions on a person's skin and compare the lesions detected by the iPhone with those stored in a database on a remote server.
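A minimal sketch, assuming OpenCV, of the color-space conversion and thresholding step described above; the fixed U/V thresholds are illustrative stand-ins for the thesis's distance-histogram-based threshold selection, which is not reproduced here.

```python
import cv2
import numpy as np

def detect_lesion_mask(bgr_image, u_min=80, v_min=135):
    """Rough lesion mask from fixed thresholds in YUV space (illustrative values only)."""
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    # pixels whose chrominance exceeds the thresholds are flagged as candidate lesion area
    mask = ((u > u_min) & (v > v_min)).astype(np.uint8) * 255
    # simple morphological opening removes isolated false-positive pixels
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# usage (hypothetical file name): mask = detect_lesion_mask(cv2.imread("skin_photo.jpg"))
```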
Abstract:
In this paper, a computer-aided diagnostic (CAD) system for the classification of hepatic lesions from computed tomography (CT) images is presented. Regions of interest (ROIs) taken from nonenhanced CT images of normal liver, hepatic cysts, hemangiomas, and hepatocellular carcinomas have been used as input to the system. The proposed system consists of two modules: the feature extraction and the classification modules. The feature extraction module calculates the average gray level and 48 texture characteristics, which are derived from the spatial gray-level co-occurrence matrices, obtained from the ROIs. The classifier module consists of three sequentially placed feed-forward neural networks (NNs). The first NN classifies into normal or pathological liver regions. The pathological liver regions are characterized by the second NN as cyst or "other disease." The third NN classifies "other disease" into hemangioma or hepatocellular carcinoma. Three feature selection techniques have been applied to each individual NN: the sequential forward selection, the sequential floating forward selection, and a genetic algorithm for feature selection. The comparative study of the above dimensionality reduction methods shows that genetic algorithms result in lower dimension feature vectors and improved classification performance.
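The co-occurrence texture characteristics mentioned above can be illustrated, in a hedged way, with scikit-image's grey-level co-occurrence matrix utilities; the distances, angles, and property list below are assumptions and do not reproduce the paper's exact 48-feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi, distances=(1, 2), angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Return a flat feature vector of GLCM texture properties for an 8-bit ROI."""
    glcm = graycomatrix(roi, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    feats = [graycoprops(glcm, p).ravel() for p in props]   # one value per (distance, angle) pair
    return np.concatenate([[roi.mean()], *feats])            # prepend the average gray level

# usage: vec = glcm_features(np.random.randint(0, 256, (32, 32), dtype=np.uint8))
```

Such vectors would then be fed to the cascade of neural networks, with a wrapper such as a genetic algorithm selecting the most informative components.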
Abstract:
The OPERA neutrino experiment is designed to perform the first observation of neutrino oscillations in direct appearance mode in the νμ→ντ channel, via the detection of the τ-leptons created in charged current ντ interactions. The detector, located in the underground Gran Sasso Laboratory, consists of an emulsion/lead target with an average mass of about 1.2 kt, complemented by electronic detectors. It is exposed to the CERN Neutrinos to Gran Sasso beam, with a baseline of 730 km and a mean energy of 17 GeV. The observation of the first ντ candidate event and the analysis of the 2008-2009 neutrino sample have been reported in previous publications. This work describes substantial improvements in the analysis and in the evaluation of the detection efficiencies and backgrounds using new simulation tools. The analysis is extended to a sub-sample of the 2010 and 2011 data, resulting from an electronic detector-based pre-selection, in which an additional ντ candidate has been observed. The significance of the two events in terms of a νμ→ντ oscillation signal is 2.40 σ.
Abstract:
Palaeoclimatic information can be retrieved from the diffusion of the stable water isotope signal during firnification of snow. The diffusion length, a measure of the amount of diffusion a layer has experienced, depends on the firn temperature and the accumulation rate. We show that the estimation of the diffusion length using power spectral densities (PSDs) of the record of a single isotope species can be biased by uncertainties in the spectral properties of the isotope signal prior to diffusion. By using a second water isotope and calculating the difference in diffusion lengths between the two isotopes, this problem is circumvented. We study the PSD method applied to two isotopes in detail and additionally present a new forward diffusion method for retrieving the differential diffusion length based on the Pearson correlation between the two isotope signals. The two methods are discussed and extensively tested on synthetic data generated in a Monte Carlo manner. We show that calibration of the PSD method with these synthetic data is necessary to objectively determine the differential diffusion length. The correlation-based method proves to be a good alternative to the PSD method, as it yields a precision equal to or somewhat higher than that of the PSD method. The use of synthetic data also allows us to estimate the accuracy and precision of the two methods and to choose the best sampling strategy to obtain past temperatures with the required precision. In addition to the application to synthetic data, the two methods are tested on stable-isotope records from the EPICA (European Project for Ice Coring in Antarctica) ice core drilled in Dronning Maud Land, Antarctica, showing that reliable firn temperatures can be reconstructed with a typical uncertainty of 1.5 and 2 °C for the Holocene period and 2 and 2.5 °C for the last glacial period for the correlation and PSD methods, respectively.
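The correlation-based forward-diffusion idea can be sketched as follows: apply extra Gaussian smoothing to the less-diffused isotope record and pick the kernel width that maximizes the Pearson correlation with the more-diffused record. This is a toy version under an assumed regular depth grid, not the paper's calibrated estimator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def extra_diffusion_by_correlation(more_diffused, less_diffused, dz,
                                   sigmas_m=np.linspace(0.0, 0.2, 201)):
    """Find the Gaussian width (in m) that, applied to the less-diffused record,
    maximizes its Pearson correlation with the more-diffused record."""
    best_sigma, best_r = 0.0, -np.inf
    for sigma in sigmas_m:
        smoothed = gaussian_filter1d(less_diffused, sigma / dz) if sigma > 0 else less_diffused
        r = np.corrcoef(more_diffused, smoothed)[0, 1]
        if r > best_r:
            best_sigma, best_r = sigma, r
    return best_sigma, best_r

# usage: sigma, r = extra_diffusion_by_correlation(record_a, record_b, dz=0.05)
```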
Abstract:
Purpose. The purpose of this study was to investigate statistical differences in MR perfusion imaging features that reflect the dynamics of gadolinium uptake in MS lesions using dynamic texture parameter analysis (DTPA). Methods. We investigated 51 MS lesions (25 enhancing, 26 nonenhancing) in 12 patients. Enhancing lesions were prestratified into enhancing lesions with increased permeability (EL+) and enhancing lesions with subtle permeability (EL−). Histogram-based feature maps were computed from the raw DSC-image time series, and the corresponding texture parameters were analyzed during the inflow, outflow, and reperfusion time intervals. Results. Significant differences were found between EL+ and EL− and between EL+ and nonenhancing inactive lesions (NEL). The main effects between EL+ and EL− and between EL+ and NEL were observed during reperfusion, mainly in the mean and standard deviation (SD), while EL− and NEL differed only in their SD during outflow. Conclusion. DTPA allows grading of enhancing MS lesions according to their perfusion characteristics. Texture parameters of EL− were similar to those of NEL, while EL+ differed significantly from both EL− and NEL. Dynamic texture analysis may thus be further investigated as a noninvasive endogenous marker of lesion formation and restoration.
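As a hedged sketch of the interval analysis described above, the snippet below computes simple statistics of a lesion's DSC time series over inflow, outflow, and reperfusion windows; the frame boundaries are placeholders, and the real DTPA pipeline derives many more histogram-based texture parameters.

```python
import numpy as np

def interval_stats(roi_timeseries, intervals):
    """roi_timeseries: array of shape (n_frames, n_pixels) with the DSC signal of one lesion ROI.
    intervals: dict mapping interval name -> (start_frame, end_frame).
    Returns {interval: {"mean": ..., "sd": ...}} pooled over frames and pixels."""
    out = {}
    for name, (t0, t1) in intervals.items():
        window = roi_timeseries[t0:t1]
        out[name] = {"mean": float(window.mean()), "sd": float(window.std())}
    return out

# usage with hypothetical frame boundaries for the three phases
ts = np.random.default_rng(0).normal(100.0, 10.0, size=(80, 200))
print(interval_stats(ts, {"inflow": (0, 20), "outflow": (20, 40), "reperfusion": (40, 80)}))
```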
Abstract:
Current geochronological data on the Okhotsk-Chukotka volcanic belt (OCVB) and related problems are discussed. It is suggested that the belt's evolution be modeled on the basis of 40Ar/39Ar and U-Pb dates, which are more useful in several respects than conventional K-Ar or Rb-Sr dates and methods of paleobotanical correlation. Based on new 40Ar/39Ar and U-Pb dates obtained for volcanic rocks in the northern part of the OCVB, a younger (Coniacian) age is established for the lower stratigraphic units in the Central Chukotka segment of the belt, and an eastward migration of volcanic activity is shown for the terminal stages of the belt's evolution.
Abstract:
Many diseases have a genetic origin, and a great effort is being made to detect the genes responsible for their onset. One of the most promising techniques is the analysis of genetic information through the use of complex networks theory. Yet, a practical problem of this approach is its computational cost, which scales as the square of the number of features included in the initial dataset. In this paper, we propose the use of an iterative feature selection strategy to identify reduced subsets of relevant features, and show an application to the analysis of congenital Obstructive Nephropathy. Results demonstrate that, besides drastically reducing the computational cost, the strategy yields networks whose topologies still hold all the relevant information and are thus able to fully characterize the severity of the disease.
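A hedged toy version of an iterative, correlation-based feature-reduction loop in the spirit of the strategy described above (not the authors' exact procedure): each pass keeps the features most correlated with the label and drops features nearly collinear with one already kept. Thresholds and names are assumptions.

```python
import numpy as np

def iterative_feature_selection(X, y, shrink=0.25, target=100, redundancy_thr=0.95):
    """Iteratively shrink the feature set of X (n_samples, n_features): each pass keeps
    the top `shrink` fraction by |correlation with y|, skipping features that are
    nearly collinear with a feature already kept. Returns selected column indices."""
    selected = np.arange(X.shape[1])
    while len(selected) > target:
        Xs = X[:, selected]
        # relevance: absolute Pearson correlation of each remaining feature with the label
        relevance = np.abs([np.corrcoef(Xs[:, j], y)[0, 1] for j in range(Xs.shape[1])])
        order = np.argsort(relevance)[::-1]
        n_keep = max(target, int(len(selected) * shrink))
        kept = []
        for j in order:
            if all(abs(np.corrcoef(Xs[:, j], Xs[:, k])[0, 1]) < redundancy_thr for k in kept):
                kept.append(j)
            if len(kept) == n_keep:
                break
        selected = selected[np.array(kept)]
    return selected

# usage: idx = iterative_feature_selection(X, y); the network analysis then runs on X[:, idx]
```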
Abstract:
In recent years, the Wireless Visual Sensor Network (WVSN) has drawn great interest in the wireless communication research area. WVSNs enable a wealth of new applications such as building security control, image sensing, and target localization. However, current wireless communication protocols (ZigBee, Wi-Fi, and Bluetooth, for example) cannot fully satisfy their demands for high data rate, low power consumption, short range, and high robustness. A new communication protocol is highly desirable for this kind of application.
The Ultra Wideband (UWB) wireless communication protocol, which has grown in importance in the high-data-rate wireless communication field, is emerging as an important topic for WVSN research. UWB offers great promise to satisfy the growing demand for low-cost, high-speed digital wireless indoor and home networks. The large available bandwidth, the potential for high data rate transmission, and the potential for low complexity and low power consumption, along with low implementation cost, all present a unique opportunity for UWB to become a widely adopted radio solution for future Wireless Personal Area Network (WPAN) applications. UWB is defined as any transmission that occupies a bandwidth of more than 20% of its center frequency, or more than 500 MHz. In 2002, the Federal Communications Commission (FCC) mandated that UWB radio transmission can legally operate in the range from 3.1 to 10.6 GHz at a transmitter power of -41.3 dBm/Hz. Under the FCC guidelines, the use of UWB technology can provide enormous capacity over short communication ranges. Considering Shannon's capacity equation, increasing the channel capacity requires only a linear increase in bandwidth, whereas a similar capacity increase obtained through transmission power alone would require an exponential increase in power. In recent years, several different UWB developments have been widely studied in different areas, among which the MB-OFDM UWB wireless communication protocol is considered the leading choice and has recently been adopted in the ISO/IEC standard for WPANs. By combining OFDM modulation with data transmission based on frequency-hopping techniques, the MB-OFDM UWB system is able to support various data rates, ranging from 55 to 480 Mbps, over distances of up to 10 meters. The MB-OFDM technology is expected to consume very little power and silicon area, as well as provide low-cost solutions that can satisfy consumer market demands. To fulfill these expectations, MB-OFDM UWB research and development have to cope with several challenges, including high-sensitivity synchronization, low-complexity constraints, strict power limitations, scalability, and flexibility. Such challenges require state-of-the-art digital signal processing expertise to develop systems that can take full advantage of the UWB spectrum and support future indoor wireless applications. This thesis focuses on the full optimization of an MB-OFDM UWB digital baseband transceiver system, aiming to research and design a wireless communication subsystem for Wireless Visual Sensor Network (WVSN) applications. The inherent high complexity of the FFT/IFFT processor and the synchronization system, together with the high operating frequency required of all processing elements, is the bottleneck for the hardware design and implementation of a low-power MB-OFDM-based UWB digital baseband system. The proposed transceiver system targets low power and low complexity under the premise of high performance. Optimizations are made at both the algorithm and architecture levels for each element of the transceiver system. Low-power, hardware-efficient structures are first proposed for the core computation modules: a mixed-radix, pipelined architecture for the Fast Fourier Transform (FFT/IFFT) processor, and a cost-speed-balanced Viterbi Decoder (VD) module, with the aim of lowering power consumption and increasing processing speed.
In addition, a low-complexity, sign-bit-correlation-based symbol timing synchronization scheme is presented to detect and synchronize the OFDM packets robustly and accurately. Moreover, several state-of-the-art technologies are used for developing the other processing subsystems, and an entire MB-OFDM digital baseband transceiver system is integrated. The target device for the proposed transceiver system is a Xilinx Virtex 5 XC5VLX110T FPGA board. In order to validate the proposed transceiver system on the FPGA board, a unified algorithm-architecture-circuit hardware/software co-design environment for complex FPGA system development is presented in this work. The main objective of the proposed strategy is to find an efficient methodology for designing a configurable, optimized FPGA system with as little effort as possible spent in the system verification procedure, so as to shorten the system development period. The presented co-design methodology has the advantages of being easy to use, covering all steps from algorithm proposal to hardware verification, and being applicable to almost all kinds of FPGA development. Because only the digital baseband transceiver system is developed in this thesis, the validation of transmitting signals through a wireless channel in real communication environments still requires the analog front-end and RF components. However, by using the aforementioned hardware/software co-simulation methodology, the transmitter and receiver digital baseband systems can communicate with each other through the channel models proposed by the IEEE 802.15.3a research group and implemented in MATLAB. Thus, by simply adjusting the characteristics of each channel model, e.g. mean excess delay and center frequency, we can estimate the transmission performance of the proposed transceiver system in different communication situations. The main contributions of this thesis are: • A novel mixed-radix 128-point FFT algorithm using a multipath pipelined architecture is proposed. The complex multipliers for each processing stage are designed using modified shift-add architectures. The system word-length and twiddle word-length are compared and selected based on Signal to Quantization Noise Ratio (SQNR) and power analysis. • The IFFT processor performance is analyzed under different Block Floating Point (BFP) arithmetic configurations for overflow control, so as to find the best architecture for the IFFT algorithm based on the proposed FFT processor. • An innovative low-complexity timing synchronization and compensation scheme, consisting of Packet Detector (PD) and Timing Offset Estimation (TOE) functions, is employed for the MB-OFDM UWB receiver system. By simplifying the cross-correlation and maximum likelihood functions to sign-bit-only operations, the computational complexity is significantly reduced. • A 64-state soft-decision Viterbi Decoder system using a high-speed radix-4 Add-Compare-Select architecture is proposed. The Two-pointer Even algorithm is also introduced into the Trace Back unit for hardware efficiency. • Several state-of-the-art technologies are integrated into the complete baseband transceiver system, with the aim of implementing a highly optimized UWB communication system. • An improved design flow is proposed for complex system implementation, which can be used for general Field-Programmable Gate Array (FPGA) designs.
The design method not only dramatically reduces the time needed for functional verification, but also provides automatic analysis, such as error and output-delay reports, for the implemented hardware systems. • A virtual communication environment is established for validating the proposed MB-OFDM transceiver system. This methodology proves easy to use and convenient for analyzing the digital baseband system, without an analog front-end, under different communication environments. This PhD thesis is organized in six chapters. Chapter 1 gives a brief introduction to the UWB field and the related work, along with the motivation for the MB-OFDM system development. Chapter 2 presents the general information and requirements of the MB-OFDM UWB wireless communication protocol. Chapter 3 presents the architecture of the MB-OFDM digital baseband transceiver system; the design of the proposed algorithm and architecture for each processing element is detailed in this chapter, and the design challenges, which involve trade-offs among design complexity, power consumption, hardware cost, system performance, and other aspects, are analyzed and discussed. Chapter 4 describes the hardware/software co-design methodology; each step of the design flow is detailed with examples encountered during system development, and, taking advantage of this design strategy, the Virtual Communication procedure is carried out to test and analyze the proposed transceiver architecture. Experimental results from the co-simulation and the synthesis report of the implemented FPGA system are given in Chapter 5. Chapter 6 includes conclusions and future work, as well as the results derived from this PhD work.
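As a numerical illustration of the sign-bit correlation used for packet detection and timing synchronization in the receiver described above (a sketch only, not the FPGA implementation), the snippet below quantizes the received samples and a known preamble to their signs and searches for the lag with the strongest correlation; the preamble length and detection threshold are assumptions.

```python
import numpy as np

def sign_bit_sync(rx, preamble, threshold=0.6):
    """Return the sample index where the sign-bit correlation with the preamble peaks,
    or None if the normalized peak stays below `threshold` (no packet detected)."""
    rx_s = np.sign(np.real(rx))            # 1-bit quantization: keep only the sign
    pre_s = np.sign(np.real(preamble))
    n = len(pre_s)
    corr = np.array([np.dot(rx_s[i:i + n], pre_s) for i in range(len(rx_s) - n + 1)])
    peak = int(np.argmax(np.abs(corr)))
    if np.abs(corr[peak]) / n < threshold:
        return None
    return peak                            # estimated packet start / timing offset

# usage with a hypothetical BPSK-like preamble and a noisy, delayed copy of it
rng = np.random.default_rng(0)
preamble = rng.choice([-1.0, 1.0], size=128)
rx = np.concatenate([rng.normal(0, 1, 300),
                     preamble + rng.normal(0, 0.5, 128),
                     rng.normal(0, 1, 100)])
print(sign_bit_sync(rx, preamble))         # should report the packet start near sample 300
```

Because only additions and subtractions of ±1 samples are involved, such a correlator maps naturally onto simple adders in hardware, which is the motivation given above for the sign-bit simplification.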
Abstract:
This doctoral thesis studies two theological methods in dialogue, Karl Rahner's transcendental anthropology and Paul Tillich's correlation, starting from the systematization of existential ontology elaborated by the young Heidegger in Sein und Zeit. In both methods, what is at issue is the depth of being, in its possibility of being approached while surpassing what cannot be said. In this case, both the Thomist metaphysics recovered by Rahner and the Protestant secularization and abyss of being emphasized by Tillich assume the impossibility of stating the content of the sacred, precisely because they are situated either in the non-concept of the ontological root (Rahner) or in the excess of meaning of the Ultimate Concern (Tillich). From these impossibilities, regardless of whether it is a before and after (Rahner) or an after and a before in depth (Tillich), what remains is the ontological question as the possibility of the opening of being, in both cases without content, so that the answer, which also does not answer, is given as the (im)possibility of the displacement of being. In Rahner's recovery of Thomist metaphysics, there is a displacement based on a meaning that is, in a certain way, linear, hence transcendental anthropology. In Tillich's Protestant reading, the displacement takes place through a dialectic, being threatened by non-being, as a broken unity, hence correlation. The thesis is presented in four chapters. The first offers a reading of Heidegger from the perspective of the hermeneutics of understanding. The second reflects on the possibility of understanding through ontological rationality in correlation (Paul Tillich), by way of the mediation of the symbolic. The third, following the same concerns, reads the epistemology of ontological rationality in transcendental anthropology (Karl Rahner), by way of the mediation of pre-apprehension. The fourth chapter presents themes arising from, and traditional to, theology, in dialogue on the basis of the two methods.
Abstract:
Although much of the brain’s functional organization is genetically predetermined, it appears that some noninnate functions can come to depend on dedicated and segregated neural tissue. In this paper, we describe a series of experiments that have investigated the neural development and organization of one such noninnate function: letter recognition. Functional neuroimaging demonstrates that letter and digit recognition depend on different neural substrates in some literate adults. How could the processing of two stimulus categories that are distinguished solely by cultural conventions become segregated in the brain? One possibility is that correlation-based learning in the brain leads to a spatial organization in cortex that reflects the temporal and spatial clustering of letters with letters in the environment. Simulations confirm that environmental co-occurrence does indeed lead to spatial localization in a neural network that uses correlation-based learning. Furthermore, behavioral studies confirm one critical prediction of this co-occurrence hypothesis, namely, that subjects exposed to a visual environment in which letters and digits occur together rather than separately (postal workers who process letters and digits together in Canadian postal codes) do indeed show less behavioral evidence for segregated letter and digit processing.
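A hedged sketch of the correlation-based (Hebbian) learning idea invoked above: when letters co-occur with letters and digits with digits, units trained with a Hebbian rule become selective for one category or the other. This toy simulation is illustrative only, it is not the paper's model, and it captures category selectivity rather than the spatial (topographic) clustering discussed in the text; the environment statistics and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_units, eta = 20, 10, 0.05
W = rng.normal(0, 0.01, size=(n_units, n_inputs))   # inputs 0-9 stand for "letters", 10-19 for "digits"

def present(category):
    """Activate a few co-occurring items from one category (letters or digits)."""
    x = np.zeros(n_inputs)
    base = 0 if category == "letters" else 10
    x[base + rng.choice(10, size=3, replace=False)] = 1.0
    return x

for _ in range(2000):
    x = present(rng.choice(["letters", "digits"]))   # letters co-occur with letters, digits with digits
    y = W @ x
    W += eta * np.outer(y, x)                        # Hebbian update: strengthen correlated input-output pairs
    W /= np.linalg.norm(W, axis=1, keepdims=True)    # normalization keeps the weights bounded

# positive values indicate letter-preferring units, negative values digit-preferring units
print(np.round(W[:, :10].sum(axis=1) - W[:, 10:].sum(axis=1), 2))
```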