Abstract:
The primary purpose of this thesis was to present a theoretical large-signal analysis to study the power gain and efficiency of a microwave power amplifier for LS-band communications using software simulation. Power gain, efficiency, reliability, and stability are important characteristics in the power amplifier design process. These characteristics affect advanced wireless systems, which require low-cost device amplification without sacrificing system performance. Large-signal modeling and input and output matching components are used in this thesis. Motorola's Electro Thermal LDMOS model is a new transistor model that includes self-heating effects and is capable of small- and large-signal simulations. It allows most of the design considerations to focus on stability, power gain, bandwidth, and DC requirements. The matching technique allows the gain to be maximized at a specific target frequency. Calculations and simulations for the microwave power amplifier design were performed using Matlab and Microwave Office, respectively; Microwave Office is the simulation software used in this thesis. The study demonstrated that Motorola's Electro Thermal LDMOS transistor is a viable solution in the microwave power amplifier design process for common-source amplifier applications in high-power base stations. The MET-LDMOS met the stability requirements for the specified frequency range without a stability-improvement model. The power gain of the amplifier circuit was improved through proper microwave matching design using input/output matching techniques. The gain and efficiency of the amplifier improved by approximately 4 dB and 7.27%, respectively. The gain value is roughly 0.89 dB higher than the maximum gain specified in the MRF21010 data sheet. This work can lead to efficient modeling and development of high-power LDMOS transistor implementations in commercial and industrial applications.
Abstract:
Reverberation is caused by reflections of sound from surfaces near the source as it propagates to the listener. The impulse response of an environment represents its reverberation characteristics. Because it depends on the environment, reverberation conveys to the listener characteristics of the space where the sound originates, and its absence does not usually sound "natural". When recording sounds, the desired reverberation characteristics of an environment are not always available, so methods for artificial reverberation have been developed, always seeking implementations that are more efficient and more faithful to real environments. This work presents an FPGA (Field Programmable Gate Array) implementation of a classic digital audio reverberation structure, based on a proposal by Manfred Schroeder, using sets of all-pass and comb filters. The developed system exploits reconfigurable hardware as a platform for the development and implementation of digital audio effects, focusing on modularity and reuse.
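The Schroeder structure described above can be sketched in software (a minimal model only; the thesis targets FPGA hardware, and the delay lengths and gains below are illustrative choices, not values from the thesis):

```python
# Minimal software sketch of a Schroeder reverberator: four parallel
# feedback comb filters summed, followed by two all-pass filters in
# series. Delay lengths and gains are illustrative, not from the thesis.

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """All-pass filter: y[n] = -g*x[n] + x[n - delay] + g*y[n - delay]."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = -g * x[n]
        if n >= delay:
            y[n] += x[n - delay] + g * y[n - delay]
    return y

def schroeder_reverb(x):
    # Mutually prime comb delays avoid coincident echoes.
    combs = [comb(x, d, g) for d, g in [(29, 0.805), (31, 0.827),
                                        (37, 0.783), (41, 0.764)]]
    mixed = [sum(c[n] for c in combs) for n in range(len(x))]
    return allpass(allpass(mixed, 5, 0.7), 17, 0.7)

# The impulse response shows the dense, decaying echo tail
# characteristic of reverberation.
impulse = [1.0] + [0.0] * 499
ir = schroeder_reverb(impulse)
```

In the FPGA implementation each comb and all-pass filter maps naturally to a delay line plus a multiply-accumulate unit, which is what makes the structure attractive for a modular, reusable hardware design.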
Abstract:
This work proposes the use of a behavioral model of the ferroelectric capacitor's hysteresis loop as a new alternative to the usually costly techniques for computing nonlinear functions in artificial neurons implemented on a reconfigurable hardware platform, in this case an FPGA device. The proposal was first validated by implementing Boolean logic with digital models of two artificial neurons: the Perceptron and a variation of the Integrate-and-Fire Spiking Neuron model, both using a digital model of the ferroelectric capacitor's hysteresis loop as the basic nonlinear unit for computing the neurons' outputs. Finally, an analog model of the ferroelectric capacitor was used with the goal of verifying its effectiveness and a possible reduction in the number of logic elements required when implementing the artificial neurons on an integrated circuit. The implementations were carried out with Simulink models and synthesized with the DSP Builder software from Altera Corporation.
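The idea of using a hysteresis loop as the neuron's nonlinearity can be sketched in Python (a minimal model with an idealized square loop; the thesis uses Simulink/DSP Builder models of the ferroelectric capacitor, and the thresholds and weights here are illustrative assumptions):

```python
# Sketch of a Perceptron whose nonlinearity is an idealized square
# hysteresis loop, mimicking the ferroelectric-capacitor behavior.
# Thresholds and weights are illustrative, not from the thesis.

class HysteresisActivation:
    """Idealized square hysteresis loop: output flips to +1 when the
    input exceeds +vc, to -1 when it drops below -vc, and otherwise
    retains its previous state (the 'memory' of the loop)."""
    def __init__(self, vc=0.25, state=-1):
        self.vc = vc
        self.state = state
    def __call__(self, v):
        if v > self.vc:
            self.state = 1
        elif v < -self.vc:
            self.state = -1
        return self.state

def perceptron(x, weights, bias, act):
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if act(net) > 0 else 0  # map {-1, +1} to {0, 1}

# Boolean AND with fixed weights. Every net input here exceeds |vc| in
# magnitude, so the hysteresis state is driven deterministically for
# each input pair even though the loop carries state between calls.
act = HysteresisActivation()
truth = [perceptron((a, b), (1.0, 1.0), -1.5, act)
         for a in (0, 1) for b in (0, 1)]
# truth == [0, 0, 0, 1]
```

The appeal in hardware is that such a loop replaces an explicit threshold or sigmoid computation with a single stateful element.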
Abstract:
The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk of oil and gas exploration. Its widespread use is due to the high quality of its subsurface imaging. RTM is also known for its high computational cost, so parallel computing techniques have been used in its implementations. In general, parallel approaches for RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches for RTM have been shown to be very efficient, since each seismic shot can be processed independently. For this reason, RTM performance can be further improved by using a parallel approach with finer granularity for the processing assigned to each node. This work presents an efficient parallel algorithm for 3D reverse time migration with fine granularity using OpenMP. The 3D acoustic wave propagation algorithm makes up much of the RTM. Different load-balancing strategies were analyzed in order to minimize possible parallel performance losses at this stage. The results served as a basis for the implementation of the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some of the possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for the propagation of the wave and 16.95 for the RTM, both with 24 threads.
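The three phases named above, wave propagation, backpropagation, and the cross-correlation imaging condition, can be sketched in one dimension (a minimal illustration only; the thesis works in 3D with OpenMP, and the grid sizes, velocity model, and source below are assumptions made for the example):

```python
# Minimal 1D sketch of reverse time migration: forward-propagate a
# source, record a trace at a receiver, backpropagate the time-reversed
# trace, and apply the zero-lag cross-correlation imaging condition.
import math

NX, NT, DX, DT = 200, 1000, 5.0, 0.001
vel = [1500.0] * NX
for i in range(120, NX):          # a single reflector at i = 120
    vel[i] = 2500.0
src_pos, rec_pos = 10, 30

def ricker(t, f0=25.0):
    a = (math.pi * f0 * (t - 1.2 / f0)) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

def step(u_prev, u, c):
    """One leapfrog step of u_tt = c^2 u_xx. The loop over grid points
    is the fine-grained part that the thesis parallelizes with OpenMP."""
    u_next = [0.0] * NX
    for i in range(1, NX - 1):
        lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / DX ** 2
        u_next[i] = 2.0 * u[i] - u_prev[i] + (c[i] * DT) ** 2 * lap
    return u_next

# Phase 1: forward propagation, storing wavefields and a receiver trace.
u_prev, u = [0.0] * NX, [0.0] * NX
fwd, trace = [], []
for it in range(NT):
    u_next = step(u_prev, u, vel)
    u_next[src_pos] += ricker(it * DT)
    u_prev, u = u, u_next
    fwd.append(list(u))
    trace.append(u[rec_pos])

# Phases 2 and 3: backpropagation of the time-reversed trace and the
# cross-correlation imaging condition.
u_prev, u = [0.0] * NX, [0.0] * NX
image = [0.0] * NX
for it in range(NT - 1, -1, -1):
    u_next = step(u_prev, u, vel)
    u_next[rec_pos] += trace[it]
    u_prev, u = u, u_next
    for i in range(NX):
        image[i] += fwd[it][i] * u[i]
```

The chosen steps satisfy the CFL condition (c*DT/DX = 0.5), and in the 3D case both the stencil loop and the imaging loop are the natural targets for OpenMP worksharing with the load-balancing schedules the thesis compares.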
Abstract:
Management and innovation are two words that come together in the organizational context for the success of firms. Innovation has become an essential component for tourism and convention centers. Considering the impact of meeting centers on the tourism sector and constant innovation as a competitive-advantage strategy of the last decade, this study used innovation management as a key to the challenge of change at the Natal convention center. Factors in innovation management for meeting centers were investigated by means of information on the organizational and technological structure of the facilities, such as the state of the art in innovation. The main aspect of innovation management reflected in the meetings industry is the ability of conference and convention centers to create and to change. This is a descriptive, exploratory, qualitative study using the case study method, relating innovation management and tourism at the Natal and Fortaleza convention centers. The results show that the Natal convention center does not yet have a satisfactory level of innovation management, since the planning, leadership, skills, strategies, and implementations required for innovative processes are currently absent.
Abstract:
This work investigates the introduction of the continued progression regime in the public schools of the state of São Paulo in 1998; its research axis is the continued progression public policy and its introduction and implementation process. Two lines of research were used: bibliographic research, and research and analysis of the official discourse, covering not only the discourse that establishes the regime but also the gradation of the laws and their characteristics. The central research support rests on two celebrated works: "The Structure of Scientific Revolutions" and "On the Origin of Species", by Thomas Kuhn and Charles Darwin, respectively. These works justify the title of this study, which draws on Kuhn's discussions of 'crisis' as one of the guiding lines for analyzing the periods before and after the introduction of the regime, combined with Darwinism, here called pedagogical Darwinism. To establish a connection between the central research object and the works cited above, it was necessary to research and discuss directly related themes, like 'a river and its tributaries'. The 'tributaries' researched and discussed were: pedagogy and science, the grade-progression (seriation) regime, Darwinism, metaphor, public policies, gradation of the laws, identity, resistance, and giving up. The 'tributaries' were not restricted to bibliographic research; this methodological line also had to be applied to the official discourse. Drawing on Kuhn's contributions, the research revealed that the introduction of the continued progression regime in the public schools of the state of São Paulo merely took education in the state out of one crisis and into another.
It also revealed that the pedagogical Darwinism that reigned under the seriation regime changes its face under continued progression but remains active, now directly affecting teachers, who either actively resist, in opposition, or give up, whether openly or covertly.
Abstract:
In this work, we proposed and applied a methodology for teaching electromagnetism based on an experimental activity, designed within an investigative teaching model and containing a high degree of dialogism between teacher and students. We used the discovery of the electron as a generating theme and a remote experiment to determine the charge-to-mass ratio of the electron as an educational resource. Our analyses point favorably to the promotion of forms of knowledge appropriation by the students that are very different from those observed in traditional expository classes. Similarly, we find that the presence of a technological resource and an experimental activity creates a new posture in the teacher in the classroom, probably caused by the unpredictability of the results arising from the use of such resources. A challenge we still need to solve is how to engage students in tasks outside the classroom, since learning does not take place only during class time. We also present the weaknesses detected in our methodological proposal, as well as the changes necessary to continue its validation process.
Abstract:
The integration of information and communication technologies (ICT) in educational contexts represents a concrete means of action and reflection in the education sciences. Researchers and practitioners question how technologies should be integrated and what means should be put in place to make this sometimes complex process succeed. Indeed, the penetration of technological tools into schools has been exponential in recent years. It is now necessary to understand from which perspectives these tools are integrated into the classroom. A striking example is the touch tablet, recently integrated massively into schools in North America and Europe. This tool, relatively recent in the school sphere, calls for careful reflection on teachers' pedagogical practices and the inherent integration processes. To address these questions, we designed a three-stage study. First, we drew up an exhaustive portrait of the pedagogical practices of teachers using the tablet daily in the classroom. This portrait allows us to sketch a synthesis of the pedagogical uses and realities surrounding this tool. Second, we catalogued, analyzed, and classified the ICT integration models found in the literature. The analysis of these models allowed us to extract their intrinsic strengths and weaknesses. We then created a synthesis model bringing together the reflections arising from these analyses. In parallel, we created a typology for identifying and classifying these models. Third, we started from teachers' pedagogical practices and from the general ICT integration model we designed in order to understand the process of integrating the tablet into the classroom.
The results obtained highlight that tablet use induces innovative pedagogical practices that facilitate teaching and foster student learning. However, we find that the tablet is not used to its full potential and that certain uses should be approached from a more efficient and better-adapted perspective. Regarding integration processes, we identified several indispensable elements: these processes must be iterative and constructive, internal and external factors must be considered, and levels of integration must be identified. The resulting model specifies which model should be favored and the outcomes to consider. Following this step, we designed an integration model specifically dedicated to the tablet. Beyond the characteristics defined in the general model, it highlights the need for training, stakeholder involvement, constant adjustment of pedagogical practices, and indispensable iteration. In light of these considerations, we find that the process of appropriating the tablet is still under construction, and that new deployments, like existing ones, must carefully analyze the ins and outs of the practices mediated by the integration of the tool. At the end of the document, a synthesis of the results and recommendations is proposed to foster the integration of the touch tablet, and of ICT in general, into the classroom.
Abstract:
This dissertation presents the development and use of radiofrequency pulses simultaneously modulated in frequency, amplitude, and phase (Strongly Modulated Pulses, SMPs) to create initial states and execute unitary operations that serve as basic building blocks for quantum information processing using Nuclear Magnetic Resonance (NMR). The experimental implementations were carried out on a 3-qubit system formed by the nuclear spins of Cesium-133 (nuclear spin 7/2) in a liquid-crystal sample in the nematic phase. The SMPs were constructed theoretically using a program developed specifically for this purpose, based on the Nelder-Mead Simplex numerical optimization method. With this program, the SMPs were optimized to execute the desired logic operations with durations considerably shorter than those achieved with the usual NMR procedure, that is, sequences of pulses and free evolutions. This has the advantage of reducing the decoherence effects caused by the relaxation of the system. The theoretical concepts involved in the creation of SMPs are presented, and the main difficulties (experimental and theoretical) that can arise from the use of these procedures are discussed. As application examples, the pseudo-pure states used as initial states for logic operations in NMR were produced, as well as logic operations that were subsequently applied to them. Using SMPs it was also possible to experimentally realize the Grover and Deutsch-Jozsa quantum algorithms for 3 qubits. The fidelity of the experimental implementations was determined using experimental density matrices obtained with a previously developed density-matrix tomography method.
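The 3-qubit Grover algorithm realized in these experiments can be sketched as a plain state-vector simulation (this illustrates the abstract circuit only, not the SMP pulse implementation; the marked state below is an arbitrary choice):

```python
# State-vector sketch of Grover's algorithm for 3 qubits, one of the
# algorithms realized experimentally via SMPs in the dissertation.
# This simulates the abstract circuit, not the NMR pulses.
import math

N = 8                      # 2**3 basis states for 3 qubits
marked = 5                 # the basis state |101> the oracle marks (arbitrary)

state = [1.0 / math.sqrt(N)] * N   # uniform superposition (after Hadamards)

for _ in range(2):         # ~ (pi/4) * sqrt(8) -> 2 Grover iterations
    state[marked] = -state[marked]        # oracle: phase-flip the marked state
    mean = sum(state) / N                 # diffusion: inversion about the mean
    state = [2.0 * mean - a for a in state]

prob_marked = state[marked] ** 2
# prob_marked == 121/128 ≈ 0.945: the marked state is found with high
# probability after two iterations
```

In the NMR realization, each oracle and diffusion step is compiled into a single optimized SMP instead of a long sequence of hard pulses and free evolutions, which is what shortens the total duration.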
Abstract:
Object-oriented design and object-oriented languages support the development of independent software components such as class libraries. When using such components, versioning becomes a key issue. While various ad-hoc techniques and coding idioms have been used to provide versioning, all of these techniques have deficiencies - ambiguity, the necessity of recompilation or re-coding, or the loss of binary compatibility of programs. Components from different software vendors are versioned at different times. Maintaining compatibility between versions must be consciously engineered. New technologies such as distributed objects further complicate libraries by requiring multiple implementations of a type simultaneously in a program. This paper describes a new C++ object model called the Shared Object Model for C++ users and a new implementation model called the Object Binary Interface for C++ implementors. These techniques provide a mechanism for allowing multiple implementations of an object in a program. Early analysis of this approach has shown it to have performance broadly comparable to conventional implementations.
Abstract:
Energy efficiency and user comfort have recently become priorities in the Facility Management (FM) sector. This has resulted in the use of innovative building components, such as thermal solar panels, heat pumps, etc., as they have potential to provide better performance, energy savings and increased user comfort. However, as the complexity of components increases, the requirement for maintenance management also increases. The standard routine for building maintenance is inspection which results in repairs or replacement when a fault is found. This routine leads to unnecessary inspections which have a cost with respect to downtime of a component and work hours. This research proposes an alternative routine: performing building maintenance at the point in time when the component is degrading and requires maintenance, thus reducing the frequency of unnecessary inspections. This thesis demonstrates that statistical techniques can be used as part of a maintenance management methodology to invoke maintenance before failure occurs. The proposed FM process is presented through a scenario utilising current Building Information Modelling (BIM) technology and innovative contractual and organisational models. This FM scenario supports a Degradation based Maintenance (DbM) scheduling methodology, implemented using two statistical techniques, Particle Filters (PFs) and Gaussian Processes (GPs). DbM consists of extracting and tracking a degradation metric for a component. Limits for the degradation metric are identified based on one of a number of proposed processes. These processes determine the limits based on the maturity of the historical information available. DbM is implemented for three case study components: a heat exchanger; a heat pump; and a set of bearings. The identified degradation points for each case study, from a PF, a GP and a hybrid (PF and GP combined) DbM implementation are assessed against known degradation points. 
The GP implementations are successful for all components. For the PF implementations, the results presented in this thesis find that the extracted metrics and limits identify degradation occurrences accurately for components which are in continuous operation. For components which have seasonal operational periods, the PF may wrongly identify degradation. The GP performs more robustly than the PF, but the PF, on average, results in fewer false positives. The hybrid implementations, which combine GP and PF results, are successful for 2 of 3 case studies and are not affected by seasonal data. Overall, DbM is effectively applied for the three case study components. The accuracy of the implementations is dependent on the relationships modelled by the PF and GP, and on the type and quantity of data available. This novel maintenance process can improve equipment performance and reduce energy wastage from the operation of BSCs.
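The PF side of the DbM idea, tracking a degradation metric and invoking maintenance when it crosses a limit, can be sketched with a minimal bootstrap particle filter (all data, noise levels, and the limit below are synthetic illustrations, not the thesis's case-study components or models):

```python
# Minimal sketch of degradation-based maintenance (DbM) with a bootstrap
# particle filter: track a noisy degradation metric and flag maintenance
# when the filtered estimate crosses a limit. Everything here is a
# synthetic illustration, not the thesis's case studies.
import math
import random

random.seed(42)
N_PART, LIMIT = 500, 0.5

# Synthetic component: true degradation grows linearly; the sensor
# observes it with additive Gaussian noise.
true_deg = [0.01 * t for t in range(100)]
obs = [d + random.gauss(0.0, 0.05) for d in true_deg]

particles = [0.0] * N_PART
flagged_at = None
for t, z in enumerate(obs):
    # Propagate: random-walk degradation model.
    particles = [p + random.gauss(0.01, 0.01) for p in particles]
    # Weight by the Gaussian observation likelihood, then resample.
    # (The tiny floor keeps the weight total strictly positive.)
    weights = [math.exp(-((z - p) ** 2) / (2 * 0.05 ** 2)) + 1e-300
               for p in particles]
    particles = random.choices(particles, weights=weights, k=N_PART)
    estimate = sum(particles) / N_PART
    if flagged_at is None and estimate > LIMIT:
        flagged_at = t   # schedule maintenance before outright failure

# The true metric crosses the limit at t = 50; the filter flags near it.
```

A GP implementation would replace the filter with a regression over the historical metric, which is why the thesis finds the two approaches complementary and also evaluates their hybrid.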
Abstract:
Encryption and integrity trees guard against physical attacks, but harm performance. Prior academic work has speculated around the latency of integrity verification, but has done so in an insecure manner. No industrial implementations of secure processors have included speculation. This work presents PoisonIvy, a mechanism which speculatively uses data before its integrity has been verified, while preserving security and closing address-based side-channels. PoisonIvy reduces performance overheads from 40% to 20% for memory-intensive workloads and down to 1.8% on average.
Abstract:
Photoacoustic tomography (PAT) is an emerging imaging modality that shows great potential for preclinical research and clinical practice. As a hybrid technique, PAT is based on the acoustic detection of optical absorption from either endogenous chromophores, such as oxy-hemoglobin and deoxy-hemoglobin, or exogenous contrast agents, such as organic dyes and nanoparticles. Because ultrasound scatters much less than light in tissue, PAT generates high-resolution images in both the optical ballistic and diffusive regimes. Over the past decade, the photoacoustic technique has been evolving rapidly, leading to a variety of exciting discoveries and applications. This review covers the basic principles of PAT and its different implementations. Strengths of PAT are highlighted, along with the most recent imaging results.
Abstract:
This paper reports the results of the on-body experimental tests of a set of four planar differential antennas, originated by design variations of radiating elements with the same shape and characterized by the potential for covering wide and narrow bands. All the antenna designs have been implemented on low-cost FR4 substrate and characterized experimentally through on-body measurements. The results show the impact of the proximity to the human body on antenna performance and the opportunities in terms of potential coverage of wide and narrow bands for future ad hoc designs and implementations through wearable substrates targeting on-body and off-body communication and sensing applications.