954 results for 090604 Microelectronics and Integrated Circuits
Abstract:
The rapid evolution of devices containing integrated circuits, in particular FPGAs (Field-Programmable Gate Arrays) and, more recently, FPGA-based Systems-on-Chip (SoCs), together with the evolution of the design tools, has opened a gap between the launch of these technologies and the availability of didactic material to support engineers in hardware/software co-design with them. To help close this gap, this work presents tutorial documents targeting two recent technologies: the VIVADO hardware/software development tool and the Zynq-7000 (Z-7010) SoC, both developed by Xilinx. The documents are based on a basic design implemented entirely in programmable logic, and on the same design implemented via the embedded programmable processor, so that the tool's design flow can be evaluated both for a purely hardware implementation and for a hardware/software implementation of the same project.
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
This thesis provides a conceptual analysis of research literature on teachers' ideology and literacy practices, as well as a secondary analysis of three empirical studies and the ways in which the ideologies of the English as an Additional Language (EAL) (Street, 2005) teachers in these contexts impact the teaching of literacy in empowering/disabling ways. Several major theoretical components of Cummins (1996, 2000), Gee (1996, 2004) and Street (1995, 2001) are examined and integrated into a conceptual triad consisting of three main areas: power and ideology, validation of students' cultural and linguistic backgrounds, and teaching that empowers. This triad provides the framework for the secondary analysis of three empirical studies on the ideologies of secondary EAL teachers. Implications of the findings from the conceptual and secondary analyses are examined for the research community and for secondary school teachers of EAL.
Abstract:
Motivation to perform and coping with stress during performance are key factors in determining numerous outcomes of sporting performance. However, little evidence is in place assessing their relationship. The aim of this investigation was to assess the relationship between athlete motivation and the coping strategies used to deal with stress during sporting performance, as well as the relationships between motivation and affect and between coping and affect. One hundred and forty-five university athletes completed questionnaires. Regressions revealed that two of the three self-determined levels of motivation, identified and integrated regulation, predicted increased task-oriented coping strategies. Two of the three non-self-determined levels of motivation, amotivation and external regulation, significantly predicted disengagement-oriented coping. Additionally, intrinsic motivation and task-oriented coping predicted increased positive affect. Increased disengagement-oriented coping predicted decreased positive affect. Disengagement-oriented coping significantly predicted increased negative affect. These findings increase understanding of motivation's role in predicting athletes' coping.
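A minimal Python sketch of the kind of regression reported above, here with statsmodels; the variable names and synthetic data are illustrative assumptions, not the study's instruments or results.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative stand-in for questionnaire scores, one row per athlete.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "identified_regulation": rng.normal(size=145),
    "integrated_regulation": rng.normal(size=145),
})
df["task_oriented_coping"] = (0.4 * df["identified_regulation"]
                              + 0.3 * df["integrated_regulation"]
                              + rng.normal(scale=0.5, size=145))

# Do identified and integrated regulation predict task-oriented coping?
X = sm.add_constant(df[["identified_regulation", "integrated_regulation"]])
print(sm.OLS(df["task_oriented_coping"], X).fit().summary())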
Abstract:
This case study of curriculum at Dubai Women's College (DWC) examines perceptions of international educators who designed and implemented curriculum for female Emirati higher-education students in the UAE, and sheds light on the complex social, cultural, and religious factors affecting educational practice. Participants were faculty and supervisors, mainly foreign nationals, while students at DWC are exclusively Emirati. Theories prominent in this study are constructivist learning theory, transformative curriculum theory, and sociological theory; change and empowerment theory also figure prominently. Findings reveal this unique group of educators understand curriculum theory as a "contextualized" construct and argue that theory and practice must be viewed through an international lens of religious, cultural, and social contexts. As well, the study explores how mandated "standards" in education, in the form of the International English Language Testing System (IELTS), and integrated, constructivist curriculum, as taught in the Higher Diploma Year 1 program, function as dual curricular emphases in this context. The study found that tensions among these dual emphases existed and were mediated through specific strategies, including the use of authentic texts to mirror the IELTS examination during in-class activities, and the relevance of curricular tasks.
Abstract:
Intercropping systems are seen as advantageous because they can provide higher crop yield and diversity, along with fewer issues related to pests and weeds, than monocultures. However, plant interactions between intercropped crop species, and between crops and weeds in these systems, are still not well understood. The main objective of this study was to investigate interactions between onion (Allium cepa) and yellow wax bean (Phaseolus vulgaris) in monocultures and intercropping, with and without the presence of a weed species, either Chenopodium album or Amaranthus hybridus. Another objective of this study was to compare morphological traits of C. album from two different populations (conventional vs. organic farms). Using a factorial randomized block design, both crop species were planted either in monoculture or intercropped, with or without the presence of one of the two weeds. The results showed that intercropping onion with yellow wax bean increased the growth of onion but decreased the growth of yellow wax bean when compared to monocultures. The relative yield total (RYT) value was 1.3. Individual aboveground dry weight of both weed species under intercropping was reduced about five-fold compared to the control. The poor growth of weeds under intercropping suggests that crop diversification can help resist weed infestations. A common garden experiment indicated that C. album plants from the conventional farm had larger leaf area and were taller than those from the organic farm. This might be associated with specific evolutionary adaptation of weeds to different farming practices. These findings contribute to the fundamental knowledge of crop-crop interactions, crop-weed competition and adaptation of weeds to various conditions. They provide insights for the management of diversified cropping systems and integrated weed management as practices in sustainable agriculture.
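For reference, the relative yield total reported above is conventionally defined as the sum of the two crops' relative yields; the notation below is introduced here for illustration:

\mathrm{RYT} = \frac{Y_{o,\mathrm{int}}}{Y_{o,\mathrm{mono}}} + \frac{Y_{b,\mathrm{int}}}{Y_{b,\mathrm{mono}}}

where Y_{o,\mathrm{int}} and Y_{b,\mathrm{int}} are onion and bean yields under intercropping and Y_{o,\mathrm{mono}}, Y_{b,\mathrm{mono}} the corresponding monoculture yields. RYT > 1, as with the 1.3 observed here, indicates a yield advantage from intercropping.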
Abstract:
Metal silicides are a crucial element of the electrical contacts of the transistors found at the heart of modern integrated circuits. As device dimensions shrink, serious formation problems appear, linked for example to processes being limited by the low density of nucleation sites. The objective of this project is to study the synthesis mechanisms of metal silicides at very small scale, in particular NiSi, and to determine the effect of Si damage induced by ion implantation on the phase sequence. We determined the formation sequence of the different phases of the Ni-Si system for samples having an amorphous Si layer on which 10 nm of Ni had been deposited. This sequence was obtained from time-resolved X-ray diffraction measurements and, for samples quenched at critical temperatures of the process, the identity of the phases, their composition and their microstructure were determined by pole-figure measurements, Rutherford backscattering spectrometry and transmission electron microscopy (TEM). We found that for about half of the samples a reaction occurred spontaneously before the start of the thermal anneal, the reaction product being hexagonal Ni2Si, a phase unstable at room temperature, mixed with NiSi. In such samples, the formation temperature of NiSi, the phase of interest for microelectronics, was significantly lowered.
Abstract:
On-chip multiprocessor (OCM) systems are considered the best structures to occupy the space available on current integrated circuits. In this work, we study an architectural model, called the isometric on-chip multiprocessor architecture, which makes it possible to evaluate, predict and optimize OCM systems through an efficient organization of the nodes (processors and memories), together with methodologies for using these architectures effectively. In the first part of the thesis, we address the topology of the model and propose an architecture that makes efficient, massive use of on-chip memories. Processors and memories are organized according to an isometric approach that brings data close to the processes, rather than optimizing transfers between conventionally placed processors and memories. The architecture is a three-dimensional mesh whose unit layout is inspired by the crystal structure of sodium chloride (NaCl): each processor can access six memories at once, and each memory can communicate with as many processors at once. In the second part of our work, we develop a decomposition methodology in which the ideal number of nodes of the model is determined from a matrix specification of the application to be run on it. Since the performance of a model depends on the amount of data flow exchanged between its units, and since our goal is to guarantee good computational performance for the application at hand, we propose to find the ideal number of processors and memories of the system to be built. We also consider decomposing the specification of the model, or of the application, according to the load balance of the units. We thus propose a three-point decomposition approach: transforming the specification or application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still aiming at an efficient, high-performance system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, regarded here as an undirected graph; second, we assign values to the edges of this graph. For the assignment, we propose a matrix-based modelling of the decomposed applications and the use of the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual-perturbation approach that searches for the best combination of assignment costs while respecting parameters such as temperature, heat dissipation, energy consumption and chip area.
The ultimate goal of this work is to offer on-chip multiprocessor system architects a non-traditional methodology and a systematic, efficient design-support tool usable from the functional specification phase of the system onward.
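As a point of reference, the Quadratic Assignment Problem invoked above has the following standard form (symbols introduced here for illustration): with f_{kl} the data flow between units k and l, and d_{ij} the distance or communication cost between nodes i and j of the mesh, one seeks a permutation \pi assigning units to nodes that minimizes

\min_{\pi \in S_n} \; \sum_{k=1}^{n} \sum_{l=1}^{n} f_{kl} \, d_{\pi(k)\,\pi(l)}

so that heavily communicating units land on nearby nodes of the three-dimensional mesh.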
Abstract:
Lithography and Moore's law have enabled extraordinary advances in the fabrication of integrated circuits. Nowadays, several very complex systems can be embedded on the same chip. The development constraints of these systems are so severe that careful planning from the very beginning of the development cycle is unavoidable. Planning power management at the start of the development cycle has thus become an important phase in the design of these systems. For many years, the idea was to reduce energy consumption by adding a physical mechanism once the circuit was built, for example a heat sink. The current strategy is to integrate energy constraints from the earliest phases of circuit design. It is therefore essential to know the power dissipation well before integrating components into a multiprocessor system architecture, so that each component can operate efficiently within its thermal limits. When a component operates, it consumes electrical energy that is converted into heat. The goal of this thesis is to find an efficient assignment of components in a three-dimensional multiprocessor architecture that respects the thermal limits of the system.
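For context, a standard first-order model of the electrical power a CMOS component converts into heat (not stated in the abstract itself, but consistent with its premise) is the dynamic power equation

P_{\mathrm{dyn}} = \alpha \, C \, V_{\mathrm{dd}}^{2} \, f

where \alpha is the switching activity, C the switched capacitance, V_{\mathrm{dd}} the supply voltage and f the clock frequency; an assignment that spreads high-\alpha, high-f components apart reduces local heat concentration in the 3D stack.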
Abstract:
Some problems seem impossible to solve without the help of an honest third party. How can two millionaires learn who is richer without telling the other the value of their assets? What can be done to prevent satellite collisions when the trajectories are secret? How can researchers learn the links between drugs and diseases without compromising patient privacy? How can an organization prevent the government from abusing the information it holds, given that the organization must have no access to that information? Multiparty computation, a branch of cryptography, studies how to build protocols that accomplish such tasks without an honest third party. Protocols must be private, correct, efficient and robust. A protocol is private if an adversary learns nothing more than what an honest third party would give it. A protocol is correct if an honest player receives what an honest third party would give them. A protocol should, of course, be efficient. Robustness means that a protocol works even if a small set of players cheats. We show that, under the assumption of a simultaneous broadcast channel, robustness can be traded for validity, and privacy can be traded with respect to certain sets of adversaries. Multiparty computation has four basic tools: oblivious transfer, commitment, secret sharing and circuit garbling. Multiparty computation protocols can be built from these tools alone. Protocols can also be built from computational assumptions. Protocols built from these tools are flexible and can withstand technological change and algorithmic improvements. We ask whether efficiency requires computational assumptions, and we show that it does not by constructing efficient protocols from these basic tools. This thesis consists of four articles written in collaboration with other researchers; they constitute the mature part of my research and are my main contributions over this period. In the first work presented in this thesis, we study the commitment capacity of noisy channels. We first prove a strict lower bound implying that, unlike oblivious transfer, no constant-rate protocol exists for bit commitment. We then show that, by restricting the way commitments can be opened, we can do better, even achieving a constant rate in some cases. This is done by exploiting the notion of cover-free families. In the second article, we show that for certain problems there is a trade-off between robustness, validity and privacy. It is obtained using verifiable secret sharing, a zero-knowledge proof, the concept of ghosts, and a technique we call balls and bins. In our third contribution, we show that a large number of protocols in the literature based on computational assumptions can be instantiated from a primitive called Verifiable Oblivious Transfer, via the concept of Generalized Oblivious Transfer. The protocol uses secret sharing as a basic tool.
In the last publication, we construct an efficient constant-round protocol for two-party computation. The protocol's efficiency derives from replacing the core of a standard protocol with a primitive that works only moderately well but is very cheap. The protocol is protected against the resulting defects using the concept of privacy amplification.
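As one concrete illustration of the base tools listed above, the following is a minimal Python sketch of Shamir's (t, n) secret sharing over a prime field; it illustrates the primitive only and is not the thesis's construction.

import random

P = 2**61 - 1  # prime modulus; all arithmetic is in the field GF(P)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i;
    # the secret sits at x = 0.
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, t=3, n=5)
assert reconstruct(shares[:3]) == 1234   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 1234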
Abstract:
This paper describes a method for analyzing scoliosis trunk deformities using Independent Component Analysis (ICA). Our hypothesis is that ICA can capture the scoliosis deformities visible on the trunk. Unlike Principal Component Analysis (PCA), ICA gives local shape variation and assumes a non-Gaussian data distribution. 3D torso images of 56 subjects, including 28 patients with adolescent idiopathic scoliosis and 28 healthy subjects, are analyzed using ICA. First, we observe that the independent components capture local scoliosis deformities such as shoulder variation, scapula asymmetry and waist deformation. Second, we note that the different scoliosis curve types are characterized by different combinations of specific independent components.
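A minimal Python sketch of the PCA/ICA contrast described above, using scikit-learn; the data matrix is random stand-in data, with shapes assumed for illustration (56 subjects, flattened 3D torso coordinates), not the study's dataset.

import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
torso = rng.standard_normal((56, 3000))  # 56 subjects, flattened meshes

pca = PCA(n_components=10).fit(torso)
ica = FastICA(n_components=10, random_state=0).fit(torso)

# PCA components are orthogonal, global modes of variation; ICA
# components are statistically independent and, per the study above,
# tend to localize (shoulder, scapula, waist).
global_modes = pca.components_   # shape (10, 3000)
local_modes = ica.components_    # shape (10, 3000)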
Abstract:
The rapid developments in fields such as fibre-optic communication engineering and integrated optical electronics have expanded the interest in, and raised the expectations of, guided-wave optics, in which optical waveguides and optical fibres play a central role. The technology of guided-wave photonics now plays a role in generating information (guided-wave sensors) and processing information (spectral analysis, analog-to-digital conversion and other optical communication schemes), in addition to its original application of transmitting information (fibre-optic communication). Passive and active polymer devices have generated much research interest recently because of the versatility of the fabrication techniques and the potential applications in two important areas: short-distance communication networks, and special-functionality optical devices such as amplifiers, switches and sensors. Polymer optical waveguides and fibres are often designed to have large cores, 10-1000 micrometres in diameter, to facilitate easy connection and splicing. Large-diameter polymer optical fibres, being less fragile and vastly easier to work with than glass fibres, are attractive in sensing applications. Sensors using commercial plastic optical fibres are based on ideas already used in silica glass sensors, but exploit the flexible and cost-effective nature of plastic optical fibre for harsh environments and throw-away sensors. In the field of photonics, considerable attention is centring on the use of polymer waveguides and fibres, as they have great potential for creating all-optical devices. By attaching organic dyes to the polymer system, a variety of optical functions can be incorporated. Organic-dye-doped polymer waveguides and fibres are potential candidates for solid-state gain media. High-power, high-gain optical amplification in an organic-dye-doped polymer waveguide amplifier is possible due to the extremely large emission cross sections of dyes. An extensive choice of organic dye dopants is also possible, allowing amplification over a wide range of the visible region.
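For orientation, the link between large emission cross sections and high gain can be made explicit with the standard small-signal gain relations for a pumped gain medium (notation introduced here, not taken from the thesis):

g(\lambda) = \sigma_e(\lambda)\, N^{*}, \qquad G = e^{\,g(\lambda) L}

where \sigma_e is the emission cross section, N^{*} the excited-state population density and L the pumped length: a large \sigma_e yields a high gain G at modest inversion densities and device lengths, which is why dye-doped polymer waveguides are attractive as compact amplifiers.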
Abstract:
Ceramic dielectrics with a high dielectric constant in the microwave frequency range are used as filters, oscillators [1], etc. in microwave integrated circuits (MICs), particularly in modern communication systems such as cellular telephones and satellite communications. Such ceramics, known as dielectric resonators (DRs), not only offer miniaturisation and reduce the weight of the microwave components, but also improve the efficiency of MICs.
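The miniaturisation argument can be made quantitative: to first order, the resonant frequency of a dielectric resonator of linear dimension D scales as

f_0 \approx \frac{c}{D \sqrt{\varepsilon_r}}

so, at a fixed operating frequency, the required resonator size shrinks roughly as 1/\sqrt{\varepsilon_r}; high-permittivity ceramics thus directly enable smaller, lighter MIC components. (This scaling is standard background, not a result of the abstract above.)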
Abstract:
The present investigation on "Coconut Phenology and Yield Response to Climate Variability and Change" was undertaken at the experimental site at the Regional Station, Coconut Development Board, KAU Campus, Vellanikkara. Ten palms each of the eight-year-old coconut cultivars Tiptur Tall, Kuttiadi (WCT), Kasaragod (WCT) and Komadan (WCT) were randomly selected. The study reinforces our traditional knowledge that the coconut palm is sensitive to changing weather conditions during the period from primordium initiation to harvest of nuts (about 44 months). Absence of rainfall from December to May due to early withdrawal of the northeast monsoon, lack of pre-monsoon showers and late onset of the southwest monsoon adversely affect coconut productivity to a considerable extent in the following year under rainfed conditions. Productivity can be increased by irrigating the coconut palm during the dry periods. Increases in temperature, aridity index and the number of severe summer droughts, together with declines in rainfall and moisture index, were the major factors behind a marginal decline or stagnation in coconut productivity over time, even though various developmental schemes were in operation to sustain coconut production in the State of Kerala. This can be attributed to global warming and climate change. There is therefore a threat to coconut productivity in the ensuing decades due to climate variability and change, and an urgent need for proactive measures as part of climate change adaptation to sustain coconut productivity in the State of Kerala. Coconut productivity is more vulnerable to climate variability, such as summer droughts, than to climate change in terms of increased temperature and declining rainfall, though there was a marginal decrease (1.6%) during 1981-2009 when compared to 1951-80. This aspect needs to be examined in detail by coconut development agencies such as the Coconut Development Board and the State Agriculture Department so that remedial measures can be taken; otherwise, Kerala's premier position in coconut production is likely to be lost in the ensuing years under the projected climate change scenario. Among the four cultivars studied, Tiptur Tall appears superior in terms of reproductive phase and nut yield; this needs to be examined by coconut breeders in their crop improvement programmes as a source of stress tolerance under rainfed conditions. Crop mix and integrated farming appear to be the best combination to sustain development in the long run under the projected climate change scenarios. Increasing the coconut area under irrigation during summer, together with better crop management and protection measures, is also necessary to raise coconut productivity, since the frequency and intensity of summer droughts are likely to increase under the projected global warming scenario.
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals; this can significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of software bugs that are otherwise hard to detect more effective, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
Incorrect sequences of machine-code patterns are identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on an optimal data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly with machine-code patterns, which drastically reduces state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
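A minimal Python sketch of the redundant bank-switch detection idea, reduced to a straight-line code fragment; the instruction tuples and the BANKSEL mnemonic stand in for the PIC bank-selection sequences that the dissertation tracks with its relation matrix and state-transition diagram.

def redundant_bank_switches(instructions):
    """Flag bank-select instructions that re-select the already active bank."""
    active_bank = None
    redundant = []
    for idx, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":            # bank-selection pseudo-instruction
            if arg == active_bank:     # state unchanged: switch is redundant
                redundant.append(idx)
            active_bank = arg
    return redundant

program = [("BANKSEL", 1), ("MOVWF", 0x20),
           ("BANKSEL", 1), ("MOVWF", 0x21),  # redundant re-select of bank 1
           ("BANKSEL", 0), ("CLRF", 0x05)]
print(redundant_bank_switches(program))  # -> [2]

In the dissertation's setting the same state tracking runs over all paths of the control flow graph rather than a straight line, so a switch is redundant only if the active bank is identical on every incoming path.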