984 results for Printed circuit design
Abstract:
This thesis covers the design and software simulation of antenna geometries to be fabricated directly on a Printed Circuit Board (PCB) for Ultra Wide Band wireless data-transmission boards. The main objective of the study is a prototype for human biomedical applications (for example, transmitting data coming from an ECG). The aim of the work is to find the best integration solution for an antenna that is as compact as possible and can be fabricated directly on the substrate on which the transmitter circuit itself will be printed. The antenna is therefore realized exclusively with conductive microstrips (the same ones that form the interconnections between the circuit components), taking into account the parasitic quantities of each conductor, such as resistance, inductance and capacitance. The complete wireless transmission circuit with the antenna described above is currently being fabricated and will be tested in the laboratory in the near future.
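Not part of the thesis abstract, but as an illustration of how microstrip trace geometry relates to the electrical behaviour mentioned above, the following is a minimal Python sketch using the well-known Hammerstad closed-form approximation for the characteristic impedance of a microstrip line; the substrate values (1.6 mm FR-4 with relative permittivity around 4.4) are assumptions chosen only for the example.

```python
import math

def microstrip_z0(w: float, h: float, er: float) -> float:
    """Characteristic impedance (ohms) of a microstrip trace of width w over a
    substrate of thickness h and relative permittivity er (Hammerstad formulas).
    w and h must be in the same unit."""
    u = w / h
    if u <= 1.0:
        # Narrow traces (w/h <= 1)
        e_eff = (er + 1) / 2 + (er - 1) / 2 * ((1 + 12 / u) ** -0.5 + 0.04 * (1 - u) ** 2)
        return 60.0 / math.sqrt(e_eff) * math.log(8.0 / u + u / 4.0)
    # Wide traces (w/h > 1)
    e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 / u) ** -0.5
    return 120 * math.pi / (math.sqrt(e_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444)))

# Example: sweep trace widths on an assumed 1.6 mm FR-4 substrate (er ~ 4.4)
for w_mm in (0.5, 1.0, 2.0, 3.0):
    print(f"w = {w_mm} mm -> Z0 ~ {microstrip_z0(w_mm, 1.6, 4.4):.1f} ohm")
```

On these assumed values a trace width of roughly 3 mm lands near 50 ohms, which is the usual ballpark for a standard 1.6 mm FR-4 stack-up.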
Abstract:
This bachelor's thesis has two goals: to characterize Hall sensors in general and to test a Hall sensor from Asahi Kasei, the CQ-3300, which the University of Bologna owns. The work can therefore be divided into two distinct phases: • A first phase devoted entirely to studying the subject and surveying the Hall sensors available on the market. The physical phenomenon on which they are based, their main characteristics and their applications were examined in depth. Two sensors, in addition to the CQ-3300, were then selected from those on the market in order to characterize them and compare them with it. • A second phase devoted to testing the sensor in the electronics laboratory. In this phase the CQ-3300 Hall sensor was mounted on a PCB (Printed Circuit Board) and test circuits were built to verify that the sensor works correctly and to measure its actual operating bandwidth. The AC tests were carried out with a current generator able to convert a voltage signal into a current signal. Since this generator cannot deliver a current of amplitude greater than 1 A, measurements were kept well below that value. The University of Bologna needs to test this sensor's bandwidth because it has designed a Hall sensor with a similar working bandwidth, so it is important to verify whether the CQ-3300 actually achieves the bandwidth stated in its datasheet, namely 1 MHz.
Abstract:
OBJECTIVE: Patient-ventilator synchrony during non-invasive pressure support ventilation with the helmet device is often compromised when conventional pneumatic triggering and cycling-off are used. A possible solution to this shortcoming is to replace the pneumatic triggering with neural triggering and cycling-off using the diaphragm electrical activity (EAdi). This signal is insensitive to leaks and to the compliance of the ventilator circuit. DESIGN: Randomized, single-blinded, experimental study. SETTING: University hospital. PARTICIPANTS AND SUBJECTS: Seven healthy human volunteers. INTERVENTIONS: Pneumatic triggering and cycling-off were compared to neural triggering and cycling-off during NIV delivered with the helmet. MEASUREMENTS AND RESULTS: Triggering and cycling-off delays, wasted efforts, and breathing comfort were determined during restricted breathing efforts (<20% of maximum voluntary EAdi) with various combinations of pressure support (PSV) (5, 10, 20 cm H2O) and respiratory rates (10, 20, 30 breaths/min). During pneumatic triggering and cycling-off, subject-ventilator synchrony became progressively more impaired with increasing respiratory rate and levels of PSV (p < 0.001). During neural triggering and cycling-off, the effect of increasing respiratory rate and levels of PSV on subject-ventilator synchrony was minimal. Breathing comfort was higher during neural triggering than during pneumatic triggering (p < 0.001). CONCLUSIONS: The present study demonstrates in healthy subjects that subject-ventilator synchrony, trigger effort, and breathing comfort with a helmet interface are considerably less impaired during increasing levels of PSV and respiratory rates with neural triggering and cycling-off than with conventional pneumatic triggering and cycling-off.
Abstract:
In recent years the missing fourth circuit element, the memristor, was successfully synthesized. However, the mathematical complexity and variety of the models behind this component, together with convergence problems in simulation, make the design of memristor-based applications long and difficult. In this work we present a memristor model characterization framework that supports the automated generation of subcircuit files. The proposed environment allows the designer to choose and parameterize the memristor model that best suits a given application. The framework carries out characterization simulations in order to study possible non-convergence problems, removing the dependence on the simulation conditions and guaranteeing the functionality and performance of the design. Additionally, the occurrence of undesirable effects related to PVT variations is also taken into account: by performing a Monte Carlo or a corner analysis, the designer becomes aware of the safety margins that assure correct device operation.
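The abstract gives no code; as a rough illustration of the kind of model parameterization and Monte Carlo characterization it describes (not the paper's actual framework), here is a minimal Python sketch built around the linear ion-drift memristor model. The nominal parameter values and the ±10% spread are assumptions for the example only.

```python
import random
import math

def simulate_linear_drift(r_on, r_off, d, mu_v, v_amp=1.0, f=1.0, steps=20000, periods=2.0):
    """Euler integration of the linear ion-drift memristor model driven by a sine voltage.
    Returns the min/max memristance reached, as a crude functionality check."""
    w = 0.5 * d                      # initial dopant-layer width
    dt = periods / f / steps
    m_min, m_max = float("inf"), 0.0
    for n in range(steps):
        t = n * dt
        m = r_on * (w / d) + r_off * (1 - w / d)    # instantaneous memristance
        v = v_amp * math.sin(2 * math.pi * f * t)
        i = v / m
        w += mu_v * r_on / d * i * dt               # dopant drift
        w = min(max(w, 0.0), d)                     # hard window: keep w inside the device
        m_min, m_max = min(m_min, m), max(m_max, m)
    return m_min, m_max

# Monte Carlo check: vary nominal parameters (illustrative values, +/-10% spread)
random.seed(1)
nominal = dict(r_on=100.0, r_off=16e3, d=10e-9, mu_v=1e-14)
for run in range(5):
    p = {k: v * random.uniform(0.9, 1.1) for k, v in nominal.items()}
    m_lo, m_hi = simulate_linear_drift(**p)
    print(f"run {run}: memristance swings {m_lo:.0f} .. {m_hi:.0f} ohm")
```

A real characterization flow would replace this toy integrator with circuit-simulator runs on the generated subcircuit files, but the pattern of sampling parameters and checking that the device still switches is the same.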
Abstract:
Dynamic thermal management techniques require a collection of on-chip thermal sensors, which imply a significant area and power overhead. Finding the optimum number of temperature monitors and their location on the chip surface so as to optimize accuracy is an NP-hard problem. In this work we improve the modeling of the problem by including area, power and networking constraints, along with the consideration of three inaccuracy terms: spatial errors, sampling rate errors and monitor-inherent errors. The problem is solved with a simulated annealing algorithm. We apply the algorithm to a test case employing three different types of monitors to highlight the importance of the different metrics. Finally, we present a case study of the Alpha 21364 processor under two different constraint scenarios.
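As an illustration of the simulated-annealing formulation (and not the paper's cost model, which also covers area, power, networking and the three error terms), the following minimal sketch places a fixed number of monitors so as to minimize the worst-case distance from a set of assumed hot spots to the nearest monitor; all coordinates and counts are hypothetical.

```python
import math
import random

# Hypothetical hot-spot coordinates on a 10x10 mm die (for illustration only)
HOTSPOTS = [(1, 2), (3, 8), (5, 5), (8, 1), (9, 9), (7, 4)]
N_MONITORS = 3

def cost(placement):
    """Worst-case distance from any hot spot to its nearest monitor
    (a stand-in for the spatial-error term)."""
    return max(min(math.dist(h, m) for m in placement) for h in HOTSPOTS)

def neighbour(placement, step=1.0):
    """Perturb one monitor position, staying on the 10x10 die."""
    p = list(placement)
    i = random.randrange(len(p))
    x, y = p[i]
    p[i] = (min(max(x + random.uniform(-step, step), 0), 10),
            min(max(y + random.uniform(-step, step), 0), 10))
    return p

random.seed(0)
current = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_MONITORS)]
best, best_cost = current, cost(current)
temperature = 5.0
while temperature > 1e-3:
    cand = neighbour(current)
    delta = cost(cand) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = cand
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    temperature *= 0.995          # geometric cooling schedule

print("monitor positions:", [(round(x, 2), round(y, 2)) for x, y in best])
print("worst-case distance to nearest monitor:", round(best_cost, 2))
```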
Abstract:
Evolvable hardware (EH) is an interesting alternative to conventional digital circuit design, since the autonomous generation of solutions for a given task allows the system to adapt itself to changing environments, and such systems show inherent fault tolerance when evolution is performed intrinsically. Systems based on FPGAs that use Dynamic and Partial Reconfiguration (DPR) to evolve the circuit are an example. Thanks to DPR, these systems can also be made scalable, a feature that allows a system to change the number of allocated resources at run time in order to vary some property, such as performance. The combination of both aspects leads to scalable evolvable hardware (SEH), whose size becomes an extra degree of freedom when searching for the optimal solution by means of evolution. The main contributions of this paper are an architecture for a scalable and evolvable hardware processing array system, some preliminary evolution strategies that take scalability into consideration, and experimental results showing the benefits of combining evolution and scalability. A digital image filtering application is used as the use case.
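The sketch below is only a software analogue of the evolutionary loop behind such systems (the paper itself evolves a hardware processing array through DPR): a minimal (1+4) evolution strategy that evolves a 3x3 convolution kernel toward the output of a reference image filter. The image size, kernel size and strategy settings are assumptions for illustration.

```python
import random

random.seed(42)

# Toy "image" and reference output produced by a target 3x3 mean filter
W, H = 12, 12
image = [[random.random() for _ in range(W)] for _ in range(H)]
TARGET = [[1 / 9.0] * 3 for _ in range(3)]

def convolve(img, kernel):
    out = [[0.0] * W for _ in range(H)]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

REFERENCE = convolve(image, TARGET)

def fitness(kernel):
    """Sum of absolute errors against the reference filter output (lower is better)."""
    out = convolve(image, kernel)
    return sum(abs(out[y][x] - REFERENCE[y][x]) for y in range(H) for x in range(W))

def mutate(kernel, rate=0.2, sigma=0.05):
    return [[w + random.gauss(0, sigma) if random.random() < rate else w
             for w in row] for row in kernel]

# (1 + 4) evolution strategy: keep the parent unless a child improves on it
parent = [[random.uniform(-0.2, 0.2) for _ in range(3)] for _ in range(3)]
for gen in range(300):
    children = [mutate(parent) for _ in range(4)]
    parent = min(children + [parent], key=fitness)

print("final error:", round(fitness(parent), 4))
```

In the hardware case the candidate "kernel" would instead be a configuration loaded into the reconfigurable array, and scalability would add the array size itself to the search space.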
Abstract:
The use of ideal equivalent circuits that represent the electrical response of a physical device is a very common practice in microwave and millimeter-wave circuit design. However, these equivalent circuit models only give accurate results within a certain frequency range. In this document some basic devices have been analyzed, with several equivalent circuit models for each one. The physical devices have been manufactured and measured, and the measurements have been compared with the responses obtained from the equivalent models. From this comparison it is possible to see which equivalent models are more advisable in each case and what their limitations are.
Abstract:
Electrical and electronic equipment (EEE) plays a major role in contemporary society and is ubiquitously present in everyday life. The increase in EEE consumption, together with early obsolescence, has given rise to a new type of waste, called Waste Electrical and Electronic Equipment (WEEE). WEEE should not be sent to landfill, because of the risk of contamination and the waste of material, so recycling is necessary. Printed circuit boards (PCBs) are present in the vast majority of WEEE and contain the widest variety of metals, including valuable metals such as Au, Ag, Pt and Cu. This complexity makes recycling the boards very difficult. Hydrometallurgical routes have emerged as a cleaner alternative to pyrometallurgical processes for treating PCBs; in these routes the metals are extracted by leaching with acids or bases. The ultrasonic effect has been used in the synthesis of new substances and, in some cases, in waste treatment. The central process in the use of ultrasound is acoustic cavitation, which produces microbubbles in the solution with local temperatures of the order of 5000 K and pressures of 500 atm. In addition, the implosion of the cavitation bubbles in a heterogeneous medium drives a jet of solution against the surface that can reach velocities of 100 m/s. This thesis therefore investigates the effect of ultrasound on obsolete PCBs. Two effects were studied: the comminution of the PCBs promoted by cavitation, and the influence of cavitation on the sulfuric acid leaching of Fe, Al and Ni. The comminution parameters investigated were the type of board, the particle size of the milled boards and the ultrasonic power. The leaching parameters evaluated were the solid-to-liquid ratio (S/L), the acid concentration and the ultrasonic power. Sonicated leaching tests in an oxidizing medium were also performed. A kinetic analysis was carried out to determine the controlling step of the leaching reaction.
Abstract:
The growth in worldwide consumption of new electrical and electronic devices, combined with the shorter useful life of this equipment, has waste generation as its main environmental consequence. In Brazil, the National Solid Waste Policy made manufacturers legally responsible for the reverse logistics of electrical and electronic equipment, encouraging research into recycling methods and the treatment of discarded materials. The leaching process was evaluated as an alternative to the magnetic separation step used in current hydrometallurgical routes for recovering valuable metals from printed circuit boards. To determine the composition of the boards, a dissolution test in aqua regia was carried out. The samples were milled and subjected to leaching tests with sulfuric acid at concentrations of 1 and 2 mol/L, at temperatures of 75 °C, 85 °C and 95 °C, for 24 hours. With 2 mol/L sulfuric acid at 95 °C, the time required to obtain 100% iron extraction was 2 hours. Under these conditions no dissolved copper was detected. The reaction kinetics are controlled by the chemical reaction and follow the equation k·t = 1 − (1 − X_B)^(1/3). The apparent activation energy of the process is 90 kJ/mol.
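Not from the thesis, but as a quick worked example of what the reported 90 kJ/mol apparent activation energy implies, the Arrhenius relation gives the ratio of rate constants between the tested temperatures. The sketch below only evaluates that ratio and assumes nothing beyond the figures quoted in the abstract.

```python
import math

R = 8.314           # gas constant, J/(mol K)
EA = 90e3           # apparent activation energy reported in the abstract, J/mol
T_REF = 95 + 273.15

# Relative rate constant k(T)/k(95 C) from the Arrhenius equation:
#   k(T) = A * exp(-Ea / (R T))  =>  k(T)/k(Tref) = exp(-Ea/R * (1/T - 1/Tref))
for t_c in (75, 85, 95):
    t_k = t_c + 273.15
    ratio = math.exp(-EA / R * (1 / t_k - 1 / T_REF))
    print(f"{t_c} C: k is {ratio:.2f} x the rate constant at 95 C")
```

With an activation energy of that magnitude the rate constant at 75 °C comes out at roughly one fifth of its value at 95 °C, so the 2-hour extraction observed at 95 °C would be expected to take several times longer at 75 °C, other factors being equal.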
Abstract:
The traditional process for recovering metals from waste electrical and electronic equipment (WEEE) generally involves pyrometallurgical processing. However, using this technology to process obsolete printed circuit boards (PCBs) can release dioxins and furans, owing to the thermal decomposition of the flame retardants and polymeric resins present in the board substrate. This work therefore proposes a hydrometallurgical route for metal recovery. The behaviour of the metals, notably copper, zinc and nickel, during acid leaching was studied at three temperatures (35 °C, 65 °C and 75 °C), with and without the addition of an oxidizing agent (hydrogen peroxide, H2O2). The acid dissolution kinetics of these metals were studied on the basis of chemical analysis by ICP-OES (inductively coupled plasma optical emission spectrometry) and EDX (energy-dispersive X-ray fluorescence spectroscopy). The mass balance and the chemical analysis indicated that leaching without added oxidant is not very effective for extracting the metals, accounting for less than 6% of the total extracted. At 65 °C in 1 mol/L H2SO4, with 5 mL of H2O2 (30%) added every fifteen minutes and a pulp density of 1 g per 10 mL, 98.1% of the copper, 99.9% of the zinc and 99.0% of the nickel were extracted after 4 hours. The dissolution kinetics of these metals are controlled by the chemical reaction step and follow, depending on the temperature, either the equation 1 − (1 − X_B)^(1/3) = k1·t or the equation ln(1 − X_B) = −k4·t.
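As an illustration of how one discriminates between the two rate laws quoted above (not code from the work itself), the sketch below linearizes each model against time and compares the least-squares fit; the conversion-versus-time data are hypothetical, generated to roughly follow the shrinking-core law.

```python
import math

# Hypothetical leaching data: time (hours) vs fraction of metal extracted
times = [0.5, 1.0, 2.0, 3.0, 4.0]
conversions = [0.21, 0.39, 0.66, 0.83, 0.94]

MODELS = {
    "chemical reaction control: 1-(1-X)^(1/3) = k*t":
        lambda x: 1 - (1 - x) ** (1 / 3),
    "first order: -ln(1-X) = k*t":
        lambda x: -math.log(1 - x),
}

for name, g in MODELS.items():
    gx = [g(x) for x in conversions]
    # Least-squares slope through the origin: k = sum(t*g) / sum(t^2)
    k = sum(t * y for t, y in zip(times, gx)) / sum(t * t for t in times)
    ss_res = sum((y - k * t) ** 2 for t, y in zip(times, gx))
    print(f"{name}: k = {k:.3f} 1/h, residual sum of squares = {ss_res:.4f}")
```

Whichever linearized form gives the smaller residual (and a rate constant that is stable across the data) is taken as the controlling mechanism; repeating the fit at several temperatures then yields the Arrhenius activation energy.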
Abstract:
Paper submitted to the 7th International Symposium on Feedstock Recycling of Polymeric Materials (7th ISFR 2013), New Delhi, India, 23-26 October 2013.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
This special issue of the Journal of the Operational Research Society is dedicated to papers on the related subjects of knowledge management and intellectual capital. These subjects continue to generate considerable interest amongst both practitioners and academics. This issue demonstrates that operational researchers have many contributions to offer to the area, especially by bringing multi-disciplinary, integrated and holistic perspectives. The papers included are both theoretical and practical, and include a number of case studies showing how knowledge management has been implemented in practice, which may assist other organisations in their search for a better means of managing what is now recognised as a core organisational activity. It has been accepted by a growing number of organisations that the precise handling of information and knowledge is a significant factor in facilitating their success, but that there is a challenge in how to implement a strategy and processes for this handling. It is here, in the particular area of knowledge process handling, that we can see the contributions of operational researchers most clearly, as is illustrated in the papers included in this journal edition. The issue comprises nine papers, contributed by authors based in eight different countries on five continents. Lind and Seigerroth describe an approach that they call team-based reconstruction, intended to help articulate knowledge in a particular organisational context. They illustrate the use of this approach with three case studies, two in manufacturing and one in public sector health care. Different ways of carrying out reconstruction are analysed, and the benefits of team-based reconstruction are established. Edwards and Kidd, and Connell, Powell and Klein both concentrate on knowledge transfer. Edwards and Kidd discuss the issues involved in transferring knowledge across frontières (borders) of various kinds, from those borders within organisations to those between countries. They present two examples, one in distribution and the other in manufacturing. They conclude that trust and culture both play an important part in facilitating such transfers, that IT should be kept in a supporting role in knowledge management projects, and that a staged approach to this IT support may be the most effective. Connell, Powell and Klein consider the oft-quoted distinction between explicit and tacit knowledge, and argue that such a distinction is sometimes unhelpful. They suggest that knowledge should rather be regarded as a holistic systemic property. The consequences of this for knowledge transfer are examined, with a particular emphasis on what this might mean for the practice of OR. Their view of OR in the context of knowledge management very much echoes Lind and Seigerroth's focus on knowledge for human action. This is an interesting convergence of views given that, broadly speaking, one set of authors comes from within the OR community and the other from outside it. Hafeez and Abdelmeguid present the closest to a 'hard' OR contribution of the papers in this special issue. In their paper they construct and use system dynamics models to investigate alternative ways in which an organisation might close a knowledge gap or skills gap. The methods they use have the potential to be generalised to any other quantifiable aspects of intellectual capital. The contribution by Revilla, Sarkis and Modrego is also at the 'hard' end of the spectrum.
They evaluate the performance of public–private research collaborations in Spain, using an approach based on data envelopment analysis. They found that larger organisations tended to perform relatively better than smaller ones, even though the approach used takes scale effects into account. Perhaps more interesting was that many factors that might have been thought relevant, such as the organisation's existing knowledge base or how widely applicable the results of the project would be, had no significant effect on the performance. It may be that how well the partnership between the collaborators works (not a factor it was possible to take into account in this study) is more important than most other factors. Mak and Ramaprasad introduce the concept of a knowledge supply network. This builds on existing ideas of supply chain management, but also integrates the design chain and the marketing chain, to address all the intellectual property connected with the network as a whole. The authors regard the knowledge supply network as the natural focus for considering knowledge management issues. They propose seven criteria for evaluating knowledge supply network architecture, and illustrate their argument with an example from the electronics industry: integrated circuit design and fabrication. Hasan and Crawford's interest lies in the holistic approach to knowledge management. They demonstrate their argument, that there is no simple IT solution for organisational knowledge management efforts, through two case study investigations. These case studies, in Australian universities, are investigated through cultural historical activity theory, which focuses the study on the activities that are carried out by people in support of their interpretations of their role, the opportunities available and the organisation's purpose. Human activities, it is argued, are mediated by the available tools, including IT and IS and, in this particular context, KMS. It is this argument that places the available technology into the knowledge activity process and permits the future design of KMS to be improved through the lessons learnt by studying these knowledge activity systems in practice. Wijnhoven concentrates on knowledge management at the operational level of the organisation. He is concerned with studying the transformation of certain inputs to outputs (the operations function) and the consequent realisation of organisational goals via the management of these operations. He argues that the inputs and outputs of this process in the context of knowledge management are different types of knowledge, and he names this operation method knowledge logistics. The method of transformation he calls learning. This theoretical paper discusses the operational management of four types of knowledge objects (explicit understanding, information, skills, and norms and values) and shows how, through the proposed framework, learning can transfer these objects to clients in a logistical process without a major transformation in content. Millie Kwan continues this theme with a paper about process-oriented knowledge management. In her case study she discusses an implementation of knowledge management where the knowledge is centred around an organisational process, and the mission, rationale and objectives of the process define the scope of the project. In her case they are concerned with the effective use of real estate (property and buildings) within a Fortune 100 company.
In order to manage the knowledge about this property and the process by which the best 'deal' for internal customers and the overall company was reached, a KMS was devised. She argues that process knowledge is a source of core competence and thus needs to be strategically managed. Finally, you may also wish to read a related paper originally submitted for this Special Issue, 'Customer knowledge management' by Garcia-Murillo and Annabi, which was published in the August 2002 issue of the Journal of the Operational Research Society, 53(8), 875–884.
Abstract:
This paper presents two hybrid genetic algorithms (HGAs) to optimize the component placement operation for collect-and-place machines in printed circuit board (PCB) assembly. The component placement problem is to optimize simultaneously (i) the assignment of components to a movable revolver head or assembly tour, (ii) the sequence of component placements on a stationary PCB in each tour, and (iii) the arrangement of component types on stationary feeders. The objective is to minimize the total traveling time spent by the revolver head in assembling all components on the PCB. The major difference between the HGAs lies in the initialization: the initial solutions are generated randomly in HGA1, whereas the Clarke and Wright savings method, the nearest neighbor heuristic, and the neighborhood frequency heuristic are incorporated into HGA2 to build the initial population. A computational study is carried out to compare the algorithms with different population sizes, and it shows that the performance of HGA2 is superior to that of HGA1 in terms of total assembly time.
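To make the hybrid-initialization idea concrete (a sketch, not the authors' algorithm, which also handles feeder assignment and head tours), the following minimal genetic algorithm optimizes a single placement tour and seeds the population with one nearest-neighbour tour; the board coordinates, population size and operators are assumptions chosen for the example.

```python
import random
import math

# Hypothetical placement coordinates on the board (x, y in mm)
random.seed(7)
POINTS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def tour_length(order):
    return sum(math.dist(POINTS[order[i]], POINTS[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour_tour(start=0):
    """Greedy seed tour, standing in for the heuristic initialization of HGA2."""
    unvisited = set(range(len(POINTS))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(POINTS[last], POINTS[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def order_crossover(a, b):
    """Copy a slice from parent a, fill the rest in the order of parent b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(order):
    i, j = sorted(random.sample(range(len(order)), 2))
    order[i:j] = reversed(order[i:j])   # 2-opt style segment reversal
    return order

# Population: mostly random tours plus one nearest-neighbour seed (the "hybrid" part)
pop = [random.sample(range(len(POINTS)), len(POINTS)) for _ in range(29)]
pop.append(nearest_neighbour_tour())
for gen in range(200):
    pop.sort(key=tour_length)
    parents = pop[:10]                               # truncation selection
    children = [mutate(order_crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

print("best tour length:", round(tour_length(min(pop, key=tour_length)), 1))
```

Seeding even one good heuristic solution typically lets the population converge faster than a purely random start, which is the effect the computational study above quantifies for the full placement problem.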
Abstract:
It is indisputable that printed circuit boards (PCBs) play a vital role in our daily lives. With the ever-increasing applications of PCBs, one of the crucial ways to increase a PCB manufacturer's competitiveness in terms of operational efficiency is to minimize production time so that products can be introduced to the market sooner. Optimal Production Planning for PCB Assembly is the first book to focus on optimizing the efficiency of PCB assembly lines. This is done by: • integrating the component sequencing and feeder arrangement problems for both the pick-and-place machine and the chip shooter machine; • constructing mathematical models and developing an efficient and effective heuristic solution approach for the integrated problems for both types of placement machines, the line assignment problem, and the component allocation problem; and • developing a prototype of the PCB assembly planning system. The techniques proposed in Optimal Production Planning for PCB Assembly will enable process planners in the electronics manufacturing industry to improve assembly line efficiency in their companies. Graduate students in operations research can familiarise themselves with the techniques and applications of mathematical modeling after reading this advanced introduction to optimal production planning for PCB assembly.