921 results for ChIp-chip
Abstract:
In this report, we describe the microfabrication and integration of planar electrodes for contactless conductivity detection on polyester-toner (PT) electrophoresis microchips using toner masks. Planar electrodes were fabricated in three simple steps: (i) drawing and laser-printing the electrode geometry on polyester films, (ii) sputtering deposition onto the substrates, and (iii) removal of the toner layer by a lift-off process. The polyester film with anchored electrodes was integrated with the PT electrophoresis microchannels by lamination at 120 °C in less than 1 min. The electrodes were designed in an antiparallel configuration, 750 µm wide and with a 750 µm gap between them. The best results were recorded with a sinusoidal wave at a frequency of 400 kHz and an amplitude of 10 V peak-to-peak. The analytical performance of the proposed microchip was evaluated by electrophoretic separation of potassium, sodium, and lithium in 150 µm wide × 6 µm deep microchannels. Under an electric field of 250 V/cm, the analytes were successfully separated in less than 90 s with efficiencies ranging from 7000 to 13 000 plates. The detection limits (S/N = 3) found for K+, Na+, and Li+ were 3.1, 4.3, and 7.2 µmol/L, respectively. Besides its low cost and instrumental simplicity, the integrated PT chip eliminates the manual alignment and gluing of the electrodes, affording more robustness and better reproducibility and therefore making it more suitable for mass production of electrophoresis microchips.
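A note on the figures of merit quoted above: the abstract does not spell out the formulas, but separation efficiency and S/N = 3 detection limits are conventionally computed from the electropherogram as

N = 5.54 \left( \frac{t_m}{w_{1/2}} \right)^2, \qquad \mathrm{LOD} = \frac{3\,\sigma_{\text{noise}}}{m}

where t_m is the analyte migration time, w_{1/2} the peak width at half height, \sigma_{\text{noise}} the baseline noise, and m the slope of the calibration curve.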
Abstract:
In this report, we describe a rapid and reliable process for bonding channels fabricated in glass substrates. Glass channels were fabricated by photolithography and wet chemical etching. The resulting channels were bonded against another glass plate containing a 50 µm thick PDMS layer. This same PDMS layer also provided the electrical insulation of the planar electrodes used to carry out capacitively coupled contactless conductivity detection. The analytical performance of the proposed device was demonstrated using both LIF and capacitively coupled contactless conductivity detection systems. Efficiency around 47 000 plates/m was achieved, with good chip-to-chip repeatability and satisfactory long-term stability of the EOF. The RSD for the EOF measured in three different devices was ca. 7%. For a chip-to-chip comparison, the RSD values for migration time, electrophoretic current, and peak area were below 10%. With the proposed approach, a single chip can be fabricated in less than 30 min, including the patterning, etching, and sealing steps. This fabrication process is faster and easier than thermal bonding. Moreover, the proposed method does not require high temperatures and provides excellent day-to-day and device-to-device repeatability.
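For reference, the EOF and repeatability figures above are typically derived from standard CE relations (sketched here; the abstract itself does not state them): the EOF mobility follows from the migration time of a neutral marker, and the variation is reported as a relative standard deviation,

\mu_{\mathrm{EOF}} = \frac{L_d \, L_t}{V \, t_{\mathrm{eof}}}, \qquad \mathrm{RSD} = \frac{s}{\bar{x}} \times 100\%

with L_d the channel length to the detector, L_t the total channel length, V the applied voltage, t_eof the marker migration time, s the sample standard deviation, and \bar{x} the mean.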
Abstract:
A variety of substrates have been used for the fabrication of microchips for DNA extraction, PCR amplification, and DNA fragment separation, including the more conventional glass and silicon as well as alternative polymer-based materials. Polyester represents one such polymer, and the laser-printing of toner onto polyester films has been shown to be effective for generating polyester-toner (PeT) microfluidic devices with channel depths on the order of tens of micrometers. Here, we describe a novel and simple process that allows for the production of multilayer, high-aspect-ratio PeT microdevices with substantially larger channel depths. This innovative process utilizes a CO2 laser to create the microchannel in polyester sheets containing a uniform layer of printed toner; multilayer devices can easily be constructed by sandwiching the channel layer between uncoated cover sheets of polyester containing precut access holes. The process allows the fabrication of deep channels, approximately 270 µm deep, and we demonstrate the effectiveness of multilayer PeT microchips for dynamic solid phase extraction (dSPE) and PCR amplification. With the former, we found that (i) more than 65% of the DNA in 0.6 µL of blood was recovered, (ii) the resultant DNA was concentrated to greater than 3 ng/µL (better than other chip-based extraction methods), and (iii) the DNA recovered was compatible with downstream microchip-based PCR amplification. Illustrating the compatibility of PeT microchips with the PCR process, the successful amplification of a 520 bp fragment of lambda-phage DNA in a conventional thermocycler is shown. The ability to handle the diverse chemistries associated with DNA purification and extraction is a testament to the potential utility of PeT microchips beyond separations and presents a promising new disposable platform for genetic analysis that is low cost and easy to fabricate.
Abstract:
The recent advances in CMOS technology have allowed the fabrication of transistors with submicron dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that can incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied or not. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimulus-dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns, which is obviously infeasible given the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay and are thus referred to as topological timing analyzers. However, this method may result in overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques, and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the central concern of this thesis. In addition, as a necessary step to set the scenario, a detailed and systematic study of functional timing analysis is also presented.
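To make the contrast with topological analysis concrete, the sketch below computes the critical delay of a combinational block modeled as a directed acyclic graph, i.e., the pessimistic topological estimate the thesis starts from (gate names and delays are invented for illustration; a functional analyzer would additionally discard false paths):

# Minimal topological timing analysis: longest path in a DAG.
# Gate names and delay values are illustrative placeholders.
from graphlib import TopologicalSorter

# gate -> (delay, fan-in gates); primary inputs have no fan-ins
circuit = {
    "in1": (0.0, []), "in2": (0.0, []),
    "g1": (1.2, ["in1", "in2"]),   # e.g. a NAND gate
    "g2": (0.8, ["in1"]),          # e.g. an inverter
    "out": (1.5, ["g1", "g2"]),    # a complex gate merging both paths
}

def critical_delay(circuit):
    deps = {g: set(fanin) for g, (_, fanin) in circuit.items()}
    arrival = {}
    for g in TopologicalSorter(deps).static_order():
        delay, fanin = circuit[g]
        arrival[g] = delay + max((arrival[f] for f in fanin), default=0.0)
    return max(arrival.values())

print(critical_delay(circuit))  # 2.7; pessimistic if the longest path is false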
Abstract:
This work presents a method for detecting faults in the operation of rotating machines, based on changes in the system's vibration pattern and on the diagnosis of the operating condition by fuzzy logic. The observed changes are analyzed and serve as parameters for predicting incipient faults, as well as their evolution over the operating condition, enabling predictive maintenance tasks. A mechanical structure called the Rotating System (Figure 1), suitable for fault simulations, is used. Vibration data are acquired from the machine using a low-power biaxial accelerometer chip, whose outputs are read directly by a microprocessor counter, requiring no A/D converter. A digital signal processing development system based on the TMS320C25 microprocessor, the Psi25, is employed to acquire the vibration signals (*.dat) from the Rotating System. The *.dat files are processed with the computational mathematics tool Matlab 5 and the SPTOOL program, establishing the vibration pattern called the spectral signature of the Rotating System (Figure 2). The data are analyzed by the fuzzy expert system, duly calibrated for the process in question. The rotation frequency of the flywheel shaft and the vibration amplitudes inherent to each fault situation are taken as parameters for differentiation and decision making in the expert system's diagnosis of the operating state. The faults introduced in this work are imbalances of the flywheel shaft (Figure 1), created by inserting unbalancing elements. The mass ratio between the flywheel and the smallest unbalancing element is 1:10000. Drawing on expert knowledge of normal operating conditions and their harmful consequences, elements of different masses are used to introduce faults and to diagnose the operating state with the fuzzy system, which presents the diagnosis both qualitatively (normal; incipient fault; maintenance; danger) and quantitatively, thus making it possible to detect a fault and track its evolution.
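As a rough illustration of the qualitative diagnosis stage described above (the membership breakpoints below are invented for the example; the actual system was calibrated for the rig and also uses the shaft rotation frequency):

# Toy fuzzy classification of a vibration amplitude reading into the
# four diagnostic states named in the work; breakpoints are illustrative.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def diagnose(amplitude):
    states = {
        "normal":          tri(amplitude, -1.0, 0.0, 2.0),
        "incipient fault": tri(amplitude,  1.0, 3.0, 5.0),
        "maintenance":     tri(amplitude,  4.0, 6.0, 8.0),
        "danger":          tri(amplitude,  7.0, 10.0, 100.0),  # open-ended right tail
    }
    return max(states, key=states.get), states

state, memberships = diagnose(4.8)
print(state, memberships)   # -> "maintenance", with a nonzero "incipient fault" degree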
Abstract:
With the ever-increasing demands for high-complexity consumer electronic products, market pressures demand faster product development at lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial percentage of the total production cost. Analog testing costs may dominate the total test cost, as testing analog circuits usually requires functional verification of the circuit and special testing procedures. For the RF analog circuits commonly used in wireless applications, testing is further complicated by the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit-dependent, requiring reconfiguration of the circuit under test, and are generally not usable in RF circuits. In the SoC environment, the available processing and memory resources could be used in the test. However, the overhead of adding extra A/D and D/A converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work, a simple and low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is permanently connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase. Thanks to the simplicity of the converter, it can reach higher frequencies and enables the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics such as intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100 MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional op-amps were obtained. The proposed method enhances the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products today.
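Since the test strategies are frequency-domain based, here is a minimal sketch of how an intermodulation check might be run on digitized samples (the DUT model, tone frequencies, and nonlinearity are placeholders, not the thesis's circuits):

# Two-tone test: look for third-order intermodulation products in the FFT
# of the digitized DUT output. All numbers here are illustrative.
import numpy as np

fs = 1e6                          # sample rate (Hz)
n = 4096
t = np.arange(n) / fs
f1, f2 = 90e3, 110e3              # two-tone stimulus
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

dut = lambda v: v + 0.01 * v**3   # stand-in for a weakly nonlinear DUT
spectrum = np.abs(np.fft.rfft(dut(x) * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def level_at(f):
    return spectrum[np.argmin(np.abs(freqs - f))]

# third-order products fall at 2*f1 - f2 and 2*f2 - f1 (70 kHz and 130 kHz)
print(level_at(2 * f1 - f2), level_at(2 * f2 - f1))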
Abstract:
Part of the decentralization process begun in Brazil at the end of the 1980s, the movement toward greater permeability of the State resulted in the definition of Policy Management Councils (Conselhos Gestores de Políticas) as central pieces of social policy throughout the country. However, Brazil's heterogeneity, of continental proportions, demands that public policies and institutions be adapted and adjusted to respond to diverse local realities. This work is based on an exploratory case study that asks how far the Councils reach as an arena for discussion, for forwarding demands, and for solving the problems of the population inhabiting the area considered "rural" in Juruti, an Amazonian municipality full of specificities and challenges common to the region. Although we observe the existence of groups, associations, and similar spaces in the communities for discussing needs and addressing demands through the engaged action of local leaders of various profiles, what we find is that many are unaware of the already institutionalized channels (such as the Municipal Councils) for forwarding their demands. By operating apart from the legally constituted spaces, the communities lose the opportunity to participate more actively in setting the municipal agenda, in addition to remaining outside the mechanisms of social control and access to public resources. The fulfillment of their demands becomes a bargaining chip rather than claimed rights satisfied; it becomes a set of achievements that lose their symbolic role in strengthening community organization. In an Amazonian rural reality like Juruti, it seems fundamental to discuss so-called "grassroots work," which entails considering highly relevant variables of travel cost and time, as well as the necessary regionalization of interests and demands. It also seems indispensable to reopen the discussion on the parameters for defining the "rural" in the country, so as to include the many Brazils and enable diagnoses that allow particularities such as the Amazonian ones to be reflected in public policies, in municipal management models, and in adequate participatory spaces that engage with the local level.
Abstract:
Modern applications in fields such as multimedia and telecommunications demand architectures that offer high processing rates. However, standards and algorithms change with incredible speed, so these digital systems must also be highly flexible. In this context, reconfigurable architectures in general and, more recently, reconfigurable systems on a single chip emerge as suitable solutions that can deliver performance while remaining adaptable to new problems and to broader classes of algorithms within a given application scope. This work presents the state of the art in reconfigurable architectures in both academia and industry, and describes every development stage of the DRIP (Dynamically Reconfigurable Image Processor), from its origins as a static processor to its latest run-time reconfigurable version. The DRIP has a pipeline composed of 81 elementary processors. These processors are the key to the reconfiguration process and are capable of computing a large number of image processing algorithms, more specifically within the domain of digital image filtering. During the project, a series of hardware description language models of the architecture were developed, along with software tools to assist in implementing new algorithms, automatically generating VHDL models, and validating the implementations. Developing mechanisms to support dynamic reconfiguration naturally introduces overheads in the architecture. Nevertheless, the reconfiguration process of the DRIP-RTR is on the order of millions of times faster than in the statically reconfigurable version implemented on Altera FPGAs. The work concludes with the logic and electrical design of the DRIP's elementary processor, targeting a future implementation of the system directly as a VLSI circuit.
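A sketch of the class of computation each elementary processor performs, a neighborhood operation over a pixel window, is given below (the kernel and image are placeholders; DRIP realizes this in hardware as a pipeline rather than nested software loops):

# Software sketch of 3x3 neighborhood image filtering, the kind of digital
# image filtering DRIP targets. Kernel and image are illustrative.
import numpy as np

def filter3x3(image, kernel):
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = image[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = (window * kernel).sum()   # one window = one elementary operation
    return out

blur = np.full((3, 3), 1 / 9)      # simple averaging (low-pass) kernel
image = np.random.rand(8, 8)
print(filter3x3(image, blur))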
Abstract:
This work analyzes cabinet formation in the state government of Espírito Santo over the period 1995-2014. It starts from the debate on Brazilian coalition presidentialism and its applications at the subnational level, reinforcing the importance of case studies and comparative studies. The political trajectory of Espírito Santo is revisited, highlighting the crisis period of the 1990s and the institutional turnaround of the early 2000s. The composition of the Legislative Assembly over the period is also examined, given its importance for understanding relations between the Executive and the Legislature. A database was built containing all the State Secretaries of the period together with their party affiliations, according to data from the Superior Electoral Court (Tribunal Superior Eleitoral, TSE). This makes it possible to compare the party composition of the cabinet with the size of the party caucuses in the legislature. To analyze the proportionality of the cabinets, this study uses Amorim Neto's (2000) Coalescence Rate and the G Index as proposed by Avelino, Biderman and Silva (2011). In addition to the traditional use of the secretaries' party affiliation as a proxy for identifying a political element in the cabinet, a new criterion is proposed and applied in parallel, which takes party affiliation together with a previous candidacy as indicating a political secretary. The two criteria yield different results, and the fact that most of the cabinets formed were not majoritarian suggests that in Espírito Santo the distribution of posts in the top tier of government is not the main bargaining chip in agreements between the Executive and the Legislature.
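For reference, Amorim Neto's coalescence rate is standardly written as follows (sketched from the usual formulation; the abstract itself does not reproduce it):

\mathrm{Coalescence} = 1 - \frac{1}{2} \sum_{i} \left| M_i - S_i \right|

where M_i is the share of cabinet posts held by party i and S_i is party i's share of the legislative seats of the parties joining the cabinet; a value of 1 indicates a perfectly proportional cabinet.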
Abstract:
This research studies the sintering of ferritic steel chips from a machining process. Metal powder obtained from chips produced by face milling of a ferritic steel was sintered. The chips were produced by machining and characterized by SEM and EDS, and then underwent high-energy milling; the resulting powder was also characterized by SEM and EDS. Three types of dies for uniaxial compaction were constructed (l/d ratio greater than 2.5). The dies differed essentially in the direction of load application: axial for the cylindrical die, and along the longer side for the rectangular dies. Two samples with different geometries, one cylindrical and one rectangular, were compacted at the same pressure of 700 MPa. The samples were sintered in a resistive vacuum furnace with a heating rate of 20 °C/min, an isotherm at 1300 °C for 60 minutes, and a cooling rate of 25 °C/min down to room temperature. The starting material of the rectangular sample was additionally annealed at 800 °C for 30 min. The sintered samples were characterized by scanning electron microscopy, optical microscopy, and EDS. The sample compacted in the cylindrical die did not show a uniform density, which was reflected in the sintered microstructure by the irregular geometry of the pores, indicating that sintering was incomplete, reaching only its second stage. As for the specimen compacted in the rectangular die, the analyses performed by scanning electron microscopy, optical microscopy, and EDS indicate good densification and a homogeneous microstructure throughout. Additionally, the EDS analyses indicate no significant changes in chemical composition across the process steps. Therefore, it is concluded that recycling chips from machined ferritic steel is feasible via powder metallurgy. This makes it possible to save raw material and energy by manufacturing components of known properties from the chips generated by the machining process, with benefits to the environment.
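From the schedule quoted above, the total furnace time per run can be estimated (assuming a 25 °C starting temperature, which the abstract does not state):

t_{\text{heat}} = \frac{1300 - 25}{20} \approx 64\ \text{min}, \qquad t_{\text{hold}} = 60\ \text{min}, \qquad t_{\text{cool}} = \frac{1300 - 25}{25} = 51\ \text{min}

for a total of roughly 175 min (about 3 h) per sintering cycle.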
Abstract:
In this work, we developed a computer simulation program for the physics of porous structures, based on the C++ programming language and a GeForce 9600 GT with the PhysX chip, originally developed for video games. With this tool, the capacity for physical interaction between simulated objects is enlarged, allowing the simulation of porous structures, for example reservoir rocks, and of structures with high density. The initial procedure in developing the simulation is the construction of a cubic porous structure consisting of spheres, either of a single size or of varying sizes. In addition, structures with various volume fractions can also be simulated. The results presented are divided into two parts: in the first, the spheres are treated as solid grains, i.e., the matrix phase represents the porosity; in the second, the spheres are considered as pores, in which case the matrix phase represents the solid phase. The simulations in both cases are the same, but the simulated structures are intrinsically different. To validate the results produced by the program, simulations were performed varying the number of grains, the grain size distribution, and the void fraction of the structure. All results proved statistically reliable and consistent with those presented in the literature. The mean values and distributions of the stereological parameters measured, such as the linear intercept, the perimeter and area of the sections, and the mean free path, agree with the results reported in the literature for the simulated structures. These results may help the understanding of real structures.
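As a hedged sketch of the validation idea (the packing parameters below are invented; the actual program builds the packing with PhysX rigid-body dynamics in C++), the void fraction of a sphere-filled cube can be estimated by Monte Carlo point sampling:

# Monte Carlo estimate of the solid and void fractions of a cubic structure
# filled with spheres. Sphere count, radius, and box size are placeholders.
import numpy as np

rng = np.random.default_rng(0)
box, radius, n_spheres = 1.0, 0.08, 120
centers = rng.uniform(radius, box - radius, size=(n_spheres, 3))

points = rng.uniform(0.0, box, size=(100_000, 3))   # random probe points
inside = np.zeros(len(points), dtype=bool)
for c in centers:                                   # a probe point is "solid"
    inside |= ((points - c) ** 2).sum(axis=1) <= radius**2   # if inside any sphere

solid = inside.mean()
print(f"solid fraction ~ {solid:.3f}, porosity ~ {1 - solid:.3f}")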