905 results for "lab on a chip"
Abstract:
The study reported here is part of a larger project evaluating the Thermo-Chemical Accumulator (TCA), a technology under development by the Swedish company ClimateWell AB. The studies concentrate on the use of the technology for comfort cooling, and this report focuses on laboratory measurements, modelling, and system simulation. The TCA is a three-phase absorption heat pump that stores energy in the form of crystallised salt, in this case lithium chloride (LiCl), with water as the other substance. The process requires vacuum conditions, as in standard absorption chillers using LiBr/water. Measurements were carried out in the laboratories of the Solar Energy Research Center SERC at Högskolan Dalarna, as well as at ClimateWell AB. The measurements at SERC were performed on prototype version 7:1 and showed that this prototype had several problems resulting in poor and unreliable performance. The main findings were that significant corrosion led to non-condensable gases, which in turn caused very poor performance; unwanted crystallisation caused blockages as well as inconsistent behaviour; and poor wetting of the heat exchangers resulted in relatively high temperature drops there. A measured thermal COP for cooling of 0.46 was found, which is significantly lower than the theoretical value. These findings led to a thorough redesign for the new prototype, called ClimateWell 10 (CW10), which was tested briefly by the authors at ClimateWell. The data collected were limited, but sufficient to show that the machine worked consistently with no noticeable vacuum problems. They were also sufficient for identifying the main parameters of a simulation model developed for the TRNSYS simulation environment, though not for verifying the model properly. This model was shown to be able to simulate both the dynamic and the static performance of the CW10, and was then used in a series of system simulations.
A single system model was developed as the basis of the system simulations, consisting of a CW10 machine, 30 m2 of flat-plate solar collectors with a backup boiler, and an office in Stockholm with a design cooling load of 50 W/m2, giving a 7.5 kW design load for the 150 m2 floor area. Two base cases were defined on this basis: one for Stockholm using a dry cooler with a design cooling rate of 30 kW, and one for Madrid with a cooling tower with a design cooling rate of 34 kW. A number of parametric studies were performed based on these two base cases. These showed that the temperature lift is a limiting factor for cooling at higher ambient temperatures and for charging with a fixed-temperature source such as district heating. The simulated evacuated tube collector performs only marginally better than a good flat-plate collector when gross area is considered, the margin being greater for larger solar fractions. For 30 m2 of collector, solar fractions of 49% and 67% were achieved for the Stockholm and Madrid base cases respectively. The average annual efficiency of the collector in Stockholm (12%) was much lower than that in Madrid (19%). The thermal COP was simulated to be approximately 0.70, but it has not been possible to verify this against measured data. The annual electrical COP was shown to be very dependent on the cooling load, as a large proportion of the electricity use is for components that are permanently on. For the cooling loads studied, the annual electrical COP ranged from 2.2 for a 2000 kWh cooling load to 18.0 for a 21000 kWh cooling load. There is, however, potential to reduce the electricity consumption of the machine, which would improve these figures significantly. It was shown that a cooling tower is necessary for the Madrid climate, whereas a dry cooler is sufficient for Stockholm, although a cooling tower does improve performance. The simulation study was only a first exploration and has identified a number of areas that are important to study in more depth.
One such area is advanced control strategies, which are necessary to mitigate the technology's weakness (low temperature lift for cooling) and to make optimal use of its strength (storage).
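The strong dependence of the annual electrical COP on cooling load follows directly from the fixed electricity draw of the always-on components. The sketch below illustrates that relationship; the fixed power and the per-kWh slope are assumed values chosen only to reproduce the order of magnitude quoted above, not parameters from the report.

```python
# Illustrative sketch (not the report's model) of why annual electrical COP
# rises with cooling load when part of the electricity use is a fixed,
# always-on demand. Parameter values below are assumptions for illustration.

HOURS_PER_YEAR = 8760

def annual_electrical_cop(cooling_kwh,
                          fixed_power_kw=0.1,          # assumed always-on draw
                          kwh_elec_per_kwh_cool=0.0136):  # assumed variable share
    """Annual electrical COP = cooling delivered / total electricity used."""
    fixed_kwh = fixed_power_kw * HOURS_PER_YEAR           # permanently-on components
    variable_kwh = kwh_elec_per_kwh_cool * cooling_kwh    # load-dependent share
    return cooling_kwh / (fixed_kwh + variable_kwh)

for load in (2000, 21000):
    print(load, round(annual_electrical_cop(load), 1))
```

With these assumed parameters the model spans roughly the 2.2 to 18 range reported, showing how the fixed demand dominates at small cooling loads.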
Abstract:
In a natural experiment, this paper studies the impact of an informal sanctioning mechanism on individuals' voluntary contributions to a public good. Cross-country skiers' actual cash contributions at two ski resorts, one with and one without an informal sanctioning system, are used. I find the contributing share to be higher under the informal sanctioning system (79 percent) than under the non-sanctioning system (36 percent). Previous studies of one-shot public good situations have found an increasing conditional contribution (CC) function, i.e. the relationship between the expected average contribution of other group members and the individual's own contribution. In contrast, the present results suggest that the CC function in the non-sanctioning system is non-increasing at high perceived levels of others' contributions. This relationship deserves further testing in the lab.
Abstract:
This shows a view of the Carpentry Lab at the New York Trade School. Students can be seen working on a number of different projects. Black and white photograph.
Abstract:
With the advent of submicron processes, transistor integration capacity has reached levels that make it possible to build a complete system on a single silicon die. These systems, called integrated systems, are based on the reuse of previously designed and verified blocks, known as cores or intellectual property (IP) blocks. Current integrated systems include a few dozen cores, interconnected through communication architectures based on dedicated point-to-point channel structures or on reusable structures built from multipoint channels, called buses. Future integrated systems will include tens to hundreds of cores on a single chip with up to a few billion transistors; to meet market pressures and amortise design costs across several systems, it is important that all of their components be reusable, including the communication architecture. Of the architectures in use today, the bus is the only one that offers reusability; however, its communication performance and energy consumption degrade as the system grows. To meet the requirements of future integrated systems, a new communication architecture has been proposed in the academic community. This architecture, called a network-on-chip, is based on the concepts used in interconnection networks for parallel computers. This thesis is situated in that context and presents a network-on-chip architecture and a set of models for evaluating the area and performance of communication architectures for integrated systems. The architecture presented is called SoCIN (System-on-Chip Interconnection Network); its distinguishing feature is that it can be dimensioned to meet the cost and performance requirements of the target application.
The models developed allow high-level estimation of the silicon area and performance of bus-based and network-on-chip communication architectures. Results are presented that demonstrate the effectiveness of networks-on-chip and indicate the conditions under which they are applicable.
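Networks-on-chip of the kind discussed above are commonly laid out as a 2D mesh of routers with deterministic XY routing. The sketch below is a generic illustration of that routing discipline, not SoCIN's actual implementation.

```python
# Minimal sketch of deterministic XY routing in a 2D-mesh network-on-chip.
# This is a generic illustration of the concept, not the SoCIN design itself.

def xy_route(src, dst):
    """Return the list of (x, y) router hops from src to dst:
    travel along the X dimension first, then along Y.
    Dimension-ordered routing of this kind is deadlock-free on a mesh."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                      # X dimension first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                      # then Y dimension
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))  # → [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Because every packet crosses the X dimension before the Y dimension, no cyclic channel dependency can form, which is what makes this simple scheme attractive for low-cost routers.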
Abstract:
Recent fabrication technologies make it possible to integrate millions of transistors on a single chip, enabling the creation of so-called Systems-on-Chip (SoCs), which integrate a large number of components (typically reusable blocks known as cores) on a single chip. The more complex these systems become, the better the design techniques needed to reduce design time and cost. One such technique, called Network-on-Chip (NoC), improves the performance of communication among the cores while providing a scalable communication platform that can be reused across a large number of systems. A NoC can be defined as a structure of routers and point-to-point channels that interconnects the cores of a system, providing communication support among them. Data are transmitted over the network as messages, which can be divided into smaller units called packets. One of the drawbacks of this communication platform is the impact of the routers on system area. In this context, this work presents a low-cost router architecture aimed at enabling the use of NoCs in systems where router area would have a large impact on system cost. This router, called Tonga, is based on a router called RASoC, a soft core for SoCs. This dissertation also presents a heterogeneous network, based on the SoCIN network and composed of two types of routers, RASoC and Tonga. These routers target different goals: RASoC achieves higher performance than Tonga, but occupies considerably more area. Potentially, an optimised heterogeneous NoC can be developed by combining these routers, seeking the best trade-off between area and latency.
The models developed allow estimation of the area and performance of the proposed communication architectures, and performance results are presented for several applications.
Abstract:
Previous investigations of schadenfreude have concentrated on the factors that elicit pleasure at another's misfortune. The present research aims to investigate the impact of schadenfreude on decision making. Two studies (one in the laboratory and one in the field) address the impact of schadenfreude on past and future decisions at sporting events. The first study pits feelings of pride in a favourite team's victory against schadenfreude at a rival team's loss. The results showed that people preferred to send news about the favourite team's victory (pride) rather than the rival team's loss (schadenfreude) when the score differences in the game were small (for example, favourite team 1 x 0 other, versus rival team 0 x 1 favourite). However, people were more likely to make the schadenfreude choice (for example, choosing to send news about a rival team's defeat) when the score was lopsided (for example, favourite team 5 x 0 rival, versus rival team 0 x 5 favourite). The second study, in the field, examines how schadenfreude influences willingness to bet against a rival team. To address this question, each participant's team preference is assessed (participants who support the target team versus those who support the rival). A praise manipulation is added, such that consumers either do or do not see praise of the target team while they are placing a bet on the outcome of the match. The results show that fans of the target team were not influenced by the praise manipulation. However, fans of the rival team increased their likelihood of betting against the target team (i.e., showed schadenfreude-driven behaviour) when it was praised before the game.
Abstract:
Though many of those who decided to report wrongdoing in their organizations have been able to tell their stories (e.g. Bamford, 2014; Armenakis, 2004), it is fair to say that there is still much left to uncover. This paper aims to contribute to the literature in three ways. First, it provides preliminary evidence that wrongdoing linked with individual financial loss leads to a higher whistleblowing rate. Second, it shows that the experience of anger is related to a higher likelihood of reporting the wrongdoer, but only if the wrongful act is perceived as the cause of one's financial loss. Finally, the paper establishes the first steps toward the future development of an experimental procedure that would make it possible to predict and measure whistleblowing behavior in the lab environment.
Abstract:
The number of applications based on embedded systems grows significantly every year. Even though embedded systems have restrictions and simple processing units, their performance has improved steadily. However, as the complexity of applications also increases, better performance will always be necessary. Even with such advances, there are cases in which an embedded system with a single processing unit is not sufficient to process information in real time. To improve the performance of these systems, parallel processing can be employed in more complex applications that require high performance. The idea is to move beyond applications that already use embedded systems, exploring the use of a set of processing units working together to implement an intelligent algorithm. There is a wide range of existing work in the areas of parallel processing, intelligent systems, and embedded systems; however, work linking these three areas to solve a problem is scarce. In this context, this work used tools available for FPGA architectures to develop a multiprocessor platform for pattern classification with artificial neural networks.
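The basic computational unit that such a platform parallelises is the artificial neuron. The sketch below shows a single step-activated neuron classifying a toy pattern; the weights and data are illustrative and unrelated to the networks used in the thesis.

```python
# Minimal sketch of the pattern-classification kernel behind an artificial
# neural network: a single step-activated neuron. Weights and inputs are
# invented for illustration; this is not the thesis's implementation.

def neuron(inputs, weights, bias):
    """Fire 1 if the weighted sum of inputs plus bias is positive, else 0."""
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# The logical AND function realised with fixed weights and a threshold
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [1.0, 1.0], -1.5))
```

In a multiprocessor realisation, each processing unit would typically evaluate a subset of the neurons of a layer in parallel, since the neurons of one layer are independent of each other.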
Abstract:
The worldwide trend is toward higher productivity and the production of increasingly sophisticated parts, with tight geometric and dimensional tolerances, good surface finish, and low cost. Grinding is responsible for the final finish in the machining of a material; however, damage generated in this production phase wastes all the resources used in the previous processes. A large part of the problems occurring in the grinding process is due to the high temperatures generated, which depend on the machining conditions. The plunge speed, which is directly related to productivity, is considered responsible for the damage that occurs during grinding, which limits it to values that do not cause such damage. In this work, by varying the plunge speed in the cylindrical plunge grinding of ABNT D6 steel, rationalising the application of two cutting fluids and using a CBN (cubic boron nitride) abrasive wheel with vitrified bond, the influence of plunge speed on surface damage in hardened steels was evaluated. The results showed that the plunge speed, combined with efficient cooling and lubrication, did not cause thermal damage (including heat-affected zones, cracks, and tensile residual stresses) to the material. The residual stresses and roughness of the ground parts correlated with the machining conditions. The work concluded that it is possible to increase productivity without causing damage to ground components.
Abstract:
This work measured the effect of milling parameters on the surface integrity of a low-carbon alloy steel. The analysis of variance (ANOVA) showed that only the depth of cut did not influence workpiece roughness, and Pearson's coefficient indicated that cutting speed was more influential than tool feed. All cutting parameters introduced tensile residual stress into the workpiece surface. The chip formation mechanism depended especially on cutting speed and influenced the roughness and residual stress of the workpiece.
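The statistic named above, Pearson's correlation coefficient, measures the strength of the linear relationship between a cutting parameter and a response such as roughness. The sketch below computes it from first principles on invented data points, not the study's measurements.

```python
# Hedged illustration of Pearson's correlation coefficient, the statistic the
# abstract uses to rank cutting speed against tool feed. The data points are
# invented for illustration, not the study's measurements.
import math

def pearson_r(xs, ys):
    """Pearson's r = covariance(x, y) / (std(x) * std(y)), in [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cutting_speed = [40, 60, 80, 100]        # m/min (assumed values)
roughness     = [0.9, 0.7, 0.55, 0.45]   # Ra in micrometres (assumed values)
print(round(pearson_r(cutting_speed, roughness), 3))  # → -0.989
```

An r close to -1, as here, would indicate a strong inverse linear relationship; comparing |r| across parameters is what lets one parameter be called "more influential" than another.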
Abstract:
Multiprocessor and multicore architectures are widely expected to form the next generation of computers. In this scenario there are challenges related to interconnection, operating frequency, on-chip area, power dissipation, performance, and programmability. Networks-on-chip are considered the ideal interconnection and communication mechanism for this type of architecture, owing to their scalability, reusability, and intrinsic parallelism. In a network-on-chip, communication is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted in a pipelined fashion between the routers of the network, from the source to the destination of the communication, allowing simultaneous communications between different source-destination pairs. Building on this observation, this work proposes transforming the entire communication infrastructure of a network-on-chip, using its routing, arbitration, and storage mechanisms, into a high-performance parallel processing system. In this proposal, packets are formed by the instructions and data that represent applications, and these instructions are executed by the routers as the packets are transmitted, exploiting the pipelined and parallel communication. Traditional processors are not used; there are only simple cores that control access to memory. An implementation of this idea, called IPNoSys (Integrated Processing NoC System), has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets while preventing deadlock, livelock, and starvation. The architecture provides mechanisms for input and output, interrupts, and operating system support.
As a proof of concept, a programming environment and a simulator for this architecture were developed in SystemC; they allow various parameters to be configured and produce a range of results for evaluating the architecture.
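The central idea above, a packet that carries its own instructions and is executed hop by hop as it is routed, can be sketched in a few lines. This toy model is only an illustration of the concept, not the IPNoSys instruction set or router microarchitecture.

```python
# Toy illustration (not the IPNoSys ISA) of executing a packet's instructions
# while routing it: each router on the path consumes one instruction from the
# packet and forwards the partial result toward the destination.

def run_packet(instructions, value, n_routers):
    """Each hop executes one (opcode, operand) pair carried by the packet;
    the packet keeps traversing routers until all instructions are done."""
    ops = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}
    hop = 0
    while instructions:
        router = hop % n_routers            # router visited at this hop
        op, operand = instructions.pop(0)   # instruction executed here
        value = ops[op](value, operand)
        hop += 1
    return value

# ((3 + 4) * 5) + 1 computed across a path of 4 routers
print(run_packet([("ADD", 4), ("MUL", 5), ("ADD", 1)], 3, 4))  # → 36
```

The point of the scheme is that computation costs no extra time beyond the communication that would happen anyway: each instruction is retired during the hop that transports it.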
Abstract:
The increased capacity to integrate transistors has made it possible to develop complete systems, with several components, on a single chip; these are called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as with buses, or may be an ad hoc solution, as with bus hierarchies. The interconnection subsystem best suited to SoCs is therefore the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other designs. However, NoCs can increase design complexity, on-chip area, and dissipated power, so it is necessary either to change the way they are used or to change the development paradigm. Thus, a NoC-based system is proposed in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC's dimensions, the spiral complement algorithm was developed, which selects further destinations until all instructions have been executed. The objective is therefore to study the feasibility of developing this system, called the IPNoSys system. In this study, a cycle-accurate tool was developed in SystemC to simulate the system executing applications, which were implemented in a packet description language also developed for this study. Through the simulation tool, a range of results were obtained for evaluating system performance. The methodology used to describe an application transforms the high-level application into a data-flow graph, which becomes one or more packets. This methodology was applied to three applications: a counter, a 2-D DCT, and floating-point addition. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison with the STORM platform.
Finally, the floating-point addition was used to evaluate the efficiency of a software routine executing an instruction not implemented in hardware. The simulation results confirm the feasibility of developing the IPNoSys system. They show that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and that IPNoSys execution times are shorter than those of the STORM platform.
Abstract:
The increasing complexity of integrated circuits has boosted the development of communication architectures such as Networks-on-Chip (NoCs) as an alternative architecture for interconnecting Systems-on-Chip (SoCs). Networks-on-chip offer component reuse, parallelism, and scalability, enhancing reusability in designs for dedicated applications. Many proposals in the literature suggest different configurations for network-on-chip architectures. Among them, the IPNoSys architecture is unconventional, since it executes operations while communication is being performed. This study evaluates the execution of data-flow-based applications on IPNoSys, focusing on their adaptation to the design constraints. Data-flow-based applications are characterised by a continuous stream of data on which operations are executed. These applications are expected to benefit from running on IPNoSys, because their programming model is similar to the network's execution model. By observing the behaviour of these applications running on IPNoSys, changes were made to the IPNoSys execution model, enabling a form of instruction-level parallelism. For these purposes, implementations of data-flow applications were analysed and compared.
Abstract:
Alongside technological advances, embedded systems are increasingly present in our everyday lives. Owing to the growing demand for functionality, many tasks are split among processors, requiring more efficient communication architectures, such as networks-on-chip (NoCs). NoCs are structures whose routers, interconnected by point-to-point channels, link the cores of a system-on-chip (SoC) and provide communication. Several networks-on-chip exist in the literature, each with its own characteristics. Among these, the Integrated Processing System NoC (IPNoSyS) was chosen for this work as a network-on-chip with characteristics different from general NoCs, because its routing components also perform processing, i.e., they contain functional units able to execute instructions. Under this model, packets are both processed and routed by the router architecture. This work aims at improving the performance of applications that contain loops, since these applications spend most of their execution time repeatedly executing the same instructions. Thus, this work proposes to optimise the execution time of these structures by employing an instruction-level parallelism technique, in order to make better use of the resources offered by the architecture. The applications are tested on a dedicated simulator and the results compared with the original version of the architecture, which implements only packet-level parallelism.
Abstract:
Objectives: The aim of this in vitro study was to assess the effects of saliva substitutes (modified with respect to calcium, phosphates, and fluorides) in combination with a highly concentrated fluoride toothpaste on demineralised dentin. Methods: Before and after demineralisation of bovine dentin specimens (subsurface lesions; 37 °C, pH 5.0, 5 d), one quarter of each specimen's surface was covered with nail varnish (control of sound/demineralised tissue). Subsequently, specimens were exposed to original Saliva natura (saturation with respect to octacalcium phosphate [S(OCP)]: 0.03; SN 0) or to three lab-produced Saliva natura modifications (S(OCP): 1, 2, and 3; SN 1-3) for 2 and 5 weeks (37 °C). An aqueous solution (S(OCP): 2.5) served as positive control (PC). Twice daily (2 min each), a Duraphat toothpaste (5000 ppm F-; Colgate)/saliva substitute slurry (ratio 1:3) was applied gently. Differences in mineral losses (ΔΔZ) and lesion depths (ΔLD) between values before and after exposure were evaluated microradiographically. Results: After both treatment periods, specimens immersed in SN 0 revealed significantly higher mineral losses (lower ΔΔZ values) and lesion depths (lower ΔLD) compared to PC (p < 0.05; ANOVA). After 5 weeks, specimens stored in SN 1 and 2 showed significantly higher mineral losses compared to PC (p < 0.05), while those stored in SN 3 showed similar results (p > 0.05). No differences in mineral loss could be observed between SN 2 and 3 (p > 0.05). Conclusions: Under the conditions of this limited protocol, Saliva natura solutions slightly saturated with respect to OCP, in combination with a highly concentrated fluoride toothpaste, enabled remineralisation of dentin in vitro. Crown Copyright (c) 2009 Published by Elsevier Ltd. All rights reserved.