948 results for atom chip


Relevance:

10.00%

Publisher:

Abstract:

Specific glycosphingolipid antigens of Leishmania (L.) amazonensis amastigotes reactive with the monoclonal antibodies (MoAbs) ST-3, ST-4 and ST-5 were isolated, and their structure was partially elucidated by negative-ion fast atom bombardment mass spectrometry. The glycan moieties of five antigens presented linear sequences of hexoses and N-acetylhexosamines ranging from four to six sugar residues, and the ceramide moieties were found to be composed of a sphingosine d18:1 and fatty acids 24:1 or 16:0. The affinities of the three monoclonal antibodies for the amastigote glycosphingolipid antigens were also analyzed by ELISA. MoAb ST-3 reacted equally well with all glycosphingolipid antigens tested, whereas ST-4 and ST-5 presented higher affinities for glycosphingolipids with longer carbohydrate chains of five or more sugar units (slow-migrating bands on HPTLC). Macrophages isolated from footpad lesions of BALB/c mice infected with Leishmania (L.) amazonensis were incubated with MoAb ST-3 and, by indirect immunofluorescence, labeling was detected only on the parasite, whereas no fluorescence was observed on the surface of the infected macrophages, indicating that these glycosphingolipid antigens are not acquired from the host cell but are synthesized by the amastigote. Intravenous administration of 125I-labeled ST-3 antibody to infected BALB/c mice showed that MoAb ST-3 accumulated significantly in the footpad lesions in comparison to blood and other tissues.

The objective of this work was to study the effects of partial removal of wood hemicelluloses on the properties of kraft pulp. The work was conducted by extracting hemicelluloses (1) by a softwood chip pretreatment process prior to kraft pulping, (2) by alkaline extraction from bleached birch kraft pulp, and (3) by enzymatic treatment, xylanase treatment in particular, of bleached birch kraft pulp. The qualitative and quantitative changes in fibers and paper properties were evaluated. In addition, the applicability of the extraction concepts and of hemicellulose-extracted birch kraft pulp as a raw material in papermaking was evaluated in a pilot-scale papermaking environment. The results showed that each examined hemicellulose extraction method has its characteristic effects on fiber properties, seen as differences in both the physical and chemical nature of the fibers. A prehydrolysis process prior to kraft pulping reduced cooking time and bleaching chemical consumption, and produced fibers with low hemicellulose content that are more susceptible to mechanically induced damage and dislocations. Softwood chip pretreatment for hemicellulose recovery prior to cooking, whether acidic or alkaline, had an impact on the physical properties of the non-refined and refined pulp. In addition, all the pretreated pulps exhibited a slower beating response than the unhydrolyzed reference pulp. Both alkaline extraction and enzymatic (xylanase) treatment of bleached birch kraft pulp fibers resulted in very selective hemicellulose removal, particularly of xylan. Furthermore, these two hemicellulose-extracted birch kraft pulps were used in a pilot-scale papermaking environment in order to evaluate the upscalability of the extraction concepts.
Investigations made using pilot paper machine trials revealed that some amount of alkaline-extracted birch kraft pulp, with a 24.9% reduction in the total amount of xylan, could be used in the papermaking stock as a mixture with non-extracted pulp when producing 75 g/m2 paper. For xylanase-treated fibers there were no reductions in the mechanical properties of the 180 g/m2 paper produced compared to paper made from the control pulp, even though the xylanase-treated pulp had a 14.2% reduction in the total amount of xylan compared to the control birch kraft pulp. This work emphasized the importance of the hemicellulose extraction method in providing new solutions to create functional fibers and a valuable hemicellulose co-product stream. The hemicellulose removal concept therefore plays an important role in the integrated forest biorefinery scenario, where the target is the co-production of hemicellulose-extracted pulp and hemicellulose-based chemicals or fuels.

In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis.
To provide a sufficiently wide-ranging survey, six prototype chips with varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) which comprises thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer spiking neural network (SNN), where the adaptive properties of FGT are taken advantage of. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element - memristor - is reviewed and its applicability for analog processing is considered. Furthermore, it is discussed how the FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
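The spike-timing-dependent plasticity mentioned above can be sketched in software as the standard pairwise STDP rule; the amplitudes and time constant below are illustrative placeholders, not values from the chip described in the thesis.

```python
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Pairwise STDP weight update (illustrative constants).

    dt_ms = t_post - t_pre. A presynaptic spike shortly before a
    postsynaptic one (dt_ms > 0) potentiates the synapse; the reverse
    order depresses it. Both effects decay exponentially with |dt_ms|.
    """
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)
```

In an FGT synapse the sign and magnitude of such an update would map onto charge added to or removed from the floating gate; the function above only captures the timing dependence.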

Graphene is a material with extraordinary properties. Its mechanical and electrical properties are unparalleled, but the difficulties in its production are hindering its breakthrough in applications. Graphene is a two-dimensional material made entirely of carbon atoms, and it is only a single atom thick. In this work, the properties of graphene and graphene-based materials are described, together with their common preparation techniques and related challenges. This Thesis concentrates on top-down techniques, in which natural graphite is used as a precursor for graphene production. Graphite consists of graphene sheets stacked tightly together. In the top-down techniques, various physical or chemical routes are used to overcome the forces keeping the graphene sheets together, and many of them are described in the Thesis. The most common chemical method is the oxidation of graphite with strong oxidants, which creates water-soluble graphene oxide. The properties of graphene oxide differ significantly from those of pristine graphene and, therefore, graphene oxide is often reduced to form materials collectively known as reduced graphene oxide. In the experimental part, the main focus is on the chemical and electrochemical reduction of graphene oxide. A novel chemical route using vanadium is introduced and compared to other common chemical graphene oxide reduction methods. A strong emphasis is placed on the electrochemical reduction of graphene oxide in various solvents. Raman and infrared spectroscopy are both used in in situ spectroelectrochemistry to closely monitor the spectral changes during the reduction process. These in situ techniques allow precise control over the reduction process, and even small changes in the material can be detected. Graphene and few-layer graphene were also prepared using physical force to separate these materials from graphite.
Special adsorbate molecules in aqueous solutions, together with sonic treatment, produce stable dispersions of graphene and few-layer graphene sheets in water. This mechanical exfoliation method damages the graphene sheets considerably less than the chemical methods, although it suffers from a lower yield.

The interaction of the product of the reaction of (PhSe)2 with H2O2 with delta-aminolevulinate dehydratase (delta-ALA-D) from mammals and plants was investigated. (PhSe)2 inhibited rat hepatic delta-ALA-D with an IC50 of 10 µM but not the enzyme from cucumber leaves. The reaction of (PhSe)2 with H2O2 for 1 h increased the inhibitory potency of the original compound, and the IC50 for animal delta-ALA-D inhibition decreased from 10 to 2 µM. delta-ALA-D from cucumber leaves was also inhibited by the products of the reaction of (PhSe)2 with H2O2, with an IC50 of 4 µM. The major product of the reaction of (PhSe)2 with H2O2 was identified as seleninic acid, which produced an intermediate with a λmax at 265 nm after reaction with t-BuSH. These results suggest that the interaction of (PhSe)2 with mammalian delta-ALA-D requires the presence of cysteinyl residues in close proximity. Two cysteine residues in spatial proximity have recently been described for the mammalian enzyme. Analysis of the primary structure of plant delta-ALA-D did not reveal an analogous site. In contrast to (PhSe)2, seleninic acid, as a result of the more electrophilic nature of its selenium atom, may react with additional cysteinyl residue(s) in mammalian delta-ALA-D and also with cysteinyl residues from cucumber leaves located at a site distinct from the B and A sites found in mammals. Although the interaction of organochalcogens with H2O2 may have some antioxidant properties, the formation of seleninic acid as a product of this reaction may increase the toxicity of organic chalcogens such as (PhSe)2.

This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize the on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments.
Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.

The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will undergo a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance and an opportunity to install new detectors. After the shutdown, the LHC will have a higher luminosity. A promising new type of detector for this high-luminosity phase is the Triple-GEM detector. During the shutdown these detectors will be installed at the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are now being developed at CERN, alongside a readout ASIC for the detector. In this thesis, a simulation model was developed for the ASIC's analog front end. The model will help to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was tested with simulations, which are also presented in the thesis.

The interpretation of oligonucleotide array experiments depends on the quality of the target cRNA used. cRNA target quality is assessed by quantitative analysis of the representation of 5' and 3' sequences of control genes using commercially available Test arrays. The Test array provides an economically priced means of determining the quality of labeled target prior to analysis on whole genome expression arrays. This manuscript validates the use of a duplex RT-PCR assay as a faster (6 h) and less expensive alternative. Samples with a β-actin 3'/5' array ratio >6 were chosen and classified as degraded cRNAs, and 31 samples with a β-actin 3'/5' ratio <6 were selected as good quality cRNAs. Blinded samples were then used for the RT-PCR assay. After gel electrophoresis, optical densities of the amplified 3' and 5' fragments of β-actin were measured and the 3'/5' ratio was calculated. There was a strong correlation (r² = 0.6802) between the array and the RT-PCR β-actin 3'/5' ratios. Moreover, the RT-PCR 3'/5' ratio was significantly different (P < 0.0001) between undegraded (mean ± SD, 0.34 ± 0.09) and degraded (1.71 ± 0.83) samples. None of the other parameters analyzed, such as i) the starting amount of RNA, ii) RNA quality assessed using the Bioanalyzer Chip technology, or iii) the concentration and OD260/OD280 ratio of the purified biotinylated cRNA, correlated with cRNA quality.
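The ratio criterion described above reduces to a few lines of code. The function names below are hypothetical helpers; the >6 cut-off is the array-based classification threshold quoted in the abstract.

```python
def beta_actin_ratio(od_3prime, od_5prime):
    """3'/5' ratio from the optical densities of the amplified fragments."""
    return od_3prime / od_5prime

def classify_cRNA(array_ratio, threshold=6.0):
    """Label a sample by its array beta-actin 3'/5' ratio.

    Samples above the threshold are classified as degraded, as in the
    abstract; this function is an illustrative sketch, not the assay.
    """
    return "degraded" if array_ratio > threshold else "good quality"
```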

Feature extraction is the part of pattern recognition in which the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in the case of image analysis, the actual image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a central role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented.
Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
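As a concrete illustration of the descriptor underlying this work, here is a minimal 8-neighbour LBP in NumPy. This is the generic textbook formulation, not the MIPA4k implementation; the final assertion demonstrates the invariance to additive illumination change mentioned above.

```python
import numpy as np

def lbp8(image):
    """Basic 8-neighbour Local Binary Pattern for a grayscale image.

    Each interior pixel is compared against its 8 neighbours; a neighbour
    that is >= the centre contributes a 1-bit, and the 8 bits are packed
    into a code in [0, 255]. Border pixels are dropped.
    """
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # Clockwise neighbour offsets starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

Because only the ordering of intensities matters, adding a constant offset to the whole image leaves every code unchanged, which is the illumination-invariance property exploited by LBP-based systems.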

This work investigated quality changes in wood fuel stored at a fuel terminal. The study examined changes in the moisture content and heating value of unchipped delimbed stemwood and of stemwood chips. Dry-matter losses were also reviewed on the basis of earlier studies. The research material was collected from the fuel terminals of Etelä-Savon Energia. Moisture contents were measured with a Hydromette M2050 rapid moisture meter and by the oven-drying method according to the standard SFS-EN 14774. The rapid moisture meter was found to perform at a sufficiently reliable level for stemwood measurements, but it proved unworkable for measuring chips. Storage time did not affect the heating values of the fuels, but moisture content varied greatly. The results indicated that stemwood dries in terminal storage while the moisture content of chips remains constant. In terms of energy content, stemwood can be stored for over two years, but the storage time of chips should be kept as short as possible.

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to design agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach, where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models into, e.g., a hardware description language, namely VHDL.

The original contribution of this thesis is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% at half the output rate of a bus-based system. The network-based solution avoids “broken” columns due to manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures. An improvement of > 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. The architectural design has been done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time. It has been possible to simulate tens of column and full-chip architectures using the high-level techniques. A more than tenfold decrease in run-time is observed using these techniques compared to the register transfer level (RTL) design approach. A 50% reduction in lines of code (LoC) has been achieved for the high-level models compared to the RTL description. Two architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, has been designed for the Medipix3 collaboration. According to the measurements, it consumes < 1 W/cm^2. It also delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) with 1.5625 ns binning. The chip uses a token-arbitrated, asynchronous two-phase-handshake column bus for internal data transfer. It has also been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN.
Based on the simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end-of-column (EoC) logic. By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).

Rough turning is an important process for manufacturing cylinder-symmetric parts. Thus far, increasing the level of automation in rough turning has relied on process monitoring methods or adaptive turning control methods that aim to keep the process conditions constant. However, in order to improve process safety, quality and efficiency, adaptive turning control should be transformed into an intelligent machining system that optimizes cutting values to match process conditions, or that actively seeks to improve process conditions. In this study, primary and secondary chatter and chip formation are studied to understand how to measure the effect of these phenomena on the process conditions and how to avoid undesired cutting conditions. The concept of cutting state is used to address the combination of these phenomena and the current use of the power capacity of the lathe. The measures of these phenomena are not developed from physical quantities; instead, their severity is modelled against expert opinion. Based on the concept of cutting state, an expert-system-style fuzzy control system capable of optimizing the cutting process was created. Important aspects of the system include the capability to adapt to several cutting phenomena appearing at once, even if those phenomena would potentially require conflicting control actions.
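An expert-system-style fuzzy mapping of this kind can be sketched as follows. The membership ranges, rule outputs, and the single "chatter severity" input are hypothetical simplifications of the multi-phenomenon cutting state used in the thesis, intended only to show the mechanism.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def feed_adjustment(chatter_severity):
    """Map a chatter-severity score in [0, 1] to a feed-rate scaling factor.

    Three rules fire to varying degrees and are combined by a weighted
    average of their output singletons (keep feed, trim it, back off).
    """
    low = tri(chatter_severity, -0.5, 0.0, 0.5)
    med = tri(chatter_severity, 0.0, 0.5, 1.0)
    high = tri(chatter_severity, 0.5, 1.0, 1.5)
    num = low * 1.0 + med * 0.8 + high * 0.5
    den = low + med + high
    return num / den if den else 1.0
```

Conflicting control actions are handled naturally here: when two rules fire at once, the defuzzified output is a compromise weighted by how strongly each condition holds.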

Carbon nanoconductors are electrical conductors manufactured using carbon nanotubes, i.e. structures consisting of a carbon-atom lattice one atomic layer thick. Carbon nanotubes have attracted great interest in recent years owing to their excellent physical properties. The aim of this work is to determine whether the electrical conductivity of carbon nanoconductors could be raised to a sufficient level for them to replace the copper conductors used today. Although copper has excellent conductivity, its use has drawbacks, such as high price, current crowding, high density, and poor mechanical durability. Carbon nanoconductors could be one element in the development of new energy-efficient and environmentally friendly devices for the needs of modern society. Based on the results of this work, the electrical conductivity of current carbon nanoconductors is still too low for large-scale use. However, conductivity has increased steadily in recent years. Through further development, ever more of the potential of the carbon material is being harnessed, and over time carbon conductors may become a serious competitor to traditional conductor materials. Carbon nanoconductors will probably first become common in applications where their other properties complement their electrical conductivity well.

Oxysterols are 27-carbon atom molecules resulting from autoxidation or enzymatic oxidation of cholesterol. They are present in numerous foodstuffs and have been demonstrated to be present at increased levels in the plasma of patients with cardiovascular diseases and in atherosclerotic lesions. Thus, their role in lipid disorders is widely suspected, and they might also be involved in important degenerative diseases such as Alzheimer's disease, osteoporosis, and age-related macular degeneration. Since atherosclerosis is associated with the presence of apoptotic cells and with oxidative and inflammatory processes, the ability of some oxysterols, especially 7-ketocholesterol and 7β-hydroxycholesterol, to trigger cell death, activate inflammation, and modulate lipid homeostasis is being extensively studied, especially in vitro. Thus, since there are a number of essential considerations regarding the physiological/pathophysiological functions and activities of the different oxysterols, it is important to determine their biological activities and identify their signaling pathways, when they are used either alone or as mixtures. Oxysterols may have cytotoxic, oxidative, and/or inflammatory effects, or none whatsoever. Moreover, a substantial accumulation of polar lipids in cytoplasmic multilamellar structures has been observed with cytotoxic oxysterols, suggesting that cytotoxic oxysterols are potent inducers of phospholipidosis. This basic knowledge about oxysterols contributes to a better understanding of the associated pathologies and may lead to new treatments and new drugs. Since oxysterols have a number of biological activities, and as oxysterol-induced cell death is assumed to take part in degenerative pathologies, the present review will focus on the cytotoxic activities of these compounds, the corresponding cell death signaling pathways, and associated events (oxidation, inflammation, and phospholipidosis).