971 results for Compact embedding
Abstract:
In this work, the feasibility of floating-gate technology for analog computing platforms in scaled-down general-purpose CMOS technologies is considered. When the technology is scaled down, the performance of analog circuits tends to degrade because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e., the lack of reconfigurability, limited reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility and reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. In practice, however, technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN in which the adaptive properties of the FGTs are exploited. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability for analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
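As a point of reference, pair-based STDP adjusts a synaptic weight according to the relative timing of pre- and postsynaptic spikes. The sketch below is a minimal software illustration of that rule only; the parameter names and values (a_plus, a_minus, tau_plus, tau_minus) are illustrative assumptions and do not describe the FGT circuit realization presented in the thesis.

```python
import math

def stdp_weight_update(w, t_pre, t_post,
                       a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0,
                       w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate if the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> long-term potentiation
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:    # post before pre -> long-term depression
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)   # keep the weight bounded

# Example: a presynaptic spike at 10 ms followed by a postsynaptic spike at 15 ms
print(stdp_weight_update(0.5, t_pre=10.0, t_post=15.0))
```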
Abstract:
Identifying low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
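For reference, density ridges are commonly formalized as generalized maxima in the following way; this is the standard definition from the density-ridge literature, and the exact formulation used in the thesis may differ in details.

```latex
% d-dimensional ridge of a density p : R^n -> R (standard definition from the
% density-ridge literature; details may differ from the thesis).
\[
  R_d(p) \;=\; \Bigl\{\, x \in \mathbb{R}^n \;:\;
      v_i(x)^{\top} \nabla p(x) = 0
      \ \text{and}\ \lambda_i(x) < 0
      \quad \text{for } i = d+1,\dots,n \,\Bigr\},
\]
where $\lambda_1(x) \ge \dots \ge \lambda_n(x)$ are the eigenvalues of the
Hessian $\nabla^2 p(x)$ and $v_1(x),\dots,v_n(x)$ the corresponding
orthonormal eigenvectors. For $d = 0$ the condition reduces to the set of
local maxima, which is why ridges are viewed as generalized maxima.
```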
Abstract:
Gasification of biomass is an efficient process for producing liquid fuels, heat and electricity. It is especially interesting for the Nordic countries, where raw material for the process is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications. At elevated temperatures, light hydrocarbons react spontaneously to form higher molecular weight compounds. In this thesis, this phenomenon was studied through a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the measurement time interval used, combined with an industrially relevant temperature range. The aspects covered in the modeling include screening of possible numerical approaches, testing of optimization methods and kinetic modeling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and a better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference model against which two other models were compared. A compact model that included all the observed species was developed. The parameter estimation performed on that model gave a slightly poorer fit to the experimental data than the LLNL model, but the difference was barely significant. The third model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good. Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in thermal reactions.
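To illustrate the kind of parameter-estimation workflow described above (local gradient-based fitting versus an evolutionary algorithm), here is a minimal sketch on an invented two-parameter consecutive reaction with synthetic data. The reaction scheme, rate constants and noise level are assumptions for illustration only and are not the models (LLNL or otherwise) studied in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution, least_squares

# Hypothetical example: fit the rate constants of a consecutive reaction
# A -> B -> C to synthetic concentration data.
def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def simulate(k, t_eval):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [1.0, 0.0, 0.0],
                    t_eval=t_eval, args=tuple(k), rtol=1e-8)
    return sol.y.T                      # concentrations at the measurement times

t_data = np.linspace(0.0, 10.0, 25)
y_data = simulate([0.8, 0.3], t_data)   # synthetic "measurements"
y_data += 0.01 * np.random.default_rng(0).normal(size=y_data.shape)

residuals = lambda k: (simulate(k, t_data) - y_data).ravel()

# Local trust-region least-squares fit (in the spirit of Levenberg-Marquardt)
local = least_squares(residuals, x0=[0.1, 0.1], bounds=([0, 0], [10, 10]))

# Global evolutionary fit of the same sum-of-squares objective
evo = differential_evolution(lambda k: np.sum(residuals(k) ** 2),
                             bounds=[(0, 10), (0, 10)], seed=0)

print("local fit:", local.x, "evolutionary fit:", evo.x)
```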
Abstract:
Coalescence is a phenomenon in which droplets of the dispersed phase tend to form larger droplets until a separable phase is formed. Coalescence takes place in three main stages: approach, attachment and detachment. The mechanisms affecting the approach stage include interception, diffusion, inertial impaction, sedimentation, electrical repulsive forces and van der Waals forces. In the attachment stage, droplets of the dispersed phase displace the liquid film on the medium while wetting the surface of the medium. In the detachment stage, the hydrodynamic force acting on the droplet overcomes the adhesion force between the droplet and the medium. The efficiency of coalescence is affected by several parameters, such as flow velocity, bed properties, medium properties and emulsion properties. All of these must be taken into account in the design of coalescence filtration. Coalescence filtration belongs to the depth filtration methods, which have been in use for over 100 years. Coalescence filtration is an effective method for separating small droplets. The method is used, for example, in the treatment of oily wastewater. The advantages of industrial depth filtration of oil include its compact size, lower operating costs, high separation efficiency, the ability to separate even small droplets, and easy operation, automation and maintenance. The main drawback, however, is clogging of the medium, so the process requires cleaning or replacement of the medium. The purpose of this bachelor's thesis was to compile a literature study on the coalescence filtration of oil. The work surveys the starting points of coalescence filtration, its theory, the most important industrial applications and the filter media.
Abstract:
Bone marrow fibrosis occurs in association with a number of pathological states. Despite the extensive fibrosis that sometimes characterizes renal osteodystrophy, little is known about the factors that contribute to marrow accumulation of fibrous tissue. Because circulating cytokines are elevated in uremia, possibly in response to elevated parathyroid hormone levels, we have examined bone biopsies from 21 patients with end-stage renal disease and secondary hyperparathyroidism. Bone sections were stained with antibodies to human interleukin-1alpha (IL-1alpha), IL-6, IL-11, tumor necrosis factor-alpha (TNF-alpha) and transforming growth factor-β (TGF-β) using an undecalcified plastic embedding method. Intense staining for IL-1alpha, IL-6, TNF-alpha and TGF-β was evident within the fibrotic tissue of the bone marrow while minimal IL-11 was detected. The extent of cytokine deposition corresponded to the severity of fibrosis, suggesting their possible involvement in the local regulation of the fibrotic response. Because immunoreactive TGF-β and IL-6 were also detected in osteoblasts and osteocytes, we conclude that selective cytokine accumulation may have a role in modulating bone and marrow cell function in parathyroid-mediated uremic bone disease.
Abstract:
It is well known that the interaction of polyelectrolytes with oppositely charged surfactants leads to an associative phase separation; however, the phase behavior of DNA and oppositely charged surfactants is more strongly associative than observed in other systems. A precipitate is formed with very low amounts of surfactant and DNA. DNA compaction is a general phenomenon in the presence of multivalent ions and positively charged surfaces; because of the high charge density there are strong attractive ion correlation effects. Techniques like phase diagram determinations, fluorescence microscopy, and ellipsometry were used to study these systems. The interaction between DNA and catanionic mixtures (i.e., mixtures of cationic and anionic surfactants) was also investigated. We observed that DNA compacts and adsorbs onto the surface of positively charged vesicles, and that the addition of an anionic surfactant can release DNA back into solution from a compact globular complex between DNA and the cationic surfactant. Finally, DNA interactions with polycations, chitosans with different chain lengths, were studied by fluorescence microscopy, in vivo transfection assays and cryogenic transmission electron microscopy. The general conclusion is that a chitosan effective in promoting compaction is also efficient in transfection.
Abstract:
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will undergo a Long Shutdown sometime during 2017 or 2018. During this time, maintenance will be carried out and new detectors can be installed. After the shutdown, the LHC will operate at a higher luminosity. A promising new type of detector for this high-luminosity phase is the Triple-GEM detector. During the shutdown, these detectors will be installed in the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are currently being developed at CERN, along with a readout ASIC for the detector. In this thesis, a simulation model was developed for the ASIC's analog front end. The model helps to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was tested with simulations, which are also presented in the thesis.
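For context, the analog front end of a readout ASIC is often abstracted behaviorally as a charge-sensitive amplifier followed by a semi-Gaussian (CR-RC^n) pulse shaper. The sketch below shows only that generic abstraction; the shaping order, peaking time and gain are assumed values, and this is not the simulation model developed in the thesis.

```python
import numpy as np

def semi_gaussian_response(t, q_in, tau=50e-9, order=3, gain=10e12):
    """CR-RC^n (semi-Gaussian) shaper output for an impulse of charge
    q_in [C]; the pulse peaks at t = order * tau. gain is in V per coulomb."""
    x = np.clip(t, 0.0, None) / tau
    shape = (x ** order) * np.exp(-x)
    shape /= (order ** order) * np.exp(-order)   # normalize the peak to 1
    return gain * q_in * shape

t = np.linspace(0.0, 500e-9, 1000)               # 0 to 500 ns
v_out = semi_gaussian_response(t, q_in=3e-15)    # ~3 fC input charge
print(f"peak amplitude: {v_out.max()*1e3:.1f} mV "
      f"at t = {t[v_out.argmax()]*1e9:.0f} ns")
```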
Abstract:
The objective of the present study was to develop a simplified, low-cost method for the collection and fixation of pediatric autopsy cells and to determine the quantitative and qualitative adequacy of the extracted DNA. Touch and scrape preparations of pediatric liver cells were obtained from 15 cadavers at autopsy and fixed in 95% ethanol or 3:1 methanol:acetic acid. Material prepared by each fixation procedure was submitted to DNA extraction with the Wizard® genomic DNA purification kit for DNA quantification, and five of the preparations were amplified by multiplex PCR (azoospermia factor genes). The amount of DNA extracted varied from 20 to 8,640 µg, with significant differences between fixation methods. Scrape preparations fixed in 95% ethanol provided the largest amounts of extracted DNA. However, the mean for all groups was higher than the quantity needed for PCR (50 ng) or Southern blot (500 ng). There were no qualitative differences among the different materials and fixatives. The same results were also obtained for glass slides stored at room temperature for 6, 12, 18 and 24 months. We conclude that touch and scrape preparations fixed in 95% ethanol are a good source of DNA and present fewer limitations than cell culture, tissue paraffin embedding or freezing, which require sterile material, culture medium, laboratory equipment and trained technicians. In addition, they are more practical and less labor intensive and can be obtained and stored for a long time at low cost.
Abstract:
Feature extraction is the part of pattern recognition in which the sensor data are transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system and to preserve the information essential for discriminating the data into different classes. For instance, in the case of image analysis the actual image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination changes. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play the main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which the LBPs are seen as combinations of n-tuples is also presented.
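For readers unfamiliar with the descriptor, the basic 8-neighbour LBP operator thresholds each 3x3 neighbourhood at its centre value and encodes the result as an 8-bit label. The NumPy sketch below is only a minimal software reference for that operator; the thesis concerns hardware implementations (e.g. on the MIPA4k), and details such as neighbour ordering are conventions assumed here.

```python
import numpy as np

def lbp_3x3(image):
    """Basic 8-neighbour Local Binary Pattern: threshold each 3x3
    neighbourhood at the value of its centre pixel and encode the
    resulting bits as an 8-bit label (border pixels are skipped)."""
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    centre = img[1:-1, 1:-1]
    # neighbour offsets in clockwise order, starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.int32) << bit
    return codes.astype(np.uint8)

# Example: LBP labels of a small random 8-bit image
rng = np.random.default_rng(0)
print(lbp_3x3(rng.integers(0, 256, size=(5, 5))))
```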
Abstract:
The concept of sustainability-oriented innovation is recent and still under-researched. The aim of the Thesis is to contribute to the field and investigate how companies operating in Poland apply sustainability-oriented innovation (SOI) to their core business activities, what the differences are between various business forms of organization in terms of SOI, and what type of capabilities facilitate the implementation of SOI. Given the early stage of empirical research on sustainability-oriented innovation, an exploratory-descriptive case study research strategy was adopted, applying qualitative methods. Six interviews with managers and CEOs of four companies located in Warsaw were conducted. In addition, two academic expert panels with specialists from the University of Lodz and Lappeenranta University of Technology were carried out to support the findings. The study found that in the case of companies whose purpose is to create positive impact and develop sustainable products or services using innovative approaches, SOI activities are embedded in the organizational culture and processes, so that it is difficult to differentiate between the main business activities and SOI. In the other two cases, SOI practices were in line with core business activities, thus reflecting the main operations, and were identified as part of the CSR strategy. The activities are industry-specific and contingent upon the resources and capabilities possessed. Among the success factors, management support, the CEO's personal values, a dedicated and motivated team, investments in research and development, organizational culture, non-hierarchical communication channels, empowerment of employees, and provision of time and space for failures were identified as key organizational capabilities facilitating the integration of SOI practices. Market demand, pressure from NGOs, enforced regulations, access to external funding, networking and cooperation represent external or collaborative capabilities supporting the implementation of sustainability-oriented innovation in companies. SOI takes a systemic approach that drives the transformation toward a sustainable business, embedding and integrating social, environmental and economic value creation.
Abstract:
5-Bromo-2’-deoxyuridine (BrdUrd) has long been known to interfere with cell differentiation. We found that treatment of Bradysia hygida larvae with BrdUrd during DNA puff anlage formation in the polytene chromosomes of the salivary gland S1 region noticeably affects anlage morphology. However, it does not affect subsequent metamorphosis to the adult stage. The chromatin of the chromosomal sites that would normally form DNA puffs remains very compact, and DNA puff expansion does not occur with administration of 4 to 8 mM BrdUrd. Injection of BrdUrd at different ages provoked a gradient of compaction of the DNA puff chromatin, leading to the formation of very small to almost normal puffs. By immunodetection, we show that the analogue is preferentially incorporated into the DNA puff anlages. When BrdUrd is injected in a mixture with thymidine, it is not incorporated into the DNA, and normal DNA puffs form. Therefore, incorporation of this analogue into the amplified DNA seems to be the cause of this extreme compaction. Autoradiographic experiments and silver grain counting showed that this treatment decreases the efficiency of RNA synthesis at DNA puff anlages.
Abstract:
Ventilatory differences between rat strains and genders have been described, but the morphology of the phrenic nerve has not been investigated in spontaneously hypertensive (SHR) and normotensive Wistar-Kyoto (WKY) rats. A descriptive and morphometric study of the phrenic nerves of male (N = 8) and female (N = 9) SHR, and male (N = 5) and female (N = 6) WKY is presented. After arterial pressure and heart rate recordings, the phrenic nerves of 20-week-old animals were prepared for epoxy resin embedding and light microscopy. Morphometric analysis was performed with the aid of computer software, taking into consideration the fascicle area and diameter, as well as the myelinated fiber profile and the number of Schwann cell nuclei per area. Phrenic nerves were generally larger in males than in females in both strains, but larger in WKY than in SHR for both genders. Myelinated fiber numbers (male SHR = 228 ± 13; female SHR = 258 ± 4; male WKY = 382 ± 23; female WKY = 442 ± 11 for proximal right segments) and densities (N/mm²; male SHR = 7048 ± 537; female SHR = 10355 ± 359; male WKY = 9457 ± 1437; female WKY = 14351 ± 1448 for proximal right segments) were significantly larger in females of both groups and remarkably larger in WKY than in SHR for both genders. Strain and gender differences in phrenic nerve myelinated fiber number are described for the first time in this experimental model of hypertension, indicating the need for thorough functional studies of this nerve in male and female SHR.
Abstract:
Wind is one of the most compelling forms of indirect solar energy. Available now, the conversion of wind power into electricity is and will continue to be an important element of energy self-sufficiency planning. This paper is one in a series intended to report on the development of a new type of generator for wind energy: a compact, high-power, direct-drive permanent magnet synchronous generator (DD-PMSG) that uses direct liquid cooling (LC) of the stator windings to manage Joule heating losses. The main parameters of the subject LC DD-PMSG are 8 MW, 3.3 kV, and 11 Hz. The stator winding is cooled directly by deionized water, which flows through the continuous hollow conductor of each stator tooth-coil winding. The design of the machine is to a large degree subordinate to the use of these solid-copper tooth-coils. Both steady-state and time-dependent temperature distributions for the LC DD-PMSG were examined with calculations based on a lumped-parameter thermal model, which makes it possible to account for uneven heat loss distribution in the stator conductors and the conductor cooling system. Transient calculations reveal the copper winding temperature distribution for an example duty cycle during variable-speed wind turbine operation. The cooling performance of the liquid-cooled tooth-coil design was predicted via finite element analysis. An instrumented cooling loop featuring a pair of LC tooth-coils embedded in a lamination stack was built and laboratory tested to verify the analytical model. Predicted and measured results were in agreement, confirming satisfactory operation of the LC DD-PMSG cooling technology approach as a whole.
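To illustrate the idea of a lumped-parameter thermal model with transient (duty-cycle) calculations, the sketch below integrates a hypothetical two-node network: a copper winding node and a lamination-stack node, both coupled to the coolant. All capacitances, conductances and loss figures are invented for illustration and do not correspond to the LC DD-PMSG model described above.

```python
import numpy as np

C      = np.array([800.0, 6000.0])    # thermal capacitances [J/K] (assumed)
G_01   = 15.0                         # coil <-> stack conductance [W/K]
G_cool = np.array([45.0, 5.0])        # node <-> coolant conductances [W/K]
T_cool = 40.0                         # deionized-water coolant temperature [degC]

def simulate(p_loss, t_end=3600.0, dt=1.0):
    """Explicit-Euler integration of C_i dT_i/dt = P_i + sum_j G_ij (T_j - T_i)."""
    T = np.array([T_cool, T_cool])               # start at coolant temperature
    history = []
    for k in range(int(t_end / dt)):
        q01 = G_01 * (T[1] - T[0])               # heat flow stack -> coil
        q_c = G_cool * (T_cool - T)              # heat flow coolant -> nodes
        dT = (p_loss(k * dt) + np.array([q01, -q01]) + q_c) / C
        T = T + dT * dt
        history.append(T.copy())
    return np.array(history)

# Example duty cycle: full copper loss for 30 min, then reduced load
p_loss = lambda t: np.array([3e3, 500.0]) if t < 1800 else np.array([1e3, 200.0])
temps = simulate(p_loss)
print("coil temperature after 30 min: %.1f C" % temps[1799, 0])
print("coil temperature after 60 min: %.1f C" % temps[-1, 0])
```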
Abstract:
By increasing the efficiency of a wind turbine gearbox, more power can be transferred from the rotor blades to the generator and less power is lost to wear and heating in the gearbox. By using a simulation model, the behavior of the gearbox can be studied before expensive prototypes are built. The objective of the thesis is to model a wind turbine gearbox and its lubrication system in order to study power losses and heat transfer inside the gearbox, and to study the simulation methods of the software used. The software used to create the simulation model is Siemens LMS Imagine.Lab AMESim, which can build one-dimensional mechatronic system simulation models spanning different fields of engineering. By combining components from different libraries, it is possible to create a simulation model that includes mechanical, thermal and hydraulic models of the gearbox. Results for the mechanical, thermal, and hydraulic simulations are presented in the thesis. Due to the large scale of the wind turbine gearbox and the amount of power transmitted, power loss calculations from the AMESim software are inaccurate, and power losses are instead modeled as a constant efficiency for each gear mesh. Starting values for the thermal and hydraulic simulations were chosen from test measurements and from empirical studies, as the compact and complex design of the gearbox prevents accurate test measurements. To increase the accuracy of the simulation model in further studies, the components used for power loss calculations need to be modified and the values of unknown variables need to be determined through accurate test measurements.
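The constant-efficiency-per-mesh loss bookkeeping mentioned above can be sketched as follows; the number of stages and the efficiency and power values are assumptions for illustration, not data from the thesis or from AMESim.

```python
def gearbox_power_flow(p_in_kw, mesh_efficiencies):
    """Propagate input power through successive gear meshes, each modeled
    with a constant efficiency; return output power and per-mesh losses."""
    p = p_in_kw
    losses = []
    for eta in mesh_efficiencies:
        losses.append(p * (1.0 - eta))   # heat generated in this mesh [kW]
        p *= eta                         # power passed on to the next stage
    return p, losses

# Example: one planetary stage and two helical stages (illustrative values)
p_out, losses = gearbox_power_flow(3000.0, [0.986, 0.982, 0.985])
print(f"output power: {p_out:.1f} kW, losses per mesh: "
      + ", ".join(f"{q:.1f} kW" for q in losses))
```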
Abstract:
In recent years, technological advancements in microelectronics and sensor technologies have revolutionized the field of electrical engineering. New manufacturing techniques have enabled a higher level of integration that combines sensors and electronics into compact and inexpensive systems. Previously, the challenge in measurements was to understand the operation of the electronics and sensors, but this has now changed. Nowadays, the challenge in measurement instrumentation lies in mastering the whole system, not just the electronics. To address this issue, this doctoral dissertation studies whether it is beneficial to consider a measurement system as a whole, from the physical phenomenon to the digital recording device, where each part of the measurement system affects the system performance, rather than as a system consisting of small independent parts, such as a sensor or an amplifier, that could be designed separately. The objective of this doctoral dissertation is to describe in depth the development of the measurement system, taking into account the challenges caused by the electrical and mechanical requirements and the measurement environment. The work was done as an empirical case study of two example applications, both intended for scientific studies. The cases are a light-sensitive biological sensor used in imaging and a gas electron multiplier detector for particle physics. The study showed that in these two cases a number of different parts of the measurement system interacted with each other. Without considering these interactions, the reliability of the measurement may be compromised, which may lead to wrong conclusions being drawn from the measurement. For this reason it is beneficial to conceptualize the measurement system as a whole, from the physical phenomenon to the digital recording device, where each part of the measurement system affects the system performance. The results serve as examples of how a measurement system can be successfully constructed to support a study of sensors and electronics.