952 results for Timber chip


Relevance: 10.00%

Abstract:

The objective of this work was to study the effects of partial removal of wood hemicelluloses on the properties of kraft pulp. The work was conducted by extracting hemicelluloses (1) by a softwood chip pretreatment process prior to kraft pulping, (2) by alkaline extraction from bleached birch kraft pulp, and (3) by enzymatic treatment, xylanase treatment in particular, of bleached birch kraft pulp. The qualitative and quantitative changes in fiber and paper properties were evaluated. In addition, the applicability of the extraction concepts and of hemicellulose-extracted birch kraft pulp as a raw material in papermaking was evaluated in a pilot-scale papermaking environment. The results showed that each examined hemicellulose extraction method has its characteristic effects on fiber properties, seen as differences in both the physical and chemical nature of the fibers. A prehydrolysis process prior to kraft pulping reduced cooking time and bleaching chemical consumption, and produced fibers with low hemicellulose content that are more susceptible to mechanically induced damage and dislocations. Softwood chip pretreatment for hemicellulose recovery prior to cooking, whether acidic or alkaline, had an impact on the physical properties of the non-refined and refined pulp. In addition, all the pretreated pulps exhibited a slower beating response than the unhydrolyzed reference pulp. Both alkaline extraction and enzymatic (xylanase) treatment of bleached birch kraft pulp fibers showed very selective hemicellulose removal, particularly xylan removal. Furthermore, these two hemicellulose-extracted birch kraft pulps were utilized in a pilot-scale papermaking environment in order to evaluate the upscalability of the extraction concepts. Pilot paper machine trials revealed that some amount of alkaline-extracted birch kraft pulp, with a 24.9% reduction in the total amount of xylan, could be used in the papermaking stock as a mixture with non-extracted pulp when producing 75 g/m2 paper. For xylanase-treated fibers there were no reductions in the mechanical properties of the 180 g/m2 paper produced compared to paper made from the control pulp, although there was a 14.2% reduction in the total amount of xylan in the xylanase-treated pulp compared to the control birch kraft pulp. This work emphasized the importance of the hemicellulose extraction method in providing new solutions for creating functional fibers and a valuable hemicellulose co-product stream. The hemicellulose removal concept therefore plays an important role in the integrated forest biorefinery scenario, where the target is the co-production of hemicellulose-extracted pulp and hemicellulose-based chemicals or fuels.

Relevance: 10.00%

Abstract:

In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to degrade because the process parameters are optimized for digital transistors and the scaling involves a reduction of supply voltages. In general, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, of IP reuse, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility and reconfigurability cannot be easily achieved. Here, it is discussed whether these obstacles can be worked around by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for devices that operate with higher supply voltages than general-purpose devices. In practice, however, technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN, in which the adaptive properties of the FGT are exploited. A compact realization of spike-timing-dependent plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
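As an illustration of the pair-based STDP rule referred to above, here is a minimal sketch in Python of the commonly used exponential weight update. The time constants, learning rates and weight bounds are illustrative placeholders, not the characteristics of the FGT synapses described in the thesis.

    # Pair-based exponential STDP: potentiate when the presynaptic spike
    # precedes the postsynaptic spike, depress otherwise.
    import math

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Weight change for a pre/post spike pair; dt = t_post - t_pre (ms)."""
        if dt >= 0:                                       # pre before post -> potentiation
            return a_plus * math.exp(-dt / tau_plus)
        return -a_minus * math.exp(dt / tau_minus)        # post before pre -> depression

    # Example: update one synaptic weight, clipped to [0, 1].
    w = 0.5
    w = min(1.0, max(0.0, w + stdp_dw(dt=5.0)))
    print(w)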

Relevance: 10.00%

Abstract:

This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, in which power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture) in addition to the physical resources. The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.

Relevance: 10.00%

Abstract:

The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will have a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance and an opportunity to install new detectors. After the shutdown the LHC will have a higher luminosity. A promising new type of detector for this high-luminosity phase is the Triple-GEM detector. During the shutdown these detectors will be installed at the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are currently being developed at CERN, along with a readout ASIC for the detector. In this thesis a simulation model was developed for the ASIC's analog front end. The model will help to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was tested with simulations, which are also presented in the thesis.

Relevance: 10.00%

Abstract:

The interpretation of oligonucleotide array experiments depends on the quality of the target cRNA used. cRNA target quality is assessed by quantitative analysis of the representation of 5' and 3' sequences of control genes using commercially available Test arrays. The Test array provides an economically priced means of determining the quality of the labeled target prior to analysis on whole-genome expression arrays. This manuscript validates the use of a duplex RT-PCR assay as a faster (6 h) and less expensive alternative for assessing cRNA quality. Samples with a β-actin 3'/5' array ratio above 6 were classified as degraded cRNAs, and 31 samples with a β-actin 3'/5' ratio below 6 were selected as good-quality cRNAs. Blinded samples were then used for the RT-PCR assay. After gel electrophoresis, the optical densities of the amplified 3' and 5' fragments of β-actin were measured and the 3'/5' ratio was calculated. There was a strong correlation (r² = 0.6802) between the array and the RT-PCR β-actin 3'/5' ratios. Moreover, the RT-PCR 3'/5' ratio was significantly different (P < 0.0001) between undegraded (mean ± SD, 0.34 ± 0.09) and degraded (1.71 ± 0.83) samples. None of the other parameters analyzed, such as i) the starting amount of RNA, ii) RNA quality assessed using Bioanalyzer chip technology, or iii) the concentration and OD260/OD280 ratio of the purified biotinylated cRNA, correlated with cRNA quality.

Relevance: 10.00%

Abstract:

Feature extraction is the part of pattern recognition in which the sensor data are transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in the case of image analysis, the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a central role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework, by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which the LBPs are seen as combinations of n-tuples is also presented.
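As a brief illustration of the basic descriptor underlying the abstract above, here is a minimal sketch of the standard 8-neighbour, radius-1 LBP operator in Python. It is a plain reference formulation, not the embedded or focal-plane implementation developed in the thesis.

    import numpy as np

    def lbp8(img):
        """Basic 3x3 Local Binary Pattern for a 2D grayscale array."""
        img = np.asarray(img, dtype=np.int32)
        # Offsets of the 8 neighbours, enumerated clockwise from the top-left.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros(img.shape, dtype=np.int32)
        for bit, (dy, dx) in enumerate(offsets):
            # Align the neighbour at (y+dy, x+dx) with the centre pixel at (y, x).
            neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            codes += (neighbour >= img) * (1 << bit)
        return codes.astype(np.uint8)   # border pixels wrap around; mask them in practice

    # Example: 256-bin histogram of LBP codes as a texture feature vector.
    image = np.random.randint(0, 255, (64, 64))
    hist, _ = np.histogram(lbp8(image), bins=256, range=(0, 256))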

Relevance: 10.00%

Abstract:

This work examined quality changes in wood fuel stored at a fuel terminal. The study looked at changes in the moisture content and heating value of unchipped stemwood and stemwood chips. Dry-matter losses were also examined on the basis of earlier studies. The research material was collected from the fuel terminals of Etelä-Savon Energia. Moisture contents were measured with a Hydromette M2050 rapid moisture meter and by the oven-drying method according to standard SFS-EN 14774. The study found that the rapid moisture meter performed at a sufficiently reliable level for stemwood measurements, but for chips it proved unusable. Storage time did not affect the heating values of the fuels, but the moisture content varied greatly. From the results it was concluded that stemwood dries in terminal storage while the moisture content of chips remains constant. In terms of energy content, stemwood can be stored for over two years, but the storage time of chips should be kept as short as possible.
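For reference, the oven-drying method referred to above determines the moisture content on a wet (as-received) basis; in its simplest form, ignoring the container corrections specified in SFS-EN 14774, it is

\[ M_{ar} = \frac{m_{wet} - m_{dry}}{m_{wet}} \times 100\,\% \]

where m_wet is the mass of the sample as received and m_dry its mass after oven drying to constant mass.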

Relevance: 10.00%

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but is becoming an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach for designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.

Relevance: 10.00%

Abstract:

The original contribution of this thesis to knowledge is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transport. It is shown that the data-node architecture achieves a readout efficiency of 99% at half the output rate of a bus-based system. The network-based solution avoids “broken” columns caused by manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures do. An improvement of more than 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. Architectural design has been done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time. These high-level techniques have made it possible to simulate tens of column and full-chip architectures, with a more than tenfold decrease in run-time compared to the register transfer level (RTL) design technique and a 50% reduction in lines of code (LoC) for the high-level models compared to the RTL description. Two of the architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, has been designed for the Medipix3 collaboration. According to the measurements, it consumes less than 1 W/cm^2 and delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) information at 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase handshake column bus for internal data transfer. It has also been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on simulations, it consumes less than 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from a column to the end of column (EoC). By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).

Relevance: 10.00%

Abstract:

Rough turning is an important way of manufacturing cylindrically symmetric parts. Thus far, increasing the level of automation in rough turning has meant process monitoring methods or adaptive turning control methods that aim to keep the process conditions constant. However, in order to improve process safety, quality and efficiency, adaptive turning control should be transformed into an intelligent machining system that optimizes the cutting values to match the process conditions or actively seeks to improve them. In this study, primary and secondary chatter and chip formation are studied in order to understand how the effect of these phenomena on the process conditions can be measured and how undesired cutting conditions can be avoided. The concept of cutting state is used to address the combination of these phenomena and the current use of the power capacity of the lathe. The measures of these phenomena are not developed from physical quantities; instead, their severity is modelled against expert opinion. Based on the concept of cutting state, an expert-system-style fuzzy control system capable of optimizing the cutting process was created. Important aspects of the system include the capability to adapt to several cutting phenomena appearing at once, even when those phenomena would potentially require conflicting control actions.
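To give a flavour of the expert-system-style fuzzy control described above, the following sketch combines two hypothetical cutting-state measures (chatter severity and spindle power utilization) into a single feed-rate adjustment. The membership functions, rules and variable names are invented for illustration and do not come from the thesis.

    def tri(x, a, b, c):
        """Triangular membership function on [a, c] with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def feed_adjustment(chatter, power_util):
        """Fuzzy rules mapping cutting-state measures (0..1) to a feed-rate factor."""
        rules = [
            # (rule firing strength, recommended feed multiplier)
            (tri(chatter, 0.4, 0.8, 1.2), 0.8),              # severe chatter -> slow down
            (min(tri(chatter, -0.1, 0.0, 0.5),
                 tri(power_util, 0.0, 0.3, 0.7)), 1.1),      # calm cut, spare power -> speed up
            (tri(power_util, 0.7, 1.0, 1.3), 0.9),           # near power limit -> back off
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules)
        return num / den if den > 0 else 1.0                 # weighted-average defuzzification

    print(feed_adjustment(chatter=0.2, power_util=0.5))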

Relevance: 10.00%

Abstract:

Microscopic visualization, especially in transparent micromodels, can provide valuable information for understanding pore-scale transport phenomena in different processes occurring in porous materials (food, timber, soils, etc.). Micromodel studies focus mainly on the observation of multiphase flow, which is closer to reality. The aim of this study was to investigate the flexography process and its application to the manufacture of transparent polyester resin micromodels, and to apply these micromodels to carrots. The materials used to implement a flexography station for micromodel construction were a thermoregulated water bath, a UV-light exposure chamber, a photosensitive substance (photopolymer), RTV silicone, polyester resin, and glass plates. In this paper, data on the pore size distribution of the particular kind of carrot used were taken as the basis, and a transparent micromodel with a square cross-section and a log-normal pore size distribution, with pore radii ranging from 10 to 110 µm (average 22 µm) and a micromodel size of 10 × 10 cm, was built. Finally, it is stressed that the protocol for processing 2D transparent polyester resin micromodels was successfully implemented.
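As a small illustration of how a pore network with the quoted log-normal statistics could be parameterized, the sketch below draws pore radii from a log-normal distribution with a mean of 22 µm and clips them to the reported 10 to 110 µm range. The geometric standard deviation is an assumed placeholder, since the abstract does not report it.

    import numpy as np

    rng = np.random.default_rng(0)
    mean_um, sigma_g = 22.0, 1.6            # target mean; geometric SD is assumed
    sigma = np.log(sigma_g)
    mu = np.log(mean_um) - 0.5 * sigma**2   # so that the distribution mean is ~22 um

    radii = rng.lognormal(mean=mu, sigma=sigma, size=10_000)
    radii = np.clip(radii, 10.0, 110.0)     # keep radii inside the reported range

    print(radii.mean(), np.percentile(radii, [5, 50, 95]))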

Relevance: 10.00%

Abstract:

Potato pulp waste (PPW) drying was investigated under different experimental conditions (temperatures from 50 to 70 °C and air flow from 0.06 to 0.092 m³ m⁻² s⁻¹) as a possible way to recover the waste generated by potato chip industries and to select the model that best fits the experimental results of PPW drying. As a criterion to evaluate the fit of the mathematical models, a method based on the sum of the scores assigned to four statistical parameters was used: the regression coefficient (R²), the relative mean error P (%), the root mean square error (RMSE), and the reduced chi-square (χ²). The results revealed that temperature and air velocity are important parameters for reducing PPW drying time. The Midilli and Diffusion models had the lowest sum values, i.e., the best fit to the drying data, satisfactorily representing the drying kinetics of PPW.
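To make the model-fitting step concrete, the sketch below fits the commonly used Midilli equation, MR = a·exp(−k·tⁿ) + b·t, to a drying curve with SciPy and reports RMSE and R². The moisture-ratio data points are placeholders, not the measurements of the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def midilli(t, a, k, n, b):
        """Midilli thin-layer drying model: moisture ratio as a function of time."""
        return a * np.exp(-k * t**n) + b * t

    # Placeholder drying curve: time in minutes, dimensionless moisture ratio.
    t = np.array([0, 30, 60, 90, 120, 180, 240, 300], dtype=float)
    mr = np.array([1.00, 0.78, 0.60, 0.47, 0.36, 0.22, 0.13, 0.08])

    popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.01, 1.0, 0.0],
                        bounds=([0.0, 0.0, 0.1, -1.0], [2.0, 1.0, 3.0, 1.0]))
    pred = midilli(t, *popt)

    rmse = np.sqrt(np.mean((mr - pred) ** 2))
    r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
    print(popt, rmse, r2)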

Relevance: 10.00%

Abstract:

LiDAR is an advanced remote sensing technology with many applications, including forest inventory. The most common type is ALS (airborne laser scanning). The method is successfully utilized in many developed markets, where it is replacing traditional forest inventory methods. However, it is still a novelty for the Russian market, where traditional field inventory dominates. ArboLiDAR, developed by Arbonaut Ltd., is a forest inventory solution that combines LiDAR, colour-infrared imagery, GPS ground control plots and field sample plots. This study is industrial market research on LiDAR technology in Russia focused on customer needs. The Russian forestry market is very attractive because of its large growing stock volumes. It underwent drastic changes in 2006, but it is still in a transitional stage. There are several types of forest inventory, with both public and private funding. Private forestry enterprises basically need forest inventory in two cases: when demarcating coupes before timber harvesting, and as part of forest management planning, which is supposed to be done every ten years over the whole leased territory. The study covered 14 companies in total, including private forestry companies with timber harvesting activities, private forest inventory providers, state-subordinate companies and a forestry software developer. The research strategy is a multiple case study with semi-structured interviews as the main data collection technique. The study focuses on North-West Russia, as it is the most developed Russian region in forestry terms. The research applies the Voice of the Customer (VOC) concept to elicit the customer needs of Russian forestry actors and discovers how these needs are currently met. It studies the forest inventory methods currently applied in Russia and proposes a model for method comparison based on the multi-criteria decision making (MCDM) approach, mainly the Analytic Hierarchy Process (AHP). Required product attributes are classified in accordance with the Kano model. The answer regarding the suitability of LiDAR technology remains ambiguous, since many details have to be taken into account.
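As a pointer to how the AHP part of such a comparison works, the sketch below derives priority weights for three hypothetical inventory methods from a pairwise comparison matrix and checks its consistency ratio. The matrix entries and the method labels are invented for illustration, not taken from the study.

    import numpy as np

    # Pairwise comparisons (Saaty 1-9 scale) of three hypothetical methods:
    # ALS-based, field inventory, satellite-based (values are illustrative).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                # priority vector

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
    cr = ci / 0.58                          # random index for n = 3 is 0.58
    print(weights, cr)                      # CR < 0.1 indicates acceptable consistency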

Relevance: 10.00%

Abstract:

X-ray computed tomography of logs has so far been applied for qualitative reconstructions. In most cases, a series of consecutive slices of the timber is scanned to estimate a 3D reconstruction of the entire log. However, unexpected movement of the timber under study degrades the quality of the image reconstruction, since the position and orientation of some scanned slices can be incorrectly estimated. In addition, the reconstruction time remains a significant challenge for practical applications. The present study investigates the possibility of employing modern physics engines for the problem of estimating the position of a moving rigid body and its scanned slices during X-ray computed tomography. The work includes implementations of the extended Kalman filter and an algebraic reconstruction method for fan-beam computed tomography. In addition, modern techniques such as NVidia PhysX and CUDA are used in the current study. As a result, it is shown numerically that the extended Kalman filter can be applied together with a real-time physics engine, PhysX, in order to determine the position of a moving object, and that the position of the rigid body can be determined based only on reconstructions of its slices. However, the simulation of the body movement is sometimes subject to error when the Kalman filter is employed, as PhysX is not always able to continue simulating the movement properly because of incorrect state estimation.
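For orientation, here is a minimal sketch of the kind of algebraic reconstruction (Kaczmarz/ART) update mentioned above, written for a generic linear projection model. The system matrix, measurements and relaxation factor are placeholders; this is not the fan-beam implementation or the PhysX/CUDA code of the study.

    import numpy as np

    def art(A, b, n_iter=20, lam=0.5):
        """Kaczmarz-style ART: solve A @ x ~= b by sweeping over projection rays."""
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                # Project the current estimate onto the hyperplane of ray i.
                residual = b[i] - A[i] @ x
                x += lam * residual / row_norms[i] * A[i]
        return x

    # Toy example: 3 "rays" through a 4-pixel image (placeholder geometry).
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])
    x_true = np.array([1.0, 2.0, 0.5, 1.5])
    print(art(A, A @ x_true))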

Relevance: 10.00%

Abstract:

The aerodynamic development of heavy-duty vehicles has been going in the wrong direction for about a century, and their shape has been dictated by the maximization of cargo space and by functionality. A new EU directive is about to come into force that allows additional mass to be used for aerodynamic add-on devices. The purpose of this work is to examine possibilities for improving the aerodynamics of a wood-chip trailer with simple add-on devices. The work is limited to the front, the sides and the underbody of the trailer; further constraints are set by Finnish legislation and EU directives. The work looks into the mechanisms by which aerodynamic drag is generated in heavy-duty vehicles, reviews the most significant factors affecting the drag forces, and considers the effect of the different shapes and parts of a truck trailer. The effects and operation of air deflectors fitted to the trailer are also examined. The key to improving the aerodynamics of a chip trailer is to prevent airflow under the trailer and onto its front; modifying the flow behind the trailer is also beneficial. The most significant benefits come from solutions that prevent air from flowing under the trailer and onto the wheels, namely side skirts and the enclosure of the underbody and the wheels. Preventing airflow onto the front of the trailer also produced a significant benefit. A plate in the gap between the truck and the trailer, or covering the whole gap, yields a considerable reduction in aerodynamic drag.
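For reference, the drag force whose generation mechanisms the work discusses is conventionally written with the standard drag equation; add-on devices such as side skirts and gap fairings act mainly by lowering the drag coefficient:

\[ F_d = \tfrac{1}{2}\,\rho\, v^{2}\, C_d\, A \]

where ρ is the air density, v the speed of the vehicle relative to the air, C_d the drag coefficient and A the frontal area of the vehicle.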