442 results for storing
Abstract:
As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, but also at some online services which he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, which suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
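The cryptographically verifiable references described above can be illustrated with a minimal content-addressing sketch. This is not Peerscape's actual scheme (which also signs items and exposes them over a REST-like HTTP API); the hash-only reference below is an illustrative assumption.

```python
import hashlib
import json

def make_reference(item: bytes) -> str:
    """Content-addressed reference: the item's identity is its SHA-256 hash,
    so any replica can verify it has not been forged or altered in transit."""
    return hashlib.sha256(item).hexdigest()

def verify(item: bytes, reference: str) -> bool:
    """A receiving peer recomputes the hash and compares it to the reference."""
    return hashlib.sha256(item).hexdigest() == reference

# Example: a photo-album entry replicated among family members' machines.
entry = json.dumps({"album": "summer-2009", "photo": "beach.jpg"}).encode()
ref = make_reference(entry)
assert verify(entry, ref)             # an authentic copy passes verification
assert not verify(entry + b"x", ref)  # a tampered copy fails verification
```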
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first one is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second one is orthogonality - the fact that many ways of classifying information are independent of each other and can be applied in combinations. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing, and so on. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article two implementations, which have been carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
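As an illustration of how a framework standard of this kind can be applied, the sketch below composes layer names from a few of the information categories mentioned above. The field set and field widths are assumptions for illustration only and are not taken from the ISO 13567 text.

```python
from dataclasses import dataclass

# Illustrative field list and widths only -- the actual default syntax
# should be taken from the ISO 13567 standard itself.
FIELDS = [("agent", 2), ("element", 6), ("presentation", 2), ("status", 1)]

@dataclass
class Layer:
    agent: str          # party responsible for the information
    element: str        # type of building element shown
    presentation: str   # drawing graphics vs. additional output information
    status: str = "-"   # optional category, e.g. new/existing in rebuilding projects

    def name(self) -> str:
        """Pad each field to its fixed width and concatenate into a layer name."""
        parts = []
        for field, width in FIELDS:
            value = getattr(self, field)
            parts.append(value.ljust(width, "-")[:width].upper())
        return "".join(parts)

print(Layer(agent="A", element="22", presentation="E").name())  # e.g. 'A-22----E--'
```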
Abstract:
The amount and spatial distribution of dead wood are of interest not only for habitat biodiversity but also for the storage of atmospheric carbon. The aim of this study was to develop an area-based model that uses airborne laser scanning data to locate dead-wood sites and to estimate the amount of dead wood. At the same time, the change in the model's explanatory power was examined as the size of the modelled grid cell was increased. The study area was located in Sonkajärvi, eastern Finland, and consisted mainly of young managed commercial forests. The study used low-pulse-density laser scanning data and field data on dead wood measured in strips. The data were divided so that one quarter was used for modelling and the rest was reserved for testing the finished models. Both parametric and non-parametric modelling methods were used for modelling dead wood. Logistic regression was used to predict the probability of dead-wood occurrence for grid cells of different sizes (0.04, 0.20, 0.32, 0.52 and 1.00 ha). The explanatory variables of the models were selected from among 80 laser features and their transformations. The explanatory variables were selected in three stages. First, the variables were examined visually by plotting them against the amount of dead wood. The explanatory power of the variables judged most suitable in the first stage was then tested in the second stage using single-variable models. In the final multi-variable model, the criterion for the explanatory variables was statistical significance at the 5 % risk level. The model created for the 0.20-hectare cell size was re-parameterized for the other cell sizes. In addition to the parametric modelling carried out with logistic regression, the data for the 0.04 and 1.0 hectare cell sizes were classified using non-parametric CART modelling (Classification and Regression Trees). The CART method was used to search the data for hard-to-detect non-linear dependencies between the laser features and the amount of dead wood. The CART classification was performed with respect to both dead-wood presence and dead-wood volume. The CART classification achieved better results than logistic regression in classifying cells by dead-wood presence. The classification made with the logistic model improved as the cell size increased from 0.04 ha (kappa 0.19) up to 0.32 ha (kappa 0.38). At the 0.52 ha cell size the kappa value of the classification turned downward (kappa 0.32) and continued to decrease up to the one-hectare cell size (kappa 0.26). The CART classification improved as the cell size increased, and its results were better than those of the logistic modelling at both the 0.04 ha (kappa 0.24) and 1.0 ha (kappa 0.52) cell sizes. The relative RMSE of the cell-level dead-wood volumes estimated with the CART models decreased as the cell size increased. At the 0.04 ha cell size the relative RMSE of the dead-wood amount for the whole data set was 197.1 %, while at the one-hectare cell size the corresponding figure was 120.3 %. Based on the results of this study, it can be stated that the relationship between the field-measured amount of dead wood and the laser features used here is very weak at small cell sizes but strengthens somewhat as the cell size increases. However, as the cell size used in the modelling grows, detecting small dead-wood concentrations becomes more difficult. In this study, the dead-wood status of a site could be mapped reasonably well at a large cell size, but mapping small sites was not successful with the methods used. Locating small sites with laser scanning requires further research, particularly on the use of high-pulse-density laser data in dead-wood inventories.
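The modelling workflow described above (logistic regression versus CART, evaluated with the kappa statistic on held-out cells) can be sketched as follows. The features are synthetic stand-ins for the laser metrics; only the comparison procedure is illustrated, not the study's data or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                     # pretend laser-derived metrics per cell
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=400) > 1).astype(int)

# One quarter of the cells for model fitting, the rest held out for testing,
# mirroring the 1/4 vs 3/4 split described in the abstract.
X_fit, X_test, y_fit, y_test = train_test_split(X, y, train_size=0.25, random_state=0)

logit = LogisticRegression().fit(X_fit, y_fit)                      # parametric model
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_fit, y_fit)  # CART

print("logistic kappa:", cohen_kappa_score(y_test, logit.predict(X_test)))
print("CART kappa:    ", cohen_kappa_score(y_test, cart.predict(X_test)))
```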
Abstract:
The main objectives in this thesis were to isolate and identify the phenolic compounds in wild (Sorbus aucuparia) and cultivated rowanberries, European cranberries (Vaccinium microcarpon), lingonberries (Vaccinium vitis-idaea), and cloudberries (Rubus chamaemorus), as well as to investigate the antioxidant activity of phenolics occurring in berries in food oxidation models. In addition, the storage stability of cloudberry ellagitannin isolate was studied. In wild and cultivated rowanberries, the main phenolic compounds were chlorogenic acids and neochlorogenic acids with increasing anthocyanin content depending on the crossing partners. The proanthocyanidin contents of cranberries and lingonberries were investigated, revealing that the lingonberry contained more rare A-type dimers than the European cranberry. The liquid chromatography mass spectrometry (LC-MS) analysis of cloudberry ellagitannins showed that trimeric lambertianin C and sanguiin H-10 were the main ellagitannins. The berries, rich in different types of phenolic compounds including hydroxycinnamic acids, proanthocyanidins, and ellagitannins, showed antioxidant activity toward lipid oxidation in liposome and emulsion oxidation models. All the different rowanberry cultivars prevented lipid oxidation in the same way, in spite of the differences in their phenolic composition. In terms of liposomes, rowanberries were slightly more effective antioxidants than cranberry and lingonberry phenolics. Greater differences were found when comparing proanthocyanidin fractions. Proanthocyanidin dimers and trimers of both cranberries and lingonberries were most potent in inhibiting lipid oxidation. Antioxidant activities and antiradical capacities were also studied with hydroxycinnamic acid glycosides. The sinapic acid derivatives of the hydroxycinnamic acid glycosides were the most effective at preventing lipid oxidation in emulsions and liposomes and scavenging radicals in DPPH assay. In liposomes and emulsions, the formation of the secondary oxidation product, hexanal, was inhibited more than that of the primary oxidation product, conjugated diene hydroperoxides, by hydroxycinnamic acid derivatives. This indicates that they are principally chain-breaking antioxidants rather than metal chelators, although they possess chelating activity as well. The storage stability test of cloudberry ellagitannins was performed by storing ellagitannin isolate and ellagitannins encapsulated with maltodextrin at different relative vapor pressures. The storage stability was enhanced by the encapsulation when higher molecular weight maltodextrin was used. The best preservation was achieved when the capsules were stored at 0 or 33% relative vapor pressures. In addition, the antioxidant activities of encapsulated cloudberry extracts were followed during the storage period. Different storage conditions did not alter the antioxidant activity, even though changes in the ellagitannin contents were seen. The current results may be of use in improving the oxidative stability of food products by using berries as natural antioxidants.
Abstract:
Homomorphic analysis and pole-zero modeling of electrocardiogram (ECG) signals are presented in this paper. Four typical ECG signals are considered and deconvolved into their minimum and maximum phase components through cepstral filtering, with a view to studying the possibility of more efficient feature selection from the component signals for diagnostic purposes. The complex cepstra of the signals are linearly filtered to extract the basic wavelet and the excitation function. The ECG signals are, in general, mixed phase and hence exponential weighting is applied to aid deconvolution of the signals. The basic wavelet for a normal ECG approximates the action potential of the muscle fiber of the heart, and the excitation function corresponds to the excitation pattern of the heart muscles during a cardiac cycle. The ECG signals and their components are pole-zero modeled, and the pole-zero pattern of the models can give a clue to classifying the normal and abnormal signals. Besides, storing only the parameters of the model can result in a data reduction of more than 3:1 for normal signals sampled at a moderate 128 samples/s.
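A hedged sketch of the homomorphic processing chain described above: exponential weighting, complex cepstrum via the unwrapped-phase log spectrum, and low-quefrency liftering to separate a smooth "basic wavelet" component from the excitation. The weighting factor, lifter cutoff and test signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def complex_cepstrum(x):
    """Complex cepstrum: inverse FFT of the log magnitude plus unwrapped phase."""
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

def homomorphic_split(x, a=0.98, cutoff=20):
    """Split the cepstrum into low-quefrency (smooth wavelet) and high-quefrency
    (excitation) parts after exponential weighting of the mixed-phase signal."""
    n = np.arange(len(x))
    xw = x * a**n                          # exponential weighting
    c = complex_cepstrum(xw)
    lifter = np.zeros(len(x))
    lifter[:cutoff] = 1.0                  # keep low quefrencies (both ends,
    lifter[-cutoff + 1:] = 1.0             # since the cepstrum is two-sided)
    return c * lifter, c * (1.0 - lifter)  # (basic-wavelet part, excitation part)

fs = 128                                   # the abstract's sampling rate, samples/s
t = np.arange(2 * fs) / fs
ecg_like = np.exp(-((t - 0.5) ** 2) / 0.001) + 0.3 * np.exp(-((t - 1.3) ** 2) / 0.001)
c_wavelet, c_excitation = homomorphic_split(ecg_like)
print(c_wavelet[:5], c_excitation[:5])
```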
Abstract:
The purpose of this study was to examine the integrated climatic impacts of forestry and the use of fibre-based packaging materials. The responsible use of forest resources plays an integral role in mitigating climate change. Forests offer three generic mitigation strategies: conservation, sequestration and substitution. By conserving carbon reservoirs, increasing the carbon sequestration in the forest or substituting fossil-fuel-intensive materials and energy, it is possible to lower the amount of carbon in the atmosphere through the use of forest resources. The Finnish forest industry consumed some 78 million m3 of wood in 2009, while a total of 2.4 million tons of different packaging materials were consumed that same year in Finland. Nearly half of the domestically consumed packaging materials were wood-based. Globally, the packaging material market is valued at some €400 billion annually, of which fibre-based packaging materials account for 40%. The methodology and the theoretical framework of this study are based on a stand-level, steady-state analysis of forestry and wood yields. The forest stand data used for this study were obtained from Metla and consisted of 14 forest stands located in Southern and Central Finland. The forest growth and wood yields were first optimized with the help of the Stand Management Assistant software and then simulated in Motti for forest carbon pools. The basic idea was to examine the climatic impacts of fibre-based packaging material production and consumption through different forest management and end-use scenarios. Economically optimal forest management practices were chosen as the baseline (1) for the study. In the alternative scenarios, the amount of fibre-based packaging material on the market decreased from the baseline. The reduced pulpwood demand (RPD) scenario (2) follows economically optimal management practices under reduced pulpwood price conditions, while the sawlog scenario (3) also changes the product mix from packaging to sawnwood products. The energy scenario (4) examines the impacts of a pulpwood demand shift from packaging to energy use. The final scenario follows the silvicultural guidelines developed by the Forestry Development Centre Tapio (5). The baseline forest and forest product carbon pools and the avoided emissions from wood use were compared to those under the alternative forest management regimes and end-use scenarios. The comparison of the climatic impacts between scenarios gave an insight into the sustainability of fibre-based packaging materials and the impacts of decreased material supply and substitution. The results show that the use of wood for fibre-based packaging purposes is favorable when considering the climate change mitigation aspects of forestry and wood use. Fibre-based packaging materials efficiently displace fossil carbon emissions by substituting more energy-intensive materials, and they delay biogenic carbon re-emissions to the atmosphere for several months up to years. The RPD and the sawlog scenarios both fared well in the scenario comparison. These scenarios produced relatively more sawnwood, which can displace high amounts of emissions and has high carbon-storing potential due to its long lifecycle. The results indicate the possibility that win-win scenarios exist when shifting production from pulpwood to sawlogs; on some of the stands in the RPD and sawlog scenarios, both carbon pools and avoided emissions increased from the baseline simultaneously.
By contrast, the shift from packaging material to energy use caused the carbon pools and the avoided emissions to diminish from the baseline. Hence the use of virgin fibres for energy purposes rather than as forest industry feedstock biomass should be judged critically when the two uses are alternatives to each other. Managing the stands according to the silvicultural guidelines developed by the Forestry Development Centre Tapio provided the least climatic benefits, showing considerably lower carbon pools and avoided emissions. This seems interesting and worth noting, as the guidelines are the current basis for forest management practices in Finland.
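The scenario bookkeeping described above (forest and product carbon pools plus avoided emissions, compared against the baseline) can be summarized with a toy calculation. The numbers below are invented placeholders; only the comparison logic reflects the abstract.

```python
# Each scenario is summarized by its combined carbon pool and the fossil
# emissions avoided through substitution; scenarios are then ranked by the
# change in (pool + avoided) relative to the economically optimal baseline.
scenarios = {
    "baseline (economically optimal)": {"carbon_pool": 100.0, "avoided": 40.0},
    "reduced pulpwood demand (RPD)":   {"carbon_pool": 104.0, "avoided": 43.0},
    "sawlog":                          {"carbon_pool": 107.0, "avoided": 45.0},
    "energy":                          {"carbon_pool": 96.0,  "avoided": 36.0},
    "Tapio guidelines":                {"carbon_pool": 90.0,  "avoided": 33.0},
}

base = scenarios["baseline (economically optimal)"]
base_total = base["carbon_pool"] + base["avoided"]
for name, s in scenarios.items():
    delta = s["carbon_pool"] + s["avoided"] - base_total
    print(f"{name:35s} climate benefit vs baseline: {delta:+.1f} (placeholder units)")
```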
Abstract:
A large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing the power and bandwidth requirements. The low-cost transformless compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encoding application. A 24-39% reduction in peak bandwidth and a 23-31% reduction in total average power consumption are observed for IBBP sequences.
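The core idea of storing the quantization error separately, so that motion estimation sees a compact lossy reference while motion compensation reconstructs a bit-exact one, can be shown with a small numeric sketch. The block size and quantization step are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(8, 8), dtype=np.int16)   # toy 8x8 reference block

step = 8                                      # quantization step for the lossy copy
lossy = (reference // step) * step            # compact reference used by motion estimation
error = (reference - lossy).astype(np.int8)   # bounded residual: 0 <= e < step

# Motion estimation reads only `lossy` (less bandwidth); motion compensation
# additionally fetches `error` and reconstructs the exact reference, avoiding drift.
reconstructed = lossy + error
assert np.array_equal(reconstructed, reference)
print("max quantization error:", int(error.max()), "< step =", step)
```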
Abstract:
Thanks to advances in sensor technology, today we have many applications (space-borne imaging, medical imaging, etc.) where images of large sizes are generated. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients also need to be stored in high precision, buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zero trees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes a strip-based non-embedded coding scheme which uses a single-pass algorithm, in order to handle high input data rates. (C) 2002 Elsevier Science B.V. All rights reserved.
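The strip-buffer idea can be sketched as follows: only a few rows of coefficients are transformed and handed to the coder at a time, instead of buffering the transform of the whole image. A one-level row-wise Haar transform stands in for the full subband decomposition, and the strip height is an arbitrary choice, not the paper's.

```python
import numpy as np

def haar_rows(strip):
    """One-level Haar transform along rows: averages | differences."""
    even, odd = strip[:, 0::2], strip[:, 1::2]
    return np.hstack([(even + odd) / 2.0, (even - odd) / 2.0])

def code_in_strips(image, strip_height=4):
    """Yield coded strips so that memory only ever holds `strip_height` rows."""
    for r in range(0, image.shape[0], strip_height):
        strip = image[r:r + strip_height].astype(float)
        coeffs = haar_rows(strip)
        # A real coder would zero-tree / bit-plane code `coeffs` here.
        yield np.round(coeffs).astype(np.int32)

image = np.random.default_rng(2).integers(0, 256, size=(16, 16))
strips = list(code_in_strips(image))
print(len(strips), "strips;", strips[0].shape, "coefficients buffered at a time")
```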
Abstract:
In the distributed storage coding problem we consider, data is stored across n nodes in a network, each capable of storing α symbols. It is required that the complete data can be reconstructed by downloading data from any k nodes. There is also the key additional requirement that a failed node be regenerated by connecting to any d nodes and downloading β symbols from each of them. Our goal is to minimize the repair bandwidth dβ. In this paper we provide explicit constructions for several parameter sets of interest.
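For orientation, the sketch below uses the standard regenerating-codes notation from the general literature (n, k, d, α, β), computes the cut-set bound on the file size, and evaluates the two extreme operating points (MSR and MBR). It does not reproduce this paper's explicit constructions.

```python
from fractions import Fraction

def max_file_size(k, d, alpha, beta):
    """Cut-set bound: B <= sum_{i=0}^{k-1} min(alpha, (d - i) * beta)."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

def msr_point(B, k, d):
    """Minimum-storage regenerating point: smallest alpha, then smallest repair bandwidth."""
    alpha = Fraction(B, k)
    beta = Fraction(B, k * (d - k + 1))
    return alpha, beta, d * beta

def mbr_point(B, k, d):
    """Minimum-bandwidth regenerating point: at MBR, alpha = d * beta."""
    beta = Fraction(2 * B, k * (2 * d - k + 1))
    return d * beta, beta, d * beta

B, k, d = 12, 3, 4
print("MSR (alpha, beta, repair bandwidth):", msr_point(B, k, d))
print("MBR (alpha, beta, repair bandwidth):", mbr_point(B, k, d))
print("cut-set bound satisfied:", max_file_size(k, d, *msr_point(B, k, d)[:2]) >= B)
```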
Abstract:
The rapidly depleting petroleum feedstocks and increasing greenhouse gas emissions around the world have necessitated a search for alternative renewable energy sources. Hydrogen, with a molecular weight of 2.016 g/mol and a high chemical energy per unit mass of 142 MJ/kg, has clearly emerged as an alternative to hydrocarbon fuels. Means for safe and cost-effective storage are needed for widespread usage of hydrogen as a fuel. Chemical storage is one of the safer ways to store hydrogen compared to compressed and liquefied hydrogen. It involves storing hydrogen in chemical bonds in molecules and materials, where an on-board reaction is used to release the hydrogen. Ammonia borane (AB, H3N·BH3), with a potential capacity of 19.6 wt%, is considered a very promising solid-state hydrogen storage material. It is thermally stable at ambient temperatures. There are two major routes for the generation of H2 from AB: catalytic hydrolysis/alcoholysis and catalytic thermal decomposition. There has recently been a flurry of research activity on the generation of H2 from AB. The present review gives an overview of our efforts in developing cost-effective nanocatalysts for hydrogen generation from ammonia borane in protic solvents.
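The quoted 19.6 wt% capacity can be checked with a short calculation from standard atomic weights:

```python
# Gravimetric hydrogen capacity of ammonia borane (H3N·BH3):
# mass of the six hydrogen atoms divided by the molar mass of the compound.
M_H, M_B, M_N = 1.008, 10.811, 14.007      # g/mol, standard atomic weights
molar_mass_AB = M_N + M_B + 6 * M_H        # H3N·BH3 contains 6 hydrogen atoms
wt_percent_H = 100 * (6 * M_H) / molar_mass_AB
print(f"H3N·BH3 molar mass = {molar_mass_AB:.2f} g/mol, "
      f"hydrogen capacity = {wt_percent_H:.1f} wt%")   # ~19.6 wt%
```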
Abstract:
There are several ways of storing electrical energy in chemical and physical forms and retrieving it on demand, and ultracapacitors are one of them. This article presents a taxonomy of ultracapacitors and describes various types of rechargeable-battery electrodes that can be used to realize hybrid ultracapacitors in conjunction with a high-surface-area graphitic-carbon electrode. While electrical energy is stored in a battery electrode in chemical form, it is stored in physical form as charge in the electrical double layer formed between the electrolyte and the high-surface-area carbon electrodes. The article discusses various types of hybrid ultracapacitors along with their possible applications.
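The distinction drawn above between physical double-layer storage and chemical battery-electrode storage can be illustrated with the textbook energy expressions; the component values below are arbitrary examples, not figures from the article.

```python
def double_layer_energy_J(capacitance_F, voltage_V):
    """Energy stored physically in an electrical double layer: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_F * voltage_V ** 2

def battery_electrode_energy_J(charge_Ah, voltage_V):
    """Energy stored chemically at a roughly constant electrode potential: E ~ Q * V."""
    return charge_Ah * 3600 * voltage_V      # 1 Ah = 3600 coulombs

print("100 F carbon electrode at 2.7 V:", double_layer_energy_J(100, 2.7), "J")
print("1 Ah battery electrode at 1.2 V:", battery_electrode_energy_J(1.0, 1.2), "J")
```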
Abstract:
A review of various contributions of first-principles calculations in the area of hydrogen storage, particularly for carbon-based sorption materials, is presented. Carbon-based sorption materials are considered promising hydrogen storage media due to their light weight and large surface area. Depending upon the hybridization state of carbon, these materials can bind hydrogen via various mechanisms, including physisorption, Kubas interaction and chemical bonding. While the attractive binding-energy range of Kubas bonding has led to the design of several promising storage systems, in reality experiments remain very few due to materials design challenges that are yet to be overcome. Finally, we discuss the spillover process, which deals with the catalytic chemisorption of hydrogen and is arguably the most promising approach for reversibly storing hydrogen under ambient conditions.
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a single failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
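The repair inefficiency of traditional erasure codes that motivates the paper can be demonstrated with a toy single-parity code: repairing any one node requires downloading a full file's worth of data and running erasure decoding. The framework proposed in the paper is not reproduced here; this only shows the baseline cost.

```python
# Toy erasure code: k = 2 data nodes plus 1 XOR parity node.
def encode(data_a: bytes, data_b: bytes):
    parity = bytes(x ^ y for x, y in zip(data_a, data_b))
    return [data_a, data_b, parity]               # contents of nodes 0, 1, 2

def repair(nodes, failed: int):
    """Rebuild a failed node by downloading from all survivors and XOR-decoding."""
    survivors = [n for i, n in enumerate(nodes) if i != failed]
    downloaded = sum(len(s) for s in survivors)   # a full file's worth of data
    rebuilt = bytes(x ^ y for x, y in zip(*survivors))
    return rebuilt, downloaded

nodes = encode(b"ABCD", b"WXYZ")                  # 8-byte "file"
for failed in range(3):
    rebuilt, downloaded = repair(nodes, failed)
    assert rebuilt == nodes[failed]
    print(f"node {failed}: repaired by downloading {downloaded} bytes "
          f"(file size = 8 bytes)")
```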