802 results for multiple sensor data
Abstract:
Summary: Forests are key ecosystems of the earth and are associated with a large range of functions. Many of these functions are beneficial to humans and are referred to as ecosystem services. Sustainable development requires that all relevant ecosystem services are quantified, managed and monitored equally. Natural resource management therefore targets the services associated with ecosystems. The main hypothesis of this thesis is that the spatial and temporal domains of the relevant services do not correspond to a discrete forest ecosystem. As a consequence, the services are not quantified, managed and monitored in an equal and sustainable manner. The aims of the thesis were therefore to test this hypothesis, establish an improved conceptual approach and provide spatial applications for the relevant land cover and structure variables. The study was carried out in western Switzerland and was based primarily on data from a countrywide landscape inventory. This inventory is part of the third Swiss national forest inventory and assesses continuous landscape variables based on a regular sampling of true colour aerial imagery. In addition, land cover variables were derived from Landsat 5 TM passive sensor data, and land structure variables from active sensor data acquired with a small-footprint laser scanning system. The results confirmed the main hypothesis, as the relevant services did not scale well with the forest ecosystem. Instead, a new conceptual approach for sustainable management of natural resources was described. This concept quantifies the services as a continuous function of the landscape rather than a discrete function of the forest ecosystem. The explanatory landscape variables are therefore called continuous fields, and the forest becomes a dependent, function-driven management unit. Continuous field mapping methods were established for land cover and structure variables. In conclusion, the discrete forest ecosystem is an adequate planning and management unit; however, monitoring the state of and trends in the sustainability of services requires them to be quantified as a continuous function of the landscape. Sustainable natural resource management iteratively combines the ecosystem and gradient approaches.
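As an illustration of the continuous-field idea, here is a minimal sketch (with hypothetical array names and thresholds, not the thesis's actual processing chain) of turning small-footprint laser scanning returns into a per-cell canopy cover fraction rather than a discrete forest mask:

```python
import numpy as np

def canopy_cover_field(x, y, z, cell=25.0, height_thresh=3.0):
    """Grid LiDAR return heights (hypothetical arrays x, y, z in metres)
    into a continuous canopy-cover fraction per cell, instead of a
    discrete forest / non-forest classification."""
    cols = ((x - x.min()) // cell).astype(int)
    rows = ((y - y.min()) // cell).astype(int)
    shape = (rows.max() + 1, cols.max() + 1)
    total = np.zeros(shape)
    above = np.zeros(shape)
    np.add.at(total, (rows, cols), 1)                  # all returns per cell
    np.add.at(above, (rows, cols), z > height_thresh)  # returns above the canopy threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        return above / total   # fraction in [0, 1]; NaN where a cell has no returns
```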
Abstract:
The goal of this Master's thesis is to study how item data management can improve cost efficiency in a project-driven supply chain. The target company is Konecranes Heavy Lifting Oy, a subsidiary of Konecranes Oyj. Item data management is closely related to product data management. The theoretical part discusses the factors of the supply chain environment, the challenges of modularity and customer-specific products, and the effects of information flow on different functions. The company part compares two group-level business areas on the basis of their strategic choices, product modularity, and the item information flowing through the order-delivery process. As its result, the thesis provides guidelines for consolidating the item base, for identifying and defining strategic items, for item management, and for placing master data within the information system environment.
Abstract:
Inhibition of the essential chaperone Hsp90 with drugs causes a global perturbation of protein folding and the depletion of direct substrates of Hsp90, also called clients. Ubiquitination and proteasomal degradation play a key role in cellular stress responses, but the impact of Hsp90 inhibition on the ubiquitinome has not been characterized on a global scale. We used stable isotope labeling and antibody-based peptide enrichment to quantify more than 1500 protein sites modified with a Gly-Gly motif, the remnant of ubiquitination, in human T-cells treated with an Hsp90 inhibitor. We observed rapid changes in GlyGly-modification sites, with strong increases for some Hsp90 clients but also decreases for a majority of cellular proteins. A comparison with changes in total protein levels and protein synthesis and decay rates from a previous study revealed a complex picture with different regulatory patterns observed for different protein families. Overall the data support the notion that for Hsp90 clients GlyGly-modification correlates with targeting by the ubiquitin-proteasome system and decay, while for other proteins levels of GlyGly-modification appear to be mainly influenced by their synthesis rates. Therefore a correct interpretation of changes in ubiquitination requires knowledge of multiple parameters. Data are available via ProteomeXchange with identifier PXD001549. BIOLOGICAL SIGNIFICANCE: Proteostasis, i.e. the capacity of the cell to maintain proper synthesis and maturation of proteins, is a fundamental biological process and its perturbations have far-reaching medical implications e.g. in cancer or neurodegenerative diseases. Hsp90 is an essential chaperone responsible for the correct maturation and stability of a number of key proteins. Inhibition of Hsp90 triggers a global stress response caused by accumulation of misfolded chains, which have to be either refolded or eliminated by protein degradation pathways such as the Ubiquitin-Proteasome System (UPS). We present the first global assessment of the changes in the ubiquitinome, the subset of ubiquitin-modified proteins, following Hsp90 inhibition in human T-cells. The results provide clues on how cells respond to a specific proteostasis challenge. Furthermore, our data also suggest that basal ubiquitination levels for most proteins are influenced by synthesis rates. This has broad significance as it implies that a proper interpretation of data on ubiquitination levels necessitates simultaneous knowledge of other parameters.
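A toy sketch of the interpretation step described above, using invented log2 ratios and a deliberately simplified decision rule (the study's actual statistical treatment is not reproduced here):

```python
# Hypothetical per-protein measurements (log2 inhibitor / control ratios).
sites = [
    # (protein, gg_site_ratio, protein_level_ratio, synthesis_rate_ratio)
    ("hsp90_client_A",  2.1, -1.5,  0.1),
    ("housekeeping_B", -0.8, -0.1, -0.9),
]

for name, gg, level, synth in sites:
    # Simplified reading of the abstract's conclusion: a GlyGly increase
    # combined with a protein-level decrease suggests UPS targeting and decay,
    # whereas a GlyGly change that simply tracks the synthesis rate suggests
    # the ubiquitination signal is driven by synthesis.
    if gg > 1 and level < 0:
        call = "degradation-driven (client-like)"
    elif abs(gg - synth) < 0.5:
        call = "synthesis-driven"
    else:
        call = "unclassified"
    print(f"{name}: {call}")
```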
Abstract:
The present study builds on a previous proposal for assigning probabilities to the outcomes computed with different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values and reflect how likely the result would be if there were no intervention effect. The current study explores how well different metrics are translated into p values in the context of simulated data. Furthermore, two published multiple-baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored; two ways of combining probabilities are used: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, while the results for the regression-based procedure diverge due to some undesirable features of its performance. These p values, both when taken individually and when combined, were well aligned with the effectiveness judgments for the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measure into a common metric and for using these probabilities as additional evidence on the importance of behavioral change, complementing visual analysis and professionals' judgments.
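A minimal sketch of the two combination rules named above, assuming hypothetical per-tier p values and weights (the exact weighting scheme and test settings used in the study are not specified here):

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical per-tier p values from a multiple-baseline design, one per
# baseline, each obtained by comparing the primary indicator to tabulated
# reference values.
p_values = np.array([0.03, 0.08, 0.12, 0.02])
weights  = np.array([20, 15, 10, 25])   # e.g. number of measurement occasions per tier

# 1) Weighted average of the probabilities.
p_weighted = float(np.average(p_values, weights=weights))

# 2) Binomial test: count how many tiers fall below a nominal alpha and ask
#    whether that many "significant" results is unlikely under chance alone.
alpha = 0.05
k = int((p_values < alpha).sum())
p_binom = binomtest(k, n=len(p_values), p=alpha, alternative="greater").pvalue

print(p_weighted, p_binom)
```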
Abstract:
Many industrial machine vision and pattern recognition problems are very similar, so largely the same components could be reused when designing prototype applications. Object-oriented application frameworks offer an excellent way to speed up software development by improving reusability. This both enables wider use of machine vision applications and saves costs. This thesis presents a machine vision application framework whose basic architecture is a pipeline. The top-level structure consists of a sensor, data processing operations, a feature extractor and a classifier. In addition to the framework itself, a set of image processing and pattern recognition operations has been implemented. The framework clearly speeds up programming work and makes it easier to add new image processing operations.
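A minimal sketch, in Python and not the framework's own implementation, of the pipeline architecture described above (sensor, data processing operations, feature extractor, classifier); all names are illustrative:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class VisionPipeline:
    """Hypothetical top-level structure: data flows sensor -> operations ->
    feature extractor -> classifier, mirroring the pipeline described above."""
    sensor: Callable[[], Any]                                  # acquires one frame
    operations: List[Callable[[Any], Any]] = field(default_factory=list)
    feature_extractor: Callable[[Any], Any] = None
    classifier: Callable[[Any], Any] = None

    def run_once(self) -> Any:
        data = self.sensor()
        for op in self.operations:        # e.g. filtering, thresholding, segmentation
            data = op(data)
        features = self.feature_extractor(data)
        return self.classifier(features)
```

A concrete application would plug in, for example, a camera driver as the sensor, a few filtering operations, a texture-feature extractor and a nearest-neighbour classifier, which is the kind of reuse the framework is meant to enable.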
Abstract:
One of the most relevant activities of the Brazilian economy is agriculture. Among the main crops in Brazil, rice is of high relevance. The state of Rio Grande do Sul, in Southern Brazil, is responsible for 68.7% of domestic production (IBGE, 2013). The goal of this study was to develop a low-cost methodology with a regional scope to identify areas suitable for irrigated rice cropping in this state, using the spectro-temporal behavior of a vegetation index derived from MODIS images together with the HAND model. The study area was the southern half of the state. Using the HAND model, flood-prone areas were mapped to help identify irrigated rice cultivation. We used multi-temporal vegetation-index images from the MODIS sensor covering the period from August 2001 to May 2012. To assess the results, we used data collected in the field and cropped-area information from IBGE. The results showed that the proposed methodology was satisfactory, with a Kappa of 0.92 and an overall accuracy of 98.18%. In summary, MODIS sensor data and flood-area delineation by means of the HAND model produced an estimate of the irrigated rice area for the study region.
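The two reported accuracy figures can be reproduced from a confusion matrix with the standard formulas; the matrix below is hypothetical and only illustrates the computation:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return p_observed, kappa

# Hypothetical 2x2 matrix (rice / non-rice); the thesis itself reports
# kappa = 0.92 and overall accuracy = 98.18% for its own reference data.
print(accuracy_and_kappa([[180, 5], [4, 311]]))
```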
Abstract:
Feature extraction is the part of pattern recognition where the sensor data are transformed into a form that is more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the later stages of the system while preserving the information that is essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can serve as a means of detecting features that are invariant to certain types of illumination change. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play the main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which the LBPs are seen as combinations of n-tuples is also presented.
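For reference, a minimal sketch of the basic 3x3 LBP operator that the thesis builds on (the standard definition; the MIPA4k implementation itself is a hardware design and is not reproduced here):

```python
import numpy as np

def lbp8(image, r, c):
    """Basic 3x3 Local Binary Pattern code for pixel (r, c): each neighbour
    is thresholded against the centre and the resulting bits are packed into
    an 8-bit code, which is invariant to any monotonic grey-level change."""
    centre = image[r, c]
    # Clockwise neighbour offsets starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr, c + dc] >= centre:
            code |= 1 << bit
    return code

img = np.array([[5, 9, 1],
                [4, 6, 7],
                [2, 3, 8]])
print(lbp8(img, 1, 1))   # single 8-bit texture code for the centre pixel
```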
Abstract:
In this study, an infrared thermography based sensor was studied with regard to its usability and the accuracy of its data as a weld penetration signal in gas metal arc welding. The object of the study was to evaluate a specific sensor type that measures thermography from the solidified weld surface. The purpose of the study was to provide expert data for developing a sensor system for adaptive metal active gas (MAG) welding. Welding experiments, with the chosen process variables and the recorded thermal profiles, were saved to a database for further analysis. To perform the analysis with a reasonable number of experiments, the process parameter variables were altered stepwise by at least 10 %. The effects of the process variables on weld penetration and on the thermography itself were then considered. The SFS-EN ISO 5817 (2014) standard was applied to classify the quality of the experiments. As a final step, a neural network was trained on the experiments. The experiments show that the studied thermography sensor and the neural network can be used for controlling full penetration, although they have minor limitations, which are presented in the results and discussion. The results are consistent with previous studies and experiments found in the literature.
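A minimal sketch of the final step, assuming hypothetical thermal-profile features and labels and using a generic multilayer perceptron (the thesis does not specify its network architecture or feature set):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature vectors extracted from recorded thermal profiles
# (e.g. peak temperature, cooling-rate estimate, profile width) together with
# a binary "full penetration" label judged against SFS-EN ISO 5817.
X = np.array([[1520.0, 41.0, 8.2],
              [1485.0, 39.5, 7.9],
              [1390.0, 55.0, 6.1],
              [1402.0, 52.3, 6.4]])
y = np.array([1, 1, 0, 0])

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[1500.0, 43.0, 8.0]]))   # predicted penetration class
```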
Abstract:
The Internet of Things (IoT) is revolutionizing the world we live in, much as the Internet and the web did a few decades ago. It is changing how we interact with the things surrounding us. Electronic health (eHealth) and remote patient monitoring are ways of applying these technological improvements to healthcare. IoT has many applications in eHealth; for example, it opens the door to providing healthcare in remote areas of the world where healthcare through traditional hospital systems cannot be provided. To connect these new eHealth IoT systems with existing healthcare information systems, we can use the interoperability standards commonly used in healthcare information systems. In this thesis we implemented an eHealth IoT system based on the Health Level 7 (HL7) interoperability standard for continuous data transmission. There is little previous work on implementing HL7 for continuous sensor data transmission; some of it was limited to sensors that are not continuous in nature, and some of it presented only theoretical architectures. This thesis aims to show that it is possible to implement an eHealth IoT system using sensors that require continuous data transmission, such as respiratory sensors, and to connect it semantically with an existing eHealth information system by using the HL7 interoperability standard. Such a system will be beneficial for patients who require continuous personal health monitoring, including elderly people and patients whose health needs to be monitored constantly. For the implementation, HL7 v2.5 was selected due to its ease of implementation and small message size. We selected open source technologies because of their open licenses and large developer communities. We also review the most efficient technologies available in each layer of an eHealth IoT system and propose an efficient system.
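A minimal sketch of what a single continuous-monitoring observation could look like as a simplified HL7 v2.5 ORU^R01 message; the application names, patient identifier and LOINC code are illustrative, not taken from the thesis implementation:

```python
from datetime import datetime

def build_oru_r01(patient_id, value, units="/min",
                  obs_id="9279-1^Respiratory rate^LN"):
    """Assemble a minimal, simplified HL7 v2.5 ORU^R01 message carrying one
    numeric observation from a respiratory sensor. Many optional fields are
    left empty; real integrations would populate them per the local profile."""
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|SENSOR_GW|HOME|EHR_APP|HOSPITAL|{ts}||ORU^R01|{ts}|P|2.5",
        f"PID|||{patient_id}",
        f"OBR|1|||{obs_id}",
        f"OBX|1|NM|{obs_id}||{value}|{units}|||||F",
    ]
    return "\r".join(segments)   # HL7 v2 segments are separated by <CR>

print(build_oru_r01("12345", 17))
```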
Abstract:
This thesis has discussed the development of a new metal ion doped panchromatic photopolymer for various holographic applications. A high-quality panchromatic holographic recording material with high diffraction efficiency, high photosensitivity and high spatial resolution is one of the key factors for the successful recording of true colour holograms. The capability of the developed material for multicolour holography can be investigated further. In the present work, multiplexing studies were carried out using a He-Ne laser (632.8 nm). Multiplexing can also be done using shorter-wavelength lasers such as the Ar+ ion (488 nm) and frequency-doubled Nd:YAG (532 nm) lasers, so as to increase the storage capacity. The photopolymer film studied had a thickness of only 130 µm. Films with high thickness (~500 µm) are highly desirable for competitive holographic memories; hence thicker films can be fabricated and efforts can be made to record more holograms or gratings in the material. In the present study, attempts were made to record a data page in silver doped MBPVA/AA photopolymer film. The image of a checkerboard pattern was recorded in the film and could be reconstructed with good image fidelity. Efforts can be made to determine the bit error rate (BER), which provides a quantitative measure of the quality of the reconstructed image. Multiple holographic data pages can also be recorded in the material using different multiplexing techniques. Holographic optical elements (HOEs) are widely used in optical sensors, optical information processing, fibre optics, optical scanners and solar concentrators. The suitability of the developed film for recording holographic optical elements such as lenses, beam splitters and filters can be studied. The suitability of a reflection hologram recorded in an acrylamide based photopolymer for visual indication of environmental humidity has been reported; studies can be done to optimize the film composition for recording reflection holograms. An improvement in the spatial resolution of PVA/acrylamide based photopolymer by using a low molecular-weight poly(vinyl alcohol) binder was recently reported; the effect of the molecular weight of the binder matrix on the holographic properties of the developed photopolymer system can be investigated. Incorporation of nanoparticles into a photopolymer system is reported to enhance the resolution and improve the dimensional stability of the system; hence efforts can be made to incorporate silver nanoparticles into the photopolymer and to study their influence on the holographic properties. This thesis was a small venture towards the realization of a big goal: a competent holographic recording material with excellent properties for practical holographic applications. As a result of the present research, we could successfully develop an efficient panchromatic photopolymer system and demonstrate its suitability for recording transmission holograms and a holographic data page. The developed photopolymer system is expected to have significant applications in the fields of true-colour display holography, wavelength-multiplexed holographic storage, and holographic optical elements. Highly concentrated and determined effort has yet to be put forth for this expectation to become a reality.
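Two of the figures of merit mentioned above have standard textbook definitions, written out here for reference (definitions only, not values from the thesis):

```latex
% Diffraction efficiency: the fraction of the incident beam power that is
% redirected into the reconstructed (first diffracted) order.
\eta \;=\; \frac{I_{\mathrm{diffracted}}}{I_{\mathrm{incident}}}

% Bit error rate of a reconstructed data page: wrongly decoded bits
% divided by the total number of bits in the page.
\mathrm{BER} \;=\; \frac{N_{\mathrm{error}}}{N_{\mathrm{total}}}
```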
Abstract:
Biometrics deals with the physiological and behavioral characteristics of an individual to establish identity. Fingerprint based authentication is the most advanced biometric authentication technology. The minutiae based fingerprint identification method offers a reasonable identification rate. The minutiae feature map consists of about 70-100 minutiae points, and matching accuracy drops as the size of the database grows. Hence it is essential to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global singularity based fingerprint representation is proposed. The fingerprint baseline, which is the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vector consists of the polygon's angles, sides, area and type, and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae based recognition method in terms of computation time, receiver operating characteristic (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used to identify a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation based artificial neural network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with a VQ based minimum Euclidean distance classifier. Biometric systems that use a single modality are usually affected by problems such as noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. Multi-finger feature-level fusion based fingerprint recognition is therefore developed, and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint and speech based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
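A minimal sketch of score-level fusion with min-max normalisation and a weighted sum, which is one common scheme; the thesis does not state its exact fusion rule, and all scores below are invented:

```python
import numpy as np

def min_max_norm(scores):
    """Normalise matcher scores to [0, 1] so different modalities are comparable."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def fuse(fingerprint_scores, speech_scores, w=0.5):
    """Weighted-sum score-level fusion of two normalised matcher outputs."""
    return w * min_max_norm(fingerprint_scores) + (1 - w) * min_max_norm(speech_scores)

# Hypothetical match scores for four candidate identities.
fp = [0.91, 0.40, 0.35, 0.20]
sp = [14.2,  6.1,  5.8,  3.0]
fused = fuse(fp, sp, w=0.6)
print(fused, "-> accept" if fused.max() > 0.8 else "-> reject")
```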
Abstract:
This thesis develops an approach to the construction of multidimensional stochastic models for intelligent systems exploring an underwater environment. It describes methods for building models by a three-dimensional spatial decomposition of stochastic, multisensor feature vectors. New sensor information is incrementally incorporated into the model by stochastic backprojection. Error and ambiguity are explicitly accounted for by blurring a spatial projection of remote sensor data before incorporation. The stochastic models can be used to derive surface maps or other representations of the environment. The methods are demonstrated on data sets from multibeam bathymetric surveying, towed sidescan bathymetry, towed sidescan acoustic imagery, and high-resolution scanning sonar aboard a remotely operated vehicle.
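A minimal sketch of the incorporation step described above, assuming a hypothetical voxel grid and using a Gaussian blur to stand in for the sensor error and ambiguity model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical 3D evidence grid over the surveyed volume.
grid = np.zeros((64, 64, 32))

def incorporate(grid, voxel_index, weight, sigma=1.5):
    """Backproject one remote-sensing observation into the grid: place its
    weight at the estimated voxel, blur it to represent sensor error and
    ambiguity, then accumulate it into the running stochastic model."""
    obs = np.zeros_like(grid)
    obs[voxel_index] = weight
    grid += gaussian_filter(obs, sigma=sigma)
    return grid

grid = incorporate(grid, (10, 20, 5), weight=1.0)   # e.g. one sonar return
```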
Abstract:
Fine-grained parallel machines have the potential for very high speed computation. To program massively-concurrent MIMD machines, programmers need tools for managing complexity. These tools should not restrict program concurrency. Concurrent Aggregates (CA) provides multiple-access data abstraction tools, Aggregates, which can be used to implement abstractions with virtually unlimited potential for concurrency. Such tools allow programmers to modularize programs without reducing concurrency. I describe the design, motivation, implementation and evaluation of Concurrent Aggregates. CA has been used to construct a number of application programs. Multi-access data abstractions are found to be useful in constructing highly concurrent programs.
Abstract:
This report explores methods for determining the pose of a grasped object using only limited sensor information. The problem of pose determination is to find the position of an object relative to the hand. The information is useful when grasped objects are being manipulated. The problem is hard because of the large space of grasp configurations and the large amount of uncertainty inherent in dexterous hand control. By studying limited sensing approaches, the problem's inherent constraints can be better understood. This understanding helps to show how additional sensor data can be used to make recognition methods more effective and robust.
Abstract:
The SOS (Sensor Observation Service) protocol is an OGC specification within the Sensor Web Enablement (SWE) initiative that allows access to observations and data from heterogeneous sensors in a standard way. Within the gvSIG project, a line of research has been opened around SWE, and two SOS client prototypes currently exist, for gvSIG and gvSIG Mobile. The specification used to describe the measurements provided by sensors is Observation & Measurement (O&M), and the description of the sensor metadata (location, ID, measured phenomena, data processing, etc.) is obtained from the SensorML schema. The following set of operations has been implemented: GetCapabilities for the service description, DescribeSensor for accessing sensor metadata, and GetObservation for retrieving observations. In the desktop gvSIG prototype, data from the different groups of sensors ("offerings") can be accessed by adding them to the map as new layers. The procedures or sensors included in an offering are presented as layer elements that can be plotted on the map. The observations (GetObservation) of these sensors can be accessed by filtering the data by time interval and by the property of the observed phenomenon. The information can be displayed on the map as charts for easier interpretation, with the possibility of comparing data from different sensors. The gvSIG Mobile client prototype follows the same philosophy as the desktop client, each offering being a new layer. Sensor observations can be displayed on the screen of the mobile device, and thematic maps can be produced, with the aim of facilitating the interpretation of the data.
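A minimal sketch of what a key-value-pair GetObservation request against such a service could look like; the endpoint, offering identifier and observed property below are placeholders, not values from the gvSIG prototypes:

```python
import urllib.parse

# Hypothetical SOS endpoint; the parameter names follow the OGC SOS 1.0.0
# key-value-pair binding for the operations listed above.
BASE = "http://example.org/sos"

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "WEATHER_STATIONS",                              # illustrative offering id
    "observedProperty": "urn:ogc:def:phenomenon:OGC:temperature",
    "eventTime": "2009-01-01T00:00:00Z/2009-01-02T00:00:00Z",    # time-interval filter
    "responseFormat": 'text/xml;subtype="om/1.0.0"',
}

url = BASE + "?" + urllib.parse.urlencode(params)
print(url)   # sending this request (e.g. with urllib.request.urlopen) would
             # return an O&M XML document that a client can plot as a chart or layer
```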