946 results for Application techniques


Relevance:

40.00%

Publisher:

Abstract:

Although visualization in the field of dentistry shares some requirements with the field of medicine, differences in goals demand specific approaches. This paper reports on the implementation of two fundamentally different approaches to the reconstruction of structures from planar cross sections and their application to dentistry data. One approach is an implementation of a distance-based sampling technique; the other is a new algorithm based on the Delaunay triangulation. Both were tested using contour data of teeth, and the results are compared here in the light of the target applications, which are teaching and training in dentistry, as well as the simulation of dental procedures and diseases. Widely reported problems encountered with local reconstruction methods such as marching cubes are clearly illustrated for these cases, and a very satisfactory alternative is given. © 2000 SPIE and IS&T.
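
A minimal sketch of the contour-stitching idea behind such reconstructions is given below. It builds a greedy shortest-diagonal triangle strip between two consecutive planar contours, using invented circular contours as stand-ins for tooth slices; it is not the paper's distance-based sampling or Delaunay-based algorithm, only an illustration of turning cross sections into a surface mesh.

```python
"""Greedy stitching of two consecutive planar contours into a triangle strip.
Illustrative only; contour data are synthetic."""
import numpy as np

def stitch_contours(lower, upper):
    """lower, upper: (N,3) and (M,3) arrays of ordered points on two parallel
    cross-section planes. Returns triangles as lists of (level, index) pairs."""
    tris = []
    i, j = 0, 0
    # Walk around both contours, always closing the shorter diagonal.
    while i < len(lower) - 1 or j < len(upper) - 1:
        adv_lower = i < len(lower) - 1
        adv_upper = j < len(upper) - 1
        if adv_lower and adv_upper:
            adv_lower = (np.linalg.norm(lower[i + 1] - upper[j])
                         <= np.linalg.norm(lower[i] - upper[j + 1]))
            adv_upper = not adv_lower
        if adv_lower:
            tris.append([(0, i), (0, i + 1), (1, j)])
            i += 1
        else:
            tris.append([(0, i), (1, j + 1), (1, j)])
            j += 1
    return tris

# Toy example: two circular contours of different radius on adjacent slices.
t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
lower = np.c_[3 * np.cos(t), 3 * np.sin(t), np.zeros_like(t)]
upper = np.c_[2 * np.cos(t), 2 * np.sin(t), np.ones_like(t)]
print(len(stitch_contours(lower, upper)), "triangles")
```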

Relevance:

40.00%

Publisher:

Abstract:

This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need to install and configure software on each computer. Several mechanisms were created: a mapping of the resources used by the application, to improve software distribution and startup; a virtualization middleware that provides all the resources needed for the software to execute; an asynchronous P2P transport used to optimize distribution over the network; and off-line support, so that the user can run the application even when the server is unavailable or the client is disconnected from the network. © Springer-Verlag Berlin Heidelberg 2010.
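
As an illustration only, the sketch below mimics the launch-time decision such a system has to make: serve resources already in the local cache, schedule fetches (in the paper, over the asynchronous P2P transport) when the network is reachable, and rely on the cache when off-line. The manifest layout, function names and paths are invented and are not the authors' API.

```python
"""Illustrative launch-planning sketch for an on-demand deployment system."""
import os

def plan_startup(manifest, cache_dir, online):
    """manifest: {"resources": [relative paths ordered by startup priority]}.
    Returns (ready, missing): resources servable from the local cache and those
    that still have to be fetched from the server or from peers."""
    ready, missing = [], []
    for res in manifest["resources"]:
        if os.path.exists(os.path.join(cache_dir, res)):
            ready.append(res)          # already streamed on an earlier run
        elif online:
            missing.append(res)        # schedule an asynchronous fetch
        else:
            raise RuntimeError(f"off-line and resource not cached: {res}")
    return ready, missing

# Toy usage with an invented manifest; a real system would derive it from the
# mapping of resources actually touched by the application at startup.
manifest = {"resources": ["bin/app.exe", "lib/core.dll", "data/help.pdf"]}
print(plan_startup(manifest, cache_dir="/tmp/appcache", online=True))
```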

Relevance:

40.00%

Publisher:

Abstract:

Polymerase chain reaction techniques were developed and applied to identify DNA from more than 40 species of prey contained in the fecal (scat) soft-part matrix collected at terrestrial sites used by Steller sea lions (Eumetopias jubatus) in British Columbia and the eastern Aleutian Islands, Alaska. Sixty percent more fish and cephalopod prey were identified by morphological analyses of hard parts than by DNA analysis of soft parts (hard parts identified higher relative proportions of Ammodytes sp., Cottidae, and certain Gadidae). DNA identified 213 prey occurrences, of which 75 (35%) were undetected by hard parts (mainly Salmonidae, Pleuronectidae, Elasmobranchii, and Cephalopoda), thereby increasing species occurrences by 22% overall and species richness in 44% of cases (when comparing the 110 scats that amplified prey DNA). Prey composition was identical within only 20% of scats. Overall, diet composition derived from both identification techniques combined did not differ significantly from hard-part identification alone, suggesting that past scat-based diet studies have not missed major dietary components. However, significant differences in relative diet contributions across scats (as identified using the two techniques separately) reflect passage-rate differences between hard and soft digesta material and highlight certain hypothesized limitations of conventional morphology-based methods (e.g., differences in resistance to digestion, hard-part regurgitation, partial and secondary prey consumption), as well as potential technical issues (e.g., primer efficiency and sensitivity, and scat subsampling protocols). DNA analysis of salmon occurrence (from scat soft-part matrix and 238 archived salmon hard parts) provided species-level taxonomic resolution that could not be obtained by morphological identification and showed that Steller sea lions were primarily consuming pink (Oncorhynchus gorbuscha) and chum (Oncorhynchus keta) salmon. Notably, DNA from Atlantic salmon (Salmo salar) that likely originated from a distant fish farm was also detected in two scats from one site in the eastern Aleutian Islands. Overall, molecular techniques are valuable for identifying prey in the fecal remains of marine predators. Combining DNA and hard-part identification will effectively alleviate certain predicted biases and will ultimately enhance measures of diet richness, fisheries interactions (especially salmon-related ones), and the ecological role of pinnipeds and other marine predators, to the benefit of marine wildlife conservationists and fisheries managers.
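
For illustration only, the toy sketch below shows how per-scat agreement and the gain in species richness from combining the two identification techniques can be tallied; the prey sets are invented and are not the study's data.

```python
"""Toy tally of agreement between hard-part and DNA prey identifications per scat."""
hard_parts = [{"Gadidae", "Ammodytes"}, {"Salmonidae"}, {"Cottidae", "Gadidae"}]
dna        = [{"Gadidae", "Ammodytes"}, {"Salmonidae", "Pleuronectidae"}, {"Cephalopoda"}]

identical = sum(h == d for h, d in zip(hard_parts, dna))        # same composition
richer    = sum(len(h | d) > len(h) for h, d in zip(hard_parts, dna))  # DNA adds taxa
print(f"identical composition in {identical}/{len(dna)} scats, "
      f"DNA raised richness in {richer}/{len(dna)} scats")
```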

Relevance:

40.00%

Publisher:

Abstract:

Lipolysis and oxidation of lipids in foods are the major biochemical and chemical processes that cause food quality deterioration, leading to the characteristic, unpalatable odour and flavour known as rancidity. In addition to unpalatability, rancidity may give rise to toxic levels of certain compounds such as aldehydes, hydroperoxides, epoxides and cholesterol oxidation products. In this PhD study, chromatographic and spectroscopic techniques were employed to determine the degree of rancidity in different animal products and its relationship with technological parameters such as dietary fat sources, packaging, processing and storage conditions. To achieve this goal, capillary gas chromatography (CGC) was employed to determine not only the fatty acid profile but also, after solid phase extraction, the amounts of free fatty acids (FFA), diglycerides (DG), sterols (cholesterol and phytosterols) and cholesterol oxidation products (COPs). UV/VIS absorbance spectroscopy was applied to determine hydroperoxides, the primary oxidation products, and to quantify secondary oxidation products. Most of the foods analysed in this study were meat products. Indeed, lipid oxidation is a major deterioration reaction in meat and meat products and results in adverse changes in the colour, flavour and texture of meat. The development of rancidity has long been recognized as a serious problem during meat handling, storage and processing. For a vegetable cream, a study of the lipid fraction and of the development of rancidity during storage was carried out to evaluate its shelf life and some nutritional features, such as the saturated/unsaturated fatty acid ratio and the phytosterol content. Then, in line with the growing interest in functional foods in recent years, a new electrophoretic method was optimized and compared with HPLC to check the quality of a beehive product, royal jelly. This manuscript reports the main results obtained in five activities, briefly summarized as follows: 1) comparison between HPLC and a new electrophoretic method for the evaluation of the authenticity of royal jelly; 2) study of the lipid fraction of a vegetable cream under different storage conditions; 3) study of lipid oxidation in minced beef during storage under modified atmosphere packaging, before and after cooking; 4) evaluation of the influence of dietary fat and processing on the lipid fraction of chicken patties; 5) study of the lipid fraction of typical Italian and Spanish pork dry sausages and cured hams.

Relevance:

40.00%

Publisher:

Abstract:

This PhD thesis describes the application of instrumental analytical techniques suited to the study of food products that are fundamental to the human diet, such as extra virgin olive oil and dairy products. These widely marketed products have high nutritional value and are increasingly credited with healthy properties, although their lipid fraction may contain some components unfavourable to human health. The research activity was structured in the following investigations:
“Comparison of different techniques for trans fatty acid analysis”
“Fatty acid analysis of outcrop milk cream samples, with particular emphasis on the content of conjugated linoleic acid (CLA) and trans fatty acids (TFA), using a 100 m high-polarity capillary column”
“Evaluation of the oxidized fatty acid (OFA) content during Parmigiano-Reggiano cheese ripening”
“Direct analysis of 4-desmethyl sterols and two dihydroxy triterpenes in saponified vegetable oils (olive oil and others) using liquid chromatography-mass spectrometry”
“Quantitation of long-chain polyunsaturated fatty acids (LC-PUFA) in base infant formulas by gas chromatography, and evaluation of the accuracy of the blending phases during their preparation”
“Fatty acid composition of Parmigiano-Reggiano cheese samples, with emphasis on trans isomers (TFA)”

Relevance:

40.00%

Publisher:

Abstract:

Satellite remote sensing has proved to be an effective support for the timely detection and monitoring of marine oil pollution, which is mainly due to illegal ship discharges. In this context, we have developed a new methodology and technique for optical oil spill detection that makes use of MODIS L2 and MERIS L1B top-of-atmosphere (TOA) reflectance imagery, for the first time in a highly automated way. The main idea is to combine the wide swaths and short revisit times of optical sensors with the SAR observations generally used in oil spill monitoring. This arises from the need to overcome the reduced coverage and long revisit time of SAR over the monitored area. It is now possible thanks to the higher spatial resolution of MODIS and MERIS with respect to older sensors (250-300 m vs. 1 km), which allows the identification of smaller spills deriving from illicit discharges at sea. The procedure to obtain identifiable spills in optical reflectance images involves the removal of natural oceanic and atmospheric variability, in order to enhance the oil-water contrast; image clustering, whose purpose is to segment any oil spills possibly present in the image; and, finally, the application of a set of criteria to eliminate features that merely resemble spills (look-alikes). The final result is a classification of oil spill candidate regions by means of a score based on the above criteria.
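
The sketch below mimics that processing chain on a synthetic scene rather than real MODIS/MERIS data: background removal to enhance the oil-water contrast, clustering to segment dark features, and simple geometric criteria to discard look-alikes. It is a schematic stand-in for the operational algorithm, and all thresholds are hypothetical.

```python
"""Schematic optical oil spill detection chain on a synthetic reflectance image."""
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1) Synthetic "reflectance" scene: smooth oceanic background + noise + an elongated dark slick.
y, x = np.mgrid[0:200, 0:200]
scene = 0.05 + 0.01 * np.sin(x / 40.0) + 0.002 * rng.standard_normal((200, 200))
scene[(abs(y - 100) < 4) & (abs(x - 100) < 40)] -= 0.02          # spill-like feature

# 2) Enhance oil-water contrast by removing the low-frequency natural variability.
residual = scene - ndimage.median_filter(scene, size=31)

# 3) Cluster pixels on the residual and keep the darkest cluster as candidate oil.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(residual.reshape(-1, 1))
dark = np.argmin(km.cluster_centers_.ravel())
mask = (km.labels_ == dark).reshape(residual.shape)

# 4) Score connected dark regions with toy criteria (area, elongation) to reject look-alikes.
regions, n = ndimage.label(mask)
for region_id in range(1, n + 1):
    ys, xs = np.nonzero(regions == region_id)
    area = ys.size
    elongation = (np.ptp(xs) + 1) / (np.ptp(ys) + 1)
    if area > 100 and elongation > 3:                             # hypothetical thresholds
        print(f"candidate spill: area={area} px, elongation={elongation:.1f}")
```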

Relevance:

40.00%

Publisher:

Abstract:

The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology based upon the adoption of non-ideal joints at interface locations and on the inclusion of component flexibility is developed: both are necessary if one wants to capture the dynamic effects that arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: angular-contact ball bearings are modelled according to a five-DOF nonlinear scheme to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model is implemented to provide enhanced predictions at the conrod big-end locations. Concerning the second matter, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations suitable for the subsequent multibody analyses. A particular component mode selection procedure is implemented, based on the concept of Effective Interface Mass, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques. The advantages over the conventional frequency-based truncation approach are discussed.
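
As a reference for the reduction step, the sketch below implements a textbook Craig-Bampton transformation on a generic mass/stiffness pair rather than the actual crankshaft or conrod FE models; the Effective Interface Mass mode selection and the Modal Truncation Augmentation procedure are not shown.

```python
"""Textbook Craig-Bampton reduction: boundary DOFs kept physically, interior DOFs
represented by constraint modes plus truncated fixed-interface normal modes."""
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary, n_modes):
    n = M.shape[0]
    interior = np.setdiff1d(np.arange(n), boundary)
    ii = np.ix_(interior, interior)
    ib = np.ix_(interior, boundary)

    # Constraint (static) modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(K[ii], K[ib])
    # Fixed-interface normal modes of the interior partition, truncated to n_modes.
    _, Phi = eigh(K[ii], M[ii])
    Phi = Phi[:, :n_modes]

    # Reduction basis T, then project the system matrices.
    nb = len(boundary)
    T = np.zeros((n, nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[np.ix_(interior, np.arange(nb))] = Psi
    T[np.ix_(interior, np.arange(nb, nb + n_modes))] = Phi
    return T.T @ M @ T, T.T @ K @ T, T

# Toy 6-DOF spring-mass chain with the two end DOFs treated as interface DOFs.
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Mr, Kr, T = craig_bampton(M, K, boundary=np.array([0, n - 1]), n_modes=2)
print(Mr.shape, Kr.shape)   # (4, 4) reduced matrices
```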

Relevance:

40.00%

Publisher:

Abstract:

The study of artificial intelligence aims to solve a class of problems that require cognitive processes which are difficult to encode in an algorithm. Visual recognition of shapes and figures, the interpretation of sounds, and games with incomplete knowledge all rely on the human ability to interpret partial inputs as if they were complete, and to act accordingly. In the first chapter of this thesis, a simple mathematical formalism is constructed to describe the act of making choices. The "learning" process is described in terms of the maximization of a performance function, over a parameter space, for an ansatz of a function from a vector space to a finite, discrete set of choices, by means of a training set that provides examples of the correct choices to be reproduced. In the light of this formalism, some of the most widespread artificial intelligence techniques are analysed, and some issues arising from their use are highlighted. In the second chapter, the same formalism is applied to a less intuitive but more practical redefinition of the performance function which, for a linear ansatz, allows the explicit formulation of a set of equations in the components of the parameter-space vector that locates the absolute maximum of the performance function. The solution of this set of equations is treated by means of the contraction mapping theorem. A natural polynomial generalization is also presented. In the third chapter, some examples to which the results of the second chapter can be applied are studied in more detail. The concept of the intrinsic degree of a problem is introduced. Some performance-oriented expedients are also discussed, such as the elimination of zeros, analytic precomputation, fingerprinting, and the reordering of components for the partial evaluation of high-dimensional scalar products. Finally, single-choice problems are introduced, i.e. the class of problems for which a training set is available for only one choice. In the fourth chapter, an application in the field of medical imaging diagnostics is discussed in more detail, in particular the problem of computer-aided detection of microcalcifications in mammograms.
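
As a generic illustration of this formalism (not the explicit equation set and contraction-mapping solution developed in the thesis), the sketch below builds a linear ansatz from a vector space to a finite set of choices and evaluates the performance function as the fraction of training examples reproduced correctly; the least-squares fit is only a stand-in for the thesis's optimization.

```python
"""Linear ansatz: scores per choice, decision by argmax, performance = training accuracy."""
import numpy as np

rng = np.random.default_rng(1)

# Training set: 200 vectors in R^4, each associated with one of 3 "correct" choices.
X = rng.standard_normal((200, 4))
true_W = rng.standard_normal((4, 3))
y = np.argmax(X @ true_W, axis=1)

def performance(W, X, y):
    """Fraction of examples for which the linear ansatz reproduces the correct choice."""
    return np.mean(np.argmax(X @ W, axis=1) == y)

# Fit the parameters: least-squares regression of one-hot encoded choices on X.
Y = np.eye(3)[y]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(f"training-set performance: {performance(W, X, y):.2f}")
```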

Relevance:

40.00%

Publisher:

Abstract:

Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk-lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20% and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
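
A minimal sketch of the fingerprinting idea is shown below, using fixed-size chunks and a SHA-256 index for brevity; content-defined chunking, recipe compression and the Block Locality Caching described in the thesis are beyond this illustration.

```python
"""Fingerprinting-based deduplication in miniature: chunk, hash, store unique chunks,
and keep an ordered recipe of fingerprints to rebuild the original stream."""
import hashlib

def deduplicate(data, chunk_size=4096):
    store = {}                    # fingerprint -> chunk payload (the "chunk store")
    recipe = []                   # ordered fingerprints needed to rebuild the stream
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:       # only previously unseen chunks consume storage
            store[fp] = chunk
        recipe.append(fp)
    return store, recipe

# Toy backup stream with heavy internal redundancy.
data = (b"A" * 4096 + b"B" * 4096) * 100 + b"tail"
store, recipe = deduplicate(data)
stored = sum(len(c) for c in store.values())
print(f"logical {len(data)} B -> stored {stored} B ({stored / len(data):.1%})")
assert b"".join(store[fp] for fp in recipe) == data   # the recipe restores the stream
```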

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a comparison of principal component (PC) regression and regularized expectation maximization (RegEM) for reconstructing European summer and winter surface air temperature over the past millennium. The reconstruction is performed within a surrogate climate using the National Center for Atmospheric Research (NCAR) Climate System Model (CSM) 1.4 and the climate model ECHO-G 4, assuming different white and red noise scenarios to define the distortion of the pseudoproxy series. We show how sensitivity tests lead to valuable “a priori” information that provides a basis for improving real-world proxy reconstructions. Our results emphasize the need to carefully test and evaluate reconstruction techniques with respect to the temporal resolution and the spatial scale to which they are applied. Furthermore, we demonstrate that uncertainties inherent to the predictand and predictor data have to be taken into account more rigorously. The comparison of the two statistical techniques, in the specific experimental setting presented here, indicates that more skilful results are achieved with RegEM, as low-frequency variability is better preserved. We further detect seasonal differences in reconstruction skill at the continental scale; for example, the target temperature average is more adequately reconstructed for summer than for winter. For the specific predictor network given in this paper, both techniques underestimate the target temperature variations to an increasing extent as more noise is added to the signal, albeit less so with RegEM than with PC regression. We conclude that climate field reconstruction techniques can be improved and need to be further optimized in future applications.
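
For orientation, the sketch below runs a toy pseudoproxy experiment with PC regression on synthetic data (not CSM or ECHO-G output, and without the RegEM counterpart): pseudoproxies are formed by adding white noise to grid-point series, the leading PCs of the temperature field are regressed on the proxies over a calibration period, and the reconstructed mean series is compared with the withheld truth.

```python
"""Toy pseudoproxy reconstruction with PC regression on a synthetic temperature field."""
import numpy as np

rng = np.random.default_rng(2)
n_years, n_grid, n_proxies, n_pcs = 500, 50, 15, 3

# Synthetic "model" temperature field with a few coherent spatial patterns.
patterns = rng.standard_normal((3, n_grid))
amplitudes = np.cumsum(rng.standard_normal((n_years, 3)), axis=0) * 0.1
field = amplitudes @ patterns + 0.3 * rng.standard_normal((n_years, n_grid))

# Pseudoproxies: selected grid points degraded with white noise (one noise scenario).
proxy_cols = rng.choice(n_grid, n_proxies, replace=False)
proxies = field[:, proxy_cols] + 1.0 * rng.standard_normal((n_years, n_proxies))

calib = slice(n_years - 150, n_years)          # "instrumental" calibration period
U, s, Vt = np.linalg.svd(field[calib] - field[calib].mean(0), full_matrices=False)
pcs = U[:, :n_pcs] * s[:n_pcs]                 # leading PCs of the calibration field

# Regress the PCs on the proxies during calibration, then predict PCs for the full period.
B, *_ = np.linalg.lstsq(proxies[calib], pcs, rcond=None)
recon = proxies @ B @ Vt[:n_pcs] + field[calib].mean(0)

target = field.mean(1)                          # e.g. a continental-mean series
corr = np.corrcoef(target, recon.mean(1))[0, 1]
print(f"correlation of reconstructed vs true mean series: {corr:.2f}")
```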