858 results for data pre-processing
Abstract:
Numerical simulations of eye globes often rely on topographies that have been measured in vivo using devices such as the Pentacam or OCT. These topographies represent the form of the already stressed eye under the existing intraocular pressure, and hence introduce approximations into the analysis. The accuracy of the simulations could be improved if either the stress state of the eye under intraocular pressure were determined, or the stress-free form of the eye estimated, before conducting the analysis. This study reviews earlier attempts to address this problem and assesses the performance of an iterative technique proposed by Pandolfi and Holzapfel [1], which is both simple to implement and promises high accuracy in estimating the eye's stress-free form. A parametric study demonstrated that the error level depends on the flexibility of the eye model, especially in the cornea region. However, in all cases considered, 3-4 analysis iterations were sufficient to produce a stress-free form with average errors in node location below 10^-6 mm and a maximum error below 10^-4 mm. This error level, which is similar to what has been achieved with other methods and orders of magnitude lower than the accuracy of current clinical topography systems, justifies the use of the technique as a pre-processing step in ocular numerical simulations.
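The iterative technique reviewed above admits a very compact implementation. Below is a minimal Python sketch of the fixed-point idea, under stated assumptions: `inflate` stands in for a finite-element inflation solve under intraocular pressure, and all names and tolerances are illustrative rather than taken from [1].

```python
import numpy as np

def recover_stress_free_geometry(x_measured, inflate, tol=1e-6, max_iter=10):
    """Iteratively estimate a stress-free nodal geometry.

    Starts from the measured (pressurised) geometry as the trial
    stress-free form, inflates the trial under the intraocular
    pressure, and corrects it by the residual between the inflated
    and the measured node positions.

    inflate : hypothetical FE solver callback mapping a trial
              stress-free nodal array of shape (n, 3) to the
              deformed nodal array under the applied pressure.
    """
    x0 = x_measured.copy()              # initial guess: measured geometry
    for _ in range(max_iter):
        x_deformed = inflate(x0)        # FE inflation under IOP
        residual = x_deformed - x_measured
        x0 -= residual                  # fixed-point update of the trial
        err = np.mean(np.linalg.norm(residual, axis=1))
        if err < tol:                   # average nodal error below tolerance
            break
    return x0
```

In line with the abstract, a scheme of this kind typically converges to sub-micrometre nodal errors within a handful of iterations, at the cost of one inflation solve per iteration.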
Abstract:
The main purpose of this project is to understand the process of engine simulation using the open-source CFD code KIVA. This report mainly discusses the simulation of a 4-valve Pentroof engine with KIVA 3VR2. KIVA is an open-source FORTRAN code used to solve the fluid flow field in engines, handling transient 2D and 3D chemically reactive flow with spray. The report also covers the complete procedure for simulating an engine cycle, from pre-processing to the final results, and is intended to serve as a handbook for using the KIVA code.
Abstract:
The feasibility of carbon sequestration in cement kiln dust (CKD) was investigated in a series of batch and column experiments conducted under ambient temperature and pressure conditions. The significance of this work is the demonstration that alkaline wastes, such as CKD, are highly reactive with carbon dioxide (CO2). In the presence of water, CKD can sequester more than 80% of its theoretical capacity for carbon without any amendments or modifications to the waste. Other mineral carbonation technologies for carbon sequestration rely on the use of mined mineral feedstocks as the source of oxides. The mining, pre-processing and reaction conditions needed to create favorable carbonation kinetics all require significant additions of energy to the system; their actual net reduction in CO2 is therefore uncertain. Many suitable alkaline wastes are produced at sites that also generate significant quantities of CO2. While, taken independently, the reduction in CO2 emissions from mineral carbonation in CKD is small (~13% of process-related emissions), when this technology is applied to similar wastes of other industries, the collective net reduction in emissions may be significant. The technical investigations presented in this dissertation progress from proof of feasibility, through examination of the extent of sequestration in core samples taken from an aged CKD waste pile, to more fundamental batch and microscopy studies which analyze the rates and mechanisms controlling mineral carbonation reactions in a variety of fresh CKD types. Finally, the scale of the system was increased to assess the sequestration efficiency under pilot- or field-scale conditions and to clarify the importance of particle-scale processes under more dynamic (flowing gas) conditions. A comprehensive set of material characterization methods, including thermal analysis, X-ray diffraction, and X-ray fluorescence, was used to confirm extents of carbonation and to better elucidate the compositional factors controlling the reactions. The results of these studies show that the rate of carbonation in CKD is controlled by the extent of carbonation: with increased degrees of conversion, particle-scale processes such as intraparticle diffusion and CaCO3 micropore precipitation patterns begin to limit the rate and possibly the extent of the reactions. Rates may also be influenced by the nature of the oxides participating in the reaction, slowing when the free or unbound oxides are consumed and reaction conditions shift towards the consumption of less reactive Ca species. While microscale processes and composition effects appear to be important at later times, the overall degrees of carbonation observed in the wastes were significant (>80%), a majority of which occurred within the first 2 days of reaction. Under the operational conditions applied in this study, the degree of carbonation achieved in column-scale systems was comparable to that observed under ideal batch conditions. In addition, the similarity in sequestration performance among several different CKD waste types indicates that, aside from available oxide content, no compositional factors significantly hinder the ability of the waste to sequester CO2.
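For orientation, the theoretical carbon capacity mentioned above can be approximated stoichiometrically: one mole of CO2 binds per mole of free oxide (CaO + CO2 → CaCO3; MgO + CO2 → MgCO3). The Python sketch below is a simplified, hypothetical calculation from an assumed oxide composition; it deliberately ignores the bound or less reactive Ca species that the abstract identifies as rate-limiting at later stages.

```python
# Molar masses in g/mol
M_CAO, M_MGO, M_CO2 = 56.08, 40.30, 44.01

def theoretical_co2_capacity(cao_frac, mgo_frac=0.0):
    """Approximate CO2 uptake capacity (kg CO2 per kg waste).

    Assumes one mole of CO2 per mole of CaO and of MgO, treating
    all oxides as free and reactive (a deliberate simplification).
    """
    return cao_frac * M_CO2 / M_CAO + mgo_frac * M_CO2 / M_MGO

# Hypothetical CKD with 45% CaO and 2% MgO by mass:
capacity = theoretical_co2_capacity(0.45, 0.02)
sequestered = 0.8 * capacity  # >80% of theoretical capacity, as reported
print(f"theoretical: {capacity:.3f} kg/kg, at 80%: {sequestered:.3f} kg/kg")
```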
Abstract:
OBJECTIVE: Many patients use the Internet to obtain health-related information. It is assumed that health-related Internet information (HRII) will change the consultation practice of physicians. This article explores the strategies, benefits and difficulties from the patients' and physicians' perspective. METHODS: Semi-structured interviews were conducted independently with 32 patients and 20 physicians. Data collection, processing and analysis followed the core principles of Grounded Theory. RESULTS: Patients experienced difficulties in the interpretation of the personal relevance and the meaning of HRII. Therefore they relied on their physicians' interpretation and contextualisation of this information. Discussing patients' concerns and answering patients' questions were important elements of successful consultations with Internet-informed patients to achieve clarity, orientation and certainty. Discussing HRII with patients was appreciated by most of the physicians but misleading interpretations by patients and contrary views compared to physicians caused conflicts during consultations. CONCLUSION: HRII is a valuable source of knowledge for an increasing number of patients. Patients use the consultation to increase their understanding of health and illness. Determinants such as a patient-centred consultation and timely resources are decisive for a successful, empowering consultation with Internet-informed patients. PRACTICAL IMPLICATIONS: If HRII is routinely integrated in the anamnestic interview as a new source of knowledge, the Internet can be used as a link between physicians' expertise and patient knowledge. The critical appraisal of HRII during the consultation is becoming a new field of work for physicians.
Abstract:
In laser sintering, the powder bed is pre-heated by radiant heaters so that the powder surface reaches a temperature just below the melting point of the material. The temperature distribution across the surface should be as homogeneous as possible, in order to achieve uniform part properties throughout the build chamber and to keep part distortion low. Experience, however, shows very inhomogeneous temperature distributions, which is why the integration of new or improved process monitoring systems into the machines is frequently demanded. One potentially suitable system is a thermographic camera, which records surface temperatures over a whole area and thus allows conclusions about the temperatures at the powder bed surface. Cold regions on the surface can thereby be identified and taken into account during process preparation. At the same time, thermography enables the observation of temperatures during laser exposure and hence the derivation of relationships between process parameters and melt temperatures. In the investigations carried out, an IR camera system was successfully integrated as a permanent installation into a laser sintering machine, and solutions were developed for the problems encountered along the way. Subsequently, the temperature distribution on the powder bed surface and the factors influencing its homogeneity were investigated. Further investigations determined the melt temperatures as a function of various process parameters. Based on these measurement results, conclusions were drawn about the required optimisations, and the usability of thermography in laser sintering for process monitoring, process control, and machine maintenance was assessed as a first intermediate status of the investigations.
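Identifying cold regions on the powder bed surface reduces to thresholding the temperature field recorded by the camera. A minimal sketch, assuming the thermographic frame is already available as a calibrated 2D temperature array; the function name, target temperature, and 2 K margin are illustrative assumptions.

```python
import numpy as np

def find_cold_regions(temp_frame, target_temp, margin=2.0):
    """Flag powder-bed pixels colder than target - margin.

    temp_frame : 2D numpy array of surface temperatures in deg C
    target_temp: intended pre-heating temperature (just below melt)
    margin     : allowed deviation in kelvin before a pixel is 'cold'
    """
    cold_mask = temp_frame < (target_temp - margin)
    cold_fraction = cold_mask.mean()          # share of cold surface area
    return cold_mask, cold_fraction

# Illustrative use with a synthetic 480x640 frame around 170 deg C:
frame = 170.0 + np.random.normal(0.0, 1.5, size=(480, 640))
mask, frac = find_cold_regions(frame, target_temp=171.0)
print(f"{100 * frac:.1f}% of the powder bed surface flagged as cold")
```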
Abstract:
When stereo images are captured under less than ideal conditions, there may be inconsistencies between the two images in brightness, contrast, blurring, etc. When stereo matching is performed between the images, these variations can greatly reduce the quality of the resulting depth map. In this paper we propose a method for correcting sharpness variations in stereo image pairs which is performed as a pre-processing step to stereo matching. Our method is based on scaling the 2D discrete cosine transform (DCT) coefficients of both images so that the two images have the same amount of energy in each of a set of frequency bands. Experiments show that applying the proposed correction method can greatly improve the disparity map quality when one image in a stereo pair is more blurred than the other.
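The band-energy matching idea can be sketched directly in Python. The following is a hedged illustration, not the authors' exact implementation: the number of radial frequency bands and the band layout are assumptions, and `scipy.fft.dctn`/`idctn` provide the 2D DCT.

```python
import numpy as np
from scipy.fft import dctn, idctn

def match_sharpness(blurred, reference, n_bands=8):
    """Scale the 2D DCT coefficients of `blurred` so each radial
    frequency band carries the same energy as in `reference`.
    """
    B = dctn(blurred, norm='ortho')
    R = dctn(reference, norm='ortho')
    h, w = B.shape
    # Normalised radial frequency index for every coefficient
    fy, fx = np.meshgrid(np.arange(h) / h, np.arange(w) / w, indexing='ij')
    radius = np.sqrt(fy**2 + fx**2) / np.sqrt(2.0)
    bands = np.minimum((radius * n_bands).astype(int), n_bands - 1)
    for b in range(n_bands):
        m = bands == b
        e_b, e_r = np.sum(B[m]**2), np.sum(R[m]**2)
        if e_b > 0:
            B[m] *= np.sqrt(e_r / e_b)   # equalise energy in this band
    return idctn(B, norm='ortho')
```

Applied as a pre-processing step, the less blurred image would serve as `reference`, and the corrected output would replace `blurred` before stereo matching.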
Abstract:
A two-pronged approach for the automatic quantitation of multiple sclerosis (MS) lesions on magnetic resonance (MR) images has been developed. This method includes the design and use of a pulse sequence for improved lesion-to-tissue contrast (LTC) and seeks to identify and minimize the sources of false lesion classifications in segmented images. The new pulse sequence, referred to as AFFIRMATIVE (Attenuation of Fluid by Fast Inversion Recovery with MAgnetization Transfer Imaging with Variable Echoes), improves the LTC, relative to spin-echo images, by combining Fluid-Attenuated Inversion Recovery (FLAIR) and Magnetization Transfer Contrast (MTC). In addition to acquiring fast FLAIR/MTC images, the AFFIRMATIVE sequence simultaneously acquires fast spin-echo (FSE) images for spatial registration of images, which is necessary for accurate lesion quantitation. Flow has been found to be a primary source of false lesion classifications. Therefore, an imaging protocol and reconstruction methods are developed to generate "flow images" which depict both coherent (vascular) and incoherent (CSF) flow. An automatic technique is designed for the removal of extra-meningeal tissues, since these are known to be sources of false lesion classifications. A retrospective, three-dimensional (3D) registration algorithm is implemented to correct for patient movement which may have occurred between AFFIRMATIVE and flow imaging scans. Following application of these pre-processing steps, images are segmented into white matter, gray matter, cerebrospinal fluid, and MS lesions based on AFFIRMATIVE and flow images using an automatic algorithm. All algorithms are seamlessly integrated into a single MR image analysis software package. Lesion quantitation has been performed on images from 15 patient volunteers. The total processing time is less than two hours per patient on a SPARCstation 20. The automated nature of this approach should provide an objective means of monitoring the progression, stabilization, and/or regression of MS lesions in large-scale, multi-center clinical trials.
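To illustrate the final step, the sketch below uses a generic k-means clustering as a stand-in classifier, not the authors' algorithm, to show how co-registered AFFIRMATIVE and flow volumes could be stacked into per-voxel features and partitioned into the four tissue classes; all names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_brain(affirmative_img, flow_img, n_classes=4):
    """Cluster co-registered image volumes into tissue classes.

    affirmative_img, flow_img : 3D numpy arrays of the same shape,
    assumed already pre-processed (registered, extra-meningeal
    tissue removed). Returns a label volume with values
    0..n_classes-1 (e.g. WM, GM, CSF, lesion, in arbitrary order).
    """
    features = np.stack([affirmative_img.ravel(), flow_img.ravel()], axis=1)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(features)
    return labels.reshape(affirmative_img.shape)
```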
Abstract:
Manual counting of bacterial colony forming units (CFUs) on agar plates is laborious and error-prone. We therefore implemented a colony counting system with a novel segmentation algorithm to discriminate bacterial colonies from blood and other agar plates. A colony counter hardware was designed and a novel segmentation algorithm was written in MATLAB. In brief, pre-processing with top-hat filtering to obtain a uniform background was followed by the segmentation step, during which the colony images were extracted from the blood agar and individual colonies were separated. A Bayes classifier was then applied to count the final number of bacterial colonies, as some of the colonies could still be concatenated into larger groups. To assess the accuracy and performance of the colony counter, we tested automated colony counting on different agar plates with known CFU numbers of S. pneumoniae, P. aeruginosa and M. catarrhalis and showed excellent performance.
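The pre-processing and segmentation stages lend themselves to a compact sketch. The Python version below (using scikit-image rather than MATLAB) is only an approximation of the described pipeline: top-hat background flattening, thresholding, and connected-component labelling; the Bayes classifier for concatenated colonies is omitted, and the structuring-element radius is an assumption.

```python
from skimage import io, morphology, filters, measure

def count_colonies(image_path, tophat_radius=15):
    """Rough colony count via top-hat filtering and labelling."""
    gray = io.imread(image_path, as_gray=True)
    # White top-hat removes the slowly varying agar background
    flat = morphology.white_tophat(gray, morphology.disk(tophat_radius))
    # Global threshold separates colonies from the flattened background
    mask = flat > filters.threshold_otsu(flat)
    mask = morphology.remove_small_objects(mask, min_size=20)
    labels = measure.label(mask)
    return labels.max()   # number of connected components found
```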
Abstract:
This paper presents the capabilities of a Space-Based Space Surveillance (SBSS) demonstration mission for Space Surveillance and Tracking (SST) based on a micro-satellite platform. The results have been produced in the frame of ESA's "Assessment Study for Space Based Space Surveillance Demonstration Mission (Phase A)" performed by the Airbus DS consortium. Space Surveillance and Tracking is part of Space Situational Awareness (SSA) and covers the detection, tracking and cataloguing of space debris and satellites. Derived SST services comprise a catalogue of these man-made objects, collision warning, detection and characterisation of in-orbit fragmentations, sub-catalogue debris characterisation, etc. The assessment of SBSS in an SST system architecture has shown that both an operational SBSS system and a well-designed space-based demonstrator can provide substantial performance in terms of surveillance and tracking of beyond-LEO objects. In particular, the early deployment of a demonstrator, made possible by using standard equipment, could boost initial operating capability and create a self-maintained object catalogue. Unlike classical technology demonstration missions, the primary goal is the demonstration and optimisation of the functional elements in a complex end-to-end chain (mission planning, observation strategies, data acquisition, processing and fusion, etc.) until the final products can be offered to the users. The presented SBSS system concept takes the ESA SST System Requirements (derived within the ESA SSA Preparatory Program) into account and aims at fulfilling some of the SST core requirements in a stand-alone manner. The evaluation of the concept has shown that such a solution can be implemented with low technological effort and risk. The paper presents details of the system concept, candidate micro-satellite platforms, the observation strategy and the results of performance simulations for GEO coverage and cataloguing accuracy.
Abstract:
This paper presents the capabilities of a Space-Based Space Surveillance (SBSS) demonstration mission for Space Surveillance and Tracking (SST) based on a micro-satellite platform. The results have been produced in the frame of ESA's "Assessment Study for Space Based Space Surveillance Demonstration Mission" performed by the Airbus Defence and Space consortium. The assessment of SBSS in an SST system architecture has shown that both an operational SBSS system and a well-designed space-based demonstrator can provide substantial performance in terms of surveillance and tracking of beyond-LEO objects. In particular, the early deployment of a demonstrator, made possible by using standard equipment, could boost initial operating capability and create a self-maintained object catalogue. Furthermore, unique statistical information about small-size LEO debris (mm size) can be collected in situ. Unlike classical technology demonstration missions, the primary goal is the demonstration and optimisation of the functional elements in a complex end-to-end chain (mission planning, observation strategies, data acquisition, processing, etc.) until the final products can be offered to the users, and this at low technological effort and risk. The SBSS system concept takes the ESA SST System Requirements into account and aims at fulfilling SST core requirements in a stand-alone manner. Additionally, requirements for the detection and characterisation of small-sized LEO debris are considered. The paper presents details of the system concept, candidate micro-satellite platforms, the instrument design and the operational modes. Note that the detailed results of performance simulations for space debris coverage and cataloguing accuracy are presented in a separate paper, "Capability of a Space-based Space Surveillance System to Detect and Track Objects in GEO, MEO and LEO Orbits" by J. Silha (AIUB) et al., IAC-14, A6, 1.1x25640.
Abstract:
In recent decades, a striking amount of hydrographic data, covering most of the Mediterranean basin, has been generated by the efforts made to characterize the oceanography and ecology of the basin. At the same time, improvements in technology, and the consequent refinement of sampling and analytical techniques, have provided data far more reliable than in the past. Nutrient data fall fully within this context, but suffer from having been produced by a large number of uncoordinated research programs and from often deficient quality control, with databases lacking intercalibration. In this study we present a computational procedure, based on robust statistical parameters and on the physical dynamic properties of the Mediterranean Sea and its morphological characteristics, to partially overcome the above limits in the existing data sets. Through a data pre-filtering step based on outlier analysis, followed by a shape analysis, the procedure flags inconsistent data and identifies, for each basin area, a characteristic set of shapes (vertical profiles). By rejecting all profiles that do not follow any of the identified shapes, the procedure retains all the reliable profiles and yields a data set that can be considered more internally consistent than the existing ones.
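The outlier pre-filtering stage can be illustrated with a standard robust-statistics rule. The sketch below flags values that deviate from the median by more than a chosen multiple of the median absolute deviation (MAD); the threshold and data layout are illustrative assumptions, not the published procedure.

```python
import numpy as np

def flag_outliers(values, threshold=3.5):
    """Flag values far from the median in robust (MAD) units.

    values: 1D array of nutrient concentrations, e.g. from one
    depth level of one basin area. Returns a boolean outlier mask.
    """
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0:
        return np.zeros_like(values, dtype=bool)
    # 0.6745 rescales the MAD to be comparable with a standard deviation
    robust_z = 0.6745 * (values - med) / mad
    return np.abs(robust_z) > threshold
```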
Abstract:
The present work describes a new methodology for the automatic detection of the glottal space in laryngeal images taken from 15 videos recorded with stroboscopic light by the ENT service of the Gregorio Marañón Hospital in Madrid. The system is based on active contour models (snakes). In the pre-processing stage, the algorithm combines some traditional techniques (thresholding and median filtering) with more sophisticated ones such as anisotropic filtering, yielding an image suitable for the use of snakes. The value selected for the threshold is 85% of the maximum peak of the image histogram; above this value the pixel information is not relevant. The anisotropic filter makes it possible to distinguish two intensity levels: one is the background and the other is the glottis. The initialization is based on the magnitude of the GVF field, which ensures an automatic process for selecting the initial contour. The performance of the algorithm is validated using the Pratt coefficient and compared against a manual segmentation and another automatic method based on the watershed transform.
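The traditional part of the pre-processing chain can be sketched as follows. This is one reading of the 85% rule (thresholding relative to the grey level at the histogram's maximum peak) followed by median filtering; the anisotropic diffusion and GVF snake stages are omitted, and the filter size is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_laryngeal_frame(gray, peak_fraction=0.85, median_size=5):
    """Clip grey levels above a fraction of the histogram peak,
    then smooth with a median filter."""
    hist, bin_edges = np.histogram(gray, bins=256)
    peak_level = bin_edges[np.argmax(hist)]    # grey level of histogram peak
    cutoff = peak_fraction * peak_level
    # Pixels above the cutoff carry no relevant information
    clipped = np.where(gray > cutoff, cutoff, gray)
    return median_filter(clipped, size=median_size)
```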
Abstract:
Different procedures for monitoring the evolution of leafy vegetables stored under plastic covers during cold storage have been studied. Fifteen spinach leaves were put inside Petri dishes covered with three different plastic films and stored at 4 °C for 21 days. Hyperspectral images were taken throughout this storage. A radiometric correction is proposed in order to compensate for the variation over time in the transmittance of the plastic films in the hyperspectral images. Afterwards, three spectral pre-processing procedures (no pre-processing, Savitzky–Golay, and Standard Normal Variate, each combined with Principal Component Analysis) were applied to obtain different models. The corresponding artificial score images were studied by means of Analysis of Variance to compare their ability to sense the aging of the leaves. All models were able to monitor the aging through storage. The radiometric correction appeared to work properly and could allow the supervision of shelf-life in leafy vegetables through commercial transparent films.
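The spectral pre-processing combinations are straightforward to reproduce. A minimal sketch, assuming one spectrum per image pixel stored row-wise; the Savitzky–Golay window length and polynomial order are illustrative assumptions.

```python
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def preprocess_and_score(spectra, n_components=3):
    """Savitzky-Golay smoothing, then SNV, then PCA scores.

    spectra: 2D array, one spectrum per row (pixels x wavelengths).
    Returns the score matrix used to build the artificial images.
    """
    smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
    corrected = snv(smoothed)
    return PCA(n_components=n_components).fit_transform(corrected)
```

Reshaping the returned scores back to the spatial dimensions of the hyperspectral image gives one artificial score image per principal component, which is what the Analysis of Variance then compares.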
Abstract:
We address a cognitive radio scenario in which a number of secondary users perform identification of which primary user, if any, is transmitting, in a distributed way and using limited location information. We propose two fully distributed algorithms: the first is a direct identification scheme; in the other, a distributed sub-optimal detection stage based on a simplified Neyman-Pearson energy detector precedes the identification scheme. Both algorithms are studied analytically in a realistic transmission scenario, and the advantage obtained by detection pre-processing is also verified via simulation. Finally, we give details of their fully distributed implementation via consensus averaging algorithms.
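The consensus-averaging building block can be sketched in a few lines. Each node iteratively mixes its local statistic (for example, an energy-detector output) with those of its neighbours until all nodes agree on the network average; the weight eps and iteration count are illustrative assumptions, and eps must stay below the inverse of the maximum node degree for convergence.

```python
import numpy as np

def consensus_average(local_values, adjacency, eps=0.1, n_iter=100):
    """Distributed consensus averaging over a network.

    local_values: 1D array, one local statistic per secondary user
    adjacency   : symmetric 0/1 numpy matrix of the comm. graph
    Returns the (approximately) common network average at each node,
    which every node can then threshold identically.
    """
    x = np.asarray(local_values, dtype=float)
    degrees = adjacency.sum(axis=1)
    for _ in range(n_iter):
        # x_i <- x_i + eps * sum_j a_ij (x_j - x_i)
        x = x + eps * (adjacency @ x - degrees * x)
    return x
```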
Abstract:
Thermorheological changes in high hydrostatic pressure (HHP)-treated chickpea flour (CF) slurries were studied as a function of pressure level (0.1, 150, 300, 400, and 600 MPa) and slurry concentration (1:5, 1:4, 1:3, and 1:2 flour-to-water ratios). HHP-treated slurries were subsequently analyzed for changes in properties produced by heating, under both isothermal and non-isothermal processes. Elasticity (G′) of the pressurized slurry increased with applied pressure and concentration. Conversely, heat-induced CF paste gradually transformed from solid-like to liquid-like behavior as a function of moisture content and pressure level. The G′ and enthalpy of the CF paste decreased with increasing pressure level in proportion to the extent of HHP-induced starch gelatinization. At 25 °C and 15 min, HHP treatment at 450 and 600 MPa was sufficient to complete gelatinization of the CF slurry at the lowest concentration (1:5), while more concentrated slurries would require higher pressures and temperatures during treatment, or longer holding times. Industrial relevance: Demand for chickpea gel has increased considerably in the health and food industries because of its many beneficial effects; however, its use is hampered by its very difficult handling. Judicious application of high hydrostatic pressure (HHP) at appropriate levels, adopted as a pre-processing tool in combination with heating processes, is presented as an innovative technology to produce a remarkable decrease in the thermo-hardening of heat-induced chickpea flour paste, permitting the development of new chickpea-based products with desirable handling properties and sensory attributes.