952 results for Post processing


Relevance: 60.00%

Abstract:

This thesis presents the navigation systems that have evolved, in step with scientific and technological progress, from the first measurements of the Earth by the Hellenic civilization some 2,500 years ago up to modern satellite systems and the never-eclipsed radio-navigation systems. Navigation systems must meet modern society's ever-growing demands for precision, reliability, continuity and global coverage of service. It is enough to consider that civil air traffic alone currently carries 5 billion passengers per year, on more than 60 million flights, together with 85 million tonnes of cargo (ACI - World Airports Council International, 2012). World maritime freight traffic amounted to roughly 650 million TEU (twenty-foot equivalent unit, the standard volume measure for ISO container transport, corresponding to about 40 cubic metres in total) in 2013 alone (IAPH - International Association of Ports and Harbors, 2013). These few but significant figures point to a clear need to “guide” this enormous and ever-growing flow of aircraft and ships around the world in the most appropriate way, plotting suitable routes and guaranteeing the necessary safety even during the most delicate phases (take-off and landing for aircraft, harbour manoeuvres for large ships). The thesis then examines which navigation systems can fulfil this “guiding” role for air and maritime transport, and to what extent.

Relevance: 60.00%

Abstract:

This thesis has two goals: processing GNSS data in kinematic post-processing mode for structural monitoring and, in a second phase, studying the precision attainable by the solutions when post-processing algorithms are applied to the data. The object of study is the Garisenda tower, located in Piazza Ravegnana next to the Asinelli tower in the historic centre of Bologna, long the subject of studies and monitoring because of its particularly critical lean. A fifteen-day data set was used, from 15/12/2013 to 29/12/2013 inclusive. The data were processed with goGPS, an open-source software package developed by researchers at the Politecnico di Milano. Since goGPS is a new code, it first had to be tested in order to obtain valid results. The first phase of the thesis therefore addressed the calibration of the parameters that yield the most precise solutions for monitoring purposes, given the options offered by the goGPS code. In particular, calibrated movements were imposed and the solution was observed while varying the selected parameters, choosing the best configuration, i.e. the best compromise between the ability to detect the movements and the noise of the series. In the second phase, with the aim of improving the precision of the solutions, correction methods based on sequential filters were evaluated, and the precision gain from applying these corrections was analysed.
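The sequential-filter corrections evaluated in the second phase can be illustrated with a minimal sketch (assuming a first-order recursive filter on a synthetic displacement series; this is not goGPS code, and the step size, noise level and filter gain are invented):

```python
# Sketch: smoothing a kinematic GNSS coordinate series with a simple
# sequential (recursive) filter, one example of the post-processing
# corrections whose precision gain the thesis analyses.
import random
import statistics

def sequential_filter(series, gain=0.2):
    """First-order recursive filter: x_k = x_{k-1} + gain * (z_k - x_{k-1})."""
    filtered = []
    estimate = series[0]
    for z in series:
        estimate = estimate + gain * (z - estimate)
        filtered.append(estimate)
    return filtered

random.seed(0)
truth = [0.0] * 50 + [0.005] * 50                    # a 5 mm step displacement
noisy = [x + random.gauss(0, 0.003) for x in truth]  # 3 mm epoch-wise noise
smooth = sequential_filter(noisy)

noise_raw = statistics.stdev(n - t for n, t in zip(noisy, truth))
noise_flt = statistics.stdev(s - t for s, t in zip(smooth, truth))
print(noise_flt < noise_raw)  # the filter reduces the residual noise
```

A smaller gain suppresses more noise but reacts more slowly to real movements, which is exactly the compromise between movement detection and series noise that the thesis tunes.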

Relevance: 60.00%

Abstract:

Statistical shape models (SSMs) have been used widely as a basis for segmenting and interpreting complex anatomical structures. The robustness of these models is sensitive to the registration procedure, i.e., the establishment of a dense correspondence across a training data set. In this work, two SSMs built from the same training data set of scoliotic vertebrae, but with different registration procedures, were compared. The first model was constructed from the original binary masks without applying any image pre- or post-processing, and the second was obtained by applying a feature-preserving smoothing method to the original training data set, followed by a standard rasterization algorithm. The accuracy of the correspondences was assessed quantitatively by means of the maximum of the mean minimum distance (MMMD) and the Hausdorff distance (HD). The anatomical validity of the models was quantified by means of three different criteria: compactness, specificity, and model generalization ability. The objective of this study was to compare quasi-identical models based on standard metrics. Preliminary results suggest that the MMMD and the eigenvalues are not sensitive metrics for evaluating the performance and robustness of SSMs.
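For intuition, the two correspondence metrics named above can be sketched on tiny 2-D point sets (hypothetical data; a real SSM evaluation operates on dense 3-D surface correspondences):

```python
# Sketch of the correspondence-accuracy metrics: symmetric Hausdorff
# distance and the mean minimum distance (the MMMD takes the maximum
# of the latter over the training shapes).

def min_dists(A, B):
    """For each point in A, the distance to its nearest neighbour in B."""
    return [min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in B)
            for ax, ay in A]

def hausdorff(A, B):
    """Symmetric Hausdorff distance: the worst nearest-neighbour distance."""
    return max(max(min_dists(A, B)), max(min_dists(B, A)))

def mean_min_dist(A, B):
    """Symmetrized mean minimum distance between two point sets."""
    d_ab, d_ba = min_dists(A, B), min_dists(B, A)
    return max(sum(d_ab) / len(d_ab), sum(d_ba) / len(d_ba))

A = [(0, 0), (1, 0), (1, 1)]
B = [(0, 0), (1, 0), (2, 1)]
print(hausdorff(A, B))      # 1.0: point (2,1) is 1 away from its nearest match
print(mean_min_dist(A, B))
```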

Relevance: 60.00%

Abstract:

We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. And lastly, due to the improvements of these modifications, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method was able to significantly outperform traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
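The GMM front-end can be sketched as follows (a toy with one-dimensional features, two invented phoneme models and made-up parameters; a real recognizer uses multivariate cepstral features and many more classes):

```python
# Sketch of the GMM phoneme front-end idea: score a feature frame under
# per-phoneme Gaussian mixtures and pick the best-scoring phoneme.
import math

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_likelihood(x, mixture):
    """mixture: list of (weight, mean, variance) components."""
    return sum(w * gauss_pdf(x, m, v) for w, m, v in mixture)

# hypothetical 1-D models for two phonemes
phoneme_models = {
    "aa": [(0.6, 1.0, 0.2), (0.4, 1.5, 0.3)],
    "iy": [(0.5, 4.0, 0.2), (0.5, 4.6, 0.4)],
}

frame = 1.2  # hypothetical feature value for one frame
best = max(phoneme_models, key=lambda p: gmm_likelihood(frame, phoneme_models[p]))
print(best)  # "aa": the frame lies near that model's components
```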

Relevance: 60.00%

Abstract:

Forward-looking ground penetrating radar shows promise for detection of improvised explosive devices in active war zones. Because of certain insurmountable physical limitations, post-processing algorithm development is the most popular research topic in this field. One such investigative avenue explores the worthiness of frequency analysis during data post-processing. Using the finite difference time domain numerical method, simulations are run to test both mine and clutter frequency response. Mines are found to respond strongest at low frequencies and cause periodic changes in ground penetrating radar frequency results. These results are called into question, however, when clutter, a phenomenon generally known to be random, is also found to cause periodic frequency effects. Possible causes, including simulation inaccuracy, are considered. Although the clutter models used are found to be inadequately random, specular reflections of differing periodicity are found to return from both the mine and the ground. The presence of these specular reflections offers a potential alternative method of determining a mine’s presence.

Relevance: 60.00%

Abstract:

Ultrasmall superparamagnetic iron oxide (USPIO) particles are promising contrast media, especially for molecular and cellular imaging in addition to lymph node staging, owing to their superior NMR efficacy, macrophage uptake and lymphotropic properties. The goal of the present prospective clinical work was to validate quantification of signal decrease on high-resolution T2-weighted MR sequences before and 24-36 h after USPIO administration for accurate differentiation between benign and malignant normal-sized pelvic lymph nodes. Fifty-eight patients with bladder or prostate cancer were examined on a 3 T MR unit, and their respective lymph node signal intensities (SI), signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) were determined on pre- and post-contrast 3D T2-weighted turbo spin echo (TSE) images. Based on histology and/or localization, the USPIO-uptake-related SI/SNR decrease of benign vs malignant and pelvic vs inguinal lymph nodes was compared. Out of 2182 resected lymph nodes, 366 were selected for MRI post-processing. Benign pelvic lymph nodes showed a significantly higher SI/SNR decrease than malignant nodes (p < 0.0001). Inguinal lymph nodes presented a reduced SI/SNR decrease in comparison to pelvic lymph nodes (p < 0.0001). CNR did not differ significantly between benign and malignant lymph nodes. Receiver operating characteristic (ROC) analysis yielded an area under the curve of 0.96, and the point of optimal accuracy was found at a threshold of 13.5% SNR decrease. Overlap of SI and SNR changes between benign and malignant lymph nodes was attributed to partial voluming, lipomatosis, histiocytosis or focal lymphoreticular hyperplasia. USPIO-enhanced MRI improves the diagnostic ability of lymph node staging in normal-sized lymph nodes, although some overlap of SI/SNR changes remained. Quantification of the USPIO-dependent SNR decrease will enable validation of this promising technique, with the final goal of improving and individualizing patient care.
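One natural reading of the reported 13.5% cut-off is a simple decision rule, sketched below (the node values are hypothetical, not patient data from the study):

```python
# Sketch of the threshold rule implied by the abstract: benign nodes take up
# USPIO (strong SNR decrease), malignant nodes do not.
THRESHOLD = 13.5  # % SNR decrease at the reported optimal-accuracy point

def classify(snr_decrease_pct):
    return "benign" if snr_decrease_pct > THRESHOLD else "malignant"

# hypothetical (SNR decrease in %, histological label) pairs
nodes = [(42.0, "benign"), (35.5, "benign"), (8.2, "malignant"),
         (15.1, "benign"), (4.7, "malignant"), (12.9, "malignant")]

correct = sum(classify(v) == label for v, label in nodes)
accuracy = correct / len(nodes)
print(accuracy)
```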

Relevance: 60.00%

Abstract:

PURPOSE: To prospectively evaluate whether intravenous morphine co-medication improves bile duct visualization in dual-energy CT-cholangiography. MATERIALS AND METHODS: Forty potential donors for living-related liver transplantation underwent CT-cholangiography with infusion of a hepatobiliary contrast agent over 40 min. Twenty minutes after the start of the contrast agent infusion, either normal saline (n=20 patients; control group [CG]) or morphine sulfate (n=20 patients; morphine group [MG]) was injected. Forty-five minutes after initiation of the contrast agent, a dual-energy CT acquisition of the liver was performed. Applying dual-energy post-processing, pure iodine images were generated. The primary study goals were determination of bile duct diameters and visualization scores (on a scale of 0 to 3: 0, not visualized; 3, excellent visualization). RESULTS: Bile duct visualization scores for second-order and third-order branch ducts were significantly higher in the MG than in the CG (2.9±0.1 versus 2.6±0.2 [P<0.001] and 2.7±0.3 versus 2.1±0.6 [P<0.01], respectively). Bile duct diameters for the common duct and main ducts were significantly larger in the MG than in the CG (5.9±1.3 mm versus 4.9±1.3 mm [P<0.05] and 3.7±1.3 mm versus 2.6±0.5 mm [P<0.01], respectively). CONCLUSION: Intravenous morphine co-medication significantly improved biliary visualization on dual-energy CT-cholangiography in potential donors for living-related liver transplantation.

Relevance: 60.00%

Abstract:

The aim of this prospective trial was to evaluate the sensitivity and specificity of bright-lumen magnetic resonance colonography (MRC) in comparison with conventional colonoscopy (CC). A total of 120 consecutive patients with clinical indications for CC were prospectively examined using MRC (1.5 Tesla), followed by CC. Prior to MRC, the cleansed colon was filled with a gadolinium-water solution. A 3D GRE sequence was acquired with the patient in the prone and supine positions, each during one breath-hold period. After division of the colon into five segments, interactive data analysis was carried out using three-dimensional post-processing, including a virtual intraluminal view. The results of CC served as the reference standard. MRC was performed successfully in all patients and no complications occurred. Image quality was diagnostic in 92% (574/620 colonic segments). On a per-patient basis, the results of MRC were as follows: sensitivity 84% (95% CI 71.7-92.3%), specificity 97% (95% CI 89.0-99.6%). Five flat adenomas and 6 of 16 small polyps (≤5 mm) were not identified by MRC. MRC offers high sensitivity and excellent specificity in patients with clinical indications for CC. Improved MRC techniques are needed to detect small polyps and flat adenomas.
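Per-patient figures of this kind are computed from simple 2x2 counts. A minimal sketch (the counts are hypothetical, merely chosen to give similar ratios to those reported; the Wilson interval used here is one common choice, and the study's exact CI method is not stated in the abstract):

```python
# Sketch: sensitivity/specificity from true/false positives and negatives,
# with a 95% Wilson score interval for the sensitivity.
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

tp, fn = 42, 8   # hypothetical: diseased patients detected / missed
tn, fp = 68, 2   # hypothetical: healthy patients cleared / flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(sensitivity, 2), round(specificity, 2))  # 0.84 0.97
print(tuple(round(x, 3) for x in wilson_ci(tp, tp + fn)))
```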

Relevance: 60.00%

Abstract:

This study focuses on a specific engine: a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). This conventional turbofan engine has been modified to include a secondary isobaric burner, the ITB, in a transition duct between the high-pressure turbine and the low-pressure turbine. The preliminary design phase for this modified engine starts with an aerothermodynamic cycle analysis consisting of parametric (i.e., on-design) and performance (i.e., off-design) cycle analyses. In the parametric analysis, the modified engine's performance parameters are evaluated and compared with the baseline engine in terms of design limitations (maximum turbine inlet temperature), flight conditions (such as flight Mach number, ambient temperature and pressure), and design choices (such as compressor pressure ratio, fan pressure ratio, fan bypass ratio, etc.). A turbine cooling model is also included to account for the effect of cooling air on engine performance. The results from the on-design analysis confirmed the advantage of using an ITB, i.e., higher specific thrust with a small increase in thrust-specific fuel consumption, less cooling air, and less NOx production, provided that the main burner exit temperature and the ITB exit temperature are properly specified. It is also important to identify the critical ITB temperature, beyond which the ITB is turned off and offers no advantage. With the encouraging results from the parametric cycle analysis, a detailed performance cycle analysis of the identical engine was also conducted for steady-state engine performance prediction. The results from the off-design cycle analysis show that the ITB engine at the full throttle setting has enhanced performance over the baseline engine. Furthermore, the ITB engine operating at partial throttle settings exhibits higher thrust at lower specific fuel consumption and improved thermal efficiency over the baseline engine.
A mission analysis is also presented to predict fuel consumption in certain mission phases. Excel macro code (Visual Basic for Applications) and Excel neuron cells are combined so that Excel can perform these cycle analyses. These user-friendly programs compute and plot the data sequentially without forcing users to open other post-processing programs.
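As a minimal taste of on-design cycle analysis, the textbook ideal-Brayton efficiency relation can be swept over compressor pressure ratios in a few lines (a far simpler relation than the full ITB turbofan model with bypass flow and turbine cooling described above):

```python
# Ideal Brayton-cycle thermal efficiency versus compressor pressure ratio:
# a textbook on-design relation, illustrating the kind of parametric sweep
# a cycle analysis performs over design choices.
GAMMA = 1.4  # ratio of specific heats for air

def ideal_thermal_efficiency(pressure_ratio):
    return 1.0 - pressure_ratio ** (-(GAMMA - 1.0) / GAMMA)

for pr in (10, 20, 30):
    print(pr, round(ideal_thermal_efficiency(pr), 3))
```

Efficiency rises with pressure ratio here because the ideal cycle ignores the component losses and temperature limits that drive the trade-offs studied in the thesis.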

Relevance: 60.00%

Abstract:

The primary goal of this project is to demonstrate the practical use of data mining algorithms to cluster a solved steady-state computational fluid dynamics (CFD) flow domain into a simplified lumped-parameter network. A commercial-quality code, “cfdMine”, was created using volume-weighted k-means clustering that can cluster a 20-million-cell CFD domain on a single CPU in several hours or less. Additionally, agglomeration and k-means with the Mahalanobis distance were added as optional post-processing steps to further enhance the separation of the clusters. The resultant nodal network is considered a reduced-order model and can be solved transiently at very minimal computational cost. The reduced-order network is then instantiated in the commercial thermal solver MuSES to perform transient conjugate heat transfer, using convection predicted by the lumped network (based on steady-state CFD). When inserting the lumped nodal network into a MuSES model, the potential for developing a “localized heat transfer coefficient” is shown to be an improvement over existing techniques. The clustering also yielded a new flow visualization technique. Finally, fixing clusters near equipment demonstrates a new capability to track temperatures near specific objects (such as equipment in vehicles).
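The core clustering step can be sketched in a few lines (a 1-D toy with invented values, not the cfdMine implementation): centroids are computed as volume-weighted means, so large CFD cells pull their cluster centroid more strongly than small ones.

```python
# Sketch of volume-weighted k-means on a 1-D field (e.g. cell temperatures),
# where each point carries a cell volume used as its weight.

def weighted_kmeans(points, volumes, centroids, iters=20):
    for _ in range(iters):
        # assignment step: each point goes to its nearest centroid
        labels = [min(range(len(centroids)),
                      key=lambda c: abs(p - centroids[c])) for p in points]
        # update step: each centroid becomes the volume-weighted mean
        for c in range(len(centroids)):
            w = sum(v for p, v, l in zip(points, volumes, labels) if l == c)
            if w > 0:
                centroids[c] = sum(p * v for p, v, l in
                                   zip(points, volumes, labels) if l == c) / w
    return centroids, labels

points  = [0.1, 0.2, 0.3, 5.0, 5.1, 5.2]   # hypothetical cell values
volumes = [1.0, 1.0, 8.0, 1.0, 1.0, 1.0]   # one large cell in cluster 0
centroids, labels = weighted_kmeans(points, volumes, [0.0, 5.0])
print(labels)      # [0, 0, 0, 1, 1, 1]
print(centroids)   # cluster 0 centroid pulled toward 0.3 by the large cell
```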

Relevance: 60.00%

Abstract:

Non-invasive imaging methods are increasingly entering the field of forensic medicine. Facing the intricacies of classical neck dissection techniques, postmortem imaging might provide new diagnostic possibilities which could also improve forensic reconstruction. The aim of this study was to determine the value of postmortem neck imaging in comparison to forensic autopsy regarding the evaluation of the cause of death and the analysis of biomechanical aspects of neck trauma. For this purpose, 5 deceased persons (1 female and 4 male, mean age 49.8 years, range 20-80 years) who had suffered odontoid fractures or atlantoaxial distractions with or without medullary injuries, were studied using multislice computed tomography (MSCT), magnetic resonance imaging (MRI) and subsequent forensic autopsy. Evaluation of the findings was performed by radiologists, forensic pathologists and neuropathologists. The cause of death could be established radiologically in three of the five cases. MRI data were insufficient due to metal artefacts in one case, and in another, ascending medullary edema as the cause of delayed death was only detected by histological analysis. Regarding forensic reconstruction, the imaging methods were superior to autopsy neck exploration in all cases due to the post-processing possibilities of viewing the imaging data. In living patients who suffer medullary injury, follow-up MRI should be considered to exclude ascending medullary edema.

Relevance: 60.00%

Abstract:

Magnetic resonance imaging, with its exquisite soft tissue contrast, is an ideal modality for investigating spinal cord pathology. While conventional MRI techniques are very sensitive to spinal cord pathology, their specificity is somewhat limited. Diffusion MRI is an advanced technique that is a very sensitive and specific indicator of the integrity of white matter tracts. Diffusion imaging has been shown to detect early ischemic changes in white matter while conventional imaging demonstrates no change. By acquiring the complete apparent diffusion tensor (ADT), tissue diffusion properties can be expressed in terms of quantitative and rotationally invariant parameters. Systematic study of spinal cord injury (SCI) in vivo requires controlled animal models such as the popular rat model. To date, studies of the spinal cord using ADT imaging have been performed exclusively in fixed, excised spinal cords, introducing inevitable artifacts and losing the benefits of MRI's noninvasive nature. In vivo imaging reflects the actual in vivo tissue properties and allows each animal to be imaged at multiple time points, greatly reducing the number of animals required to achieve statistical significance. Because the spinal cord is very small, the available signal-to-noise ratio (SNR) is very low. Prior spin-echo-based ADT studies of rat spinal cord have relied on high magnetic field strengths and long imaging times, on the order of 10 hours, for adequate SNR. Such long imaging times are incompatible with in vivo imaging and are not relevant for imaging the early phases following SCI. Echo planar imaging (EPI) is one of the fastest imaging methods and is popular for diffusion imaging. However, EPI further lowers the image SNR and is very sensitive to small imperfections in the magnetic field, such as those introduced by the bony spine. Additionally, the small field of view (FOV) needed for spinal cord imaging requires large imaging gradients, which generate EPI artifacts. The addition of diffusion gradients introduces yet further artifacts.
This work develops a method for rapid EPI-based in vivo diffusion imaging of rat spinal cord. The method involves improving the SNR using an implantable coil, reducing magnetic field inhomogeneities by means of an autoshim, and correcting EPI artifacts by post-processing. New EPI artifacts due to diffusion gradients are described, and post-processing correction techniques are developed. These techniques were used to obtain rotationally invariant diffusion parameters from 9 animals in vivo and were validated against the gold-standard, but slow, spin-echo-based diffusion sequence. These are the first reported measurements of the ADT in spinal cord in vivo. Many of the techniques described are equally applicable to imaging of human spinal cord. We anticipate that these techniques will aid in evaluating and optimizing potential therapies and will lead to improved patient care.

Relevance: 60.00%

Abstract:

In 1999, all student teachers at secondary I level at the University of Bern who had to undertake an internship were asked to participate in a study on learning processes during practicum: 150 students and their mentors in three types of practicum participated—introductory practicum (after the first half‐year of studies), intermediate practicum (after two years of studies) and final practicum (after three years of studies). At the end of the practicum, student teachers and mentors completed questionnaires on preparing, teaching and postprocessing lessons. All student teachers, additionally, rated their professional skills and aspects of personality (attitudes towards pupils, self‐assuredness and well‐being) before and after the practicum. Forty‐six student teachers wrote daily semi‐structured diaries about essential learning situations during their practicum. Results indicate that in each practicum students improved significantly in preparing, conducting and postprocessing lessons. The mentors rated these changes as being greater than did the student teachers. From the perspective of the student teachers their general teaching skills also improved, and their attitudes toward pupils became more open. Furthermore, during practicum their self‐esteem and subjective well‐being increased. Diary data confirmed that there are no differences between different levels of practicum in terms of learning outcomes, but give some first insight into different ways of learning during internship.

Relevance: 60.00%

Abstract:

The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with that challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as an input to a parsing function, we use a parsing context that provides access to a stream and possibly to other context-sensitive information. At the same time we keep the context-free formalism, so a grammar definition stays simple, without mind-blowing context-sensitive rules. We show that our approach can be used for various purposes such as indent-sensitive parsing, high-precision island parsing, or parsing XML with arbitrary element names. We demonstrate our solution with PetitParser, a top-down parser combinator framework based on parsing expression grammars, written in Smalltalk.
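The idea can be sketched in a few lines (Python rather than the paper's Smalltalk, and far cruder than PetitParser): the parse function receives a context object bundling the stream with its position, and the same mechanism could carry any other context-sensitive state. Here it supports indent-sensitive parsing of a nested block structure:

```python
# Toy illustration of parsing with a context: a top-down parser whose
# functions receive a context object rather than the bare input stream.

class Context:
    """Bundles the input stream with its position; real uses of the idea
    attach further context-sensitive state (e.g. an indentation stack)."""
    def __init__(self, lines):
        self.lines = lines
        self.pos = 0

def parse_block(ctx, indent):
    """Parse consecutive lines indented by exactly `indent` spaces,
    recursing into deeper-indented sub-blocks."""
    items = []
    while ctx.pos < len(ctx.lines):
        line = ctx.lines[ctx.pos]
        depth = len(line) - len(line.lstrip())
        if depth < indent:
            break                                  # dedent ends this block
        if depth == indent:
            ctx.pos += 1
            items.append(line.strip())
        else:
            items.append(parse_block(ctx, depth))  # nested block
    return items

src = ["a", "  b", "  c", "d"]
tree = parse_block(Context(src), 0)
print(tree)  # ['a', ['b', 'c'], 'd']
```

The grammar-facing rules stay context-free in shape; only the context object knows about indentation.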

Relevance: 60.00%

Abstract:

In clinical practice, traditional X-ray radiography is widely used, and knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer-aided diagnosis, hip surgery planning and image-guided interventions. This paper presents a fully automatic approach for landmark detection and shape segmentation of both pelvis and femur in conventional AP X-ray images. Our approach is based on the framework of landmark detection via Random Forest (RF) regression and shape regularization via hierarchical sparse shape composition. We propose a visual feature, FL-HoG (Flexible-Level Histogram of Oriented Gradients), and a feature selection algorithm based on trace ratio optimization to improve the robustness and efficacy of RF-based landmark detection. The landmark detection result is then used in a hierarchical sparse shape composition framework for shape regularization. Finally, the extracted shape contour is fine-tuned by a post-processing step based on low-level image features. The experimental results demonstrate that our feature selection algorithm reduces the feature dimension by a factor of 40 and improves both training and test efficiency. Further experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.2 mm for the femur and 1.9 mm for the pelvis.
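The trace-ratio criterion behind the feature selection step can be sketched per feature (toy labelled data with invented numbers; the paper optimizes the ratio over feature subsets rather than scoring features one at a time):

```python
# Sketch of the trace-ratio idea: score features by the ratio of
# between-class scatter to within-class scatter, then keep the best ones.

def scatter_ratio(values, labels):
    """Between-class scatter / within-class scatter for one feature."""
    classes = sorted(set(labels))
    overall = sum(values) / len(values)
    s_b = s_w = 0.0
    for c in classes:
        group = [v for v, l in zip(values, labels) if l == c]
        mean = sum(group) / len(group)
        s_b += len(group) * (mean - overall) ** 2
        s_w += sum((v - mean) ** 2 for v in group)
    return s_b / s_w

labels = [0, 0, 0, 1, 1, 1]
feature_a = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]   # well separated: high ratio
feature_b = [1.0, 5.0, 3.0, 1.1, 5.1, 2.9]   # overlapping: low ratio

features = {"a": feature_a, "b": feature_b}
ranked = sorted(features, key=lambda f: scatter_ratio(features[f], labels),
                reverse=True)
print(ranked)  # ['a', 'b']: feature a is the more discriminative
```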