930 results for post-processing
Abstract:
This study focuses on a specific engine, namely a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). The conventional turbofan engine has been modified to include a secondary isobaric burner, the ITB, in a transition duct between the high-pressure turbine and the low-pressure turbine. The preliminary design phase for this modified engine starts with an aerothermodynamic cycle analysis consisting of parametric (i.e., on-design) and performance (i.e., off-design) cycle analyses. In the parametric analysis, the modified engine's performance parameters are evaluated and compared with the baseline engine in terms of design limitations (maximum turbine inlet temperature), flight conditions (such as flight Mach number, ambient temperature, and pressure), and design choices (such as compressor pressure ratio, fan pressure ratio, fan bypass ratio, etc.). A turbine cooling model is also included to account for the effect of cooling air on engine performance. The results from the on-design analysis confirm the advantage of using an ITB, i.e., higher specific thrust with small increases in thrust-specific fuel consumption, less cooling air, and less NOx production, provided that the main burner exit temperature and the ITB exit temperature are properly specified. It is also important to identify the critical ITB temperature, beyond which the ITB offers no advantage and is turned off. Encouraged by the parametric cycle analysis results, a detailed performance cycle analysis of the same engine is also conducted for steady-state engine performance prediction. The results from the off-design cycle analysis show that the ITB engine at full throttle has enhanced performance over the baseline engine. Furthermore, the ITB engine operating at partial throttle settings exhibits higher thrust at lower specific fuel consumption and improved thermal efficiency over the baseline engine. A mission analysis is also presented to predict fuel consumption in selected mission phases. Excel macro code, Visual Basic for Applications, and interlinked Excel cells are combined so that Excel can perform these cycle analyses. These user-friendly programs compute and plot the data sequentially without requiring users to open separate post-processing programs.
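For orientation, the figures of merit compared in this analysis, specific thrust and thrust-specific fuel consumption, follow the standard separate-flow turbofan definitions. A minimal sketch of these textbook relations (not the study's actual ITB cycle model; the symbols, bypass ratio alpha, fuel-air ratios f, and exhaust/flight velocities V, are assumptions of this illustration):

```latex
% Specific thrust (per unit total inlet mass flow) and TSFC for a
% separate-flow turbofan; an ITB simply adds a second fuel-air ratio
% term to the total core fuel flow, f_total = f_mb + f_itb.
\frac{F}{\dot m_0} \;=\; \frac{1}{1+\alpha}\bigl[(1+f)\,V_9 - V_0\bigr]
      \;+\; \frac{\alpha}{1+\alpha}\bigl(V_{19} - V_0\bigr),
\qquad
S \;=\; \frac{\dot m_f}{F} \;=\; \frac{f_{\mathrm{total}}}{(1+\alpha)\,\bigl(F/\dot m_0\bigr)}
```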
Abstract:
The primary goal of this project is to demonstrate the practical use of data mining algorithms to cluster a solved steady-state computational fluid dynamics (CFD) flow domain into a simplified lumped-parameter network. A commercial-quality code, "cfdMine", was created using volume-weighted k-means clustering that can cluster a 20-million-cell CFD domain on a single CPU in several hours or less. Additionally, agglomeration and Mahalanobis-distance k-means were added as optional post-processing steps to further enhance the separation of the clusters. The resulting nodal network is considered a reduced-order model and can be solved transiently at minimal computational cost. The reduced-order network is then instantiated in the commercial thermal solver MuSES to perform transient conjugate heat transfer, with convection predicted by the lumped network (based on steady-state CFD). When the lumped nodal network is inserted into a MuSES model, the potential for developing a "localized heat transfer coefficient" is shown to be an improvement over existing techniques. The clustering was also found to yield a new flow visualization technique. Finally, fixing clusters near equipment demonstrates a new capability for tracking temperatures near specific objects (such as equipment in vehicles).
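A minimal sketch of the volume-weighted k-means step (illustrative only; cfdMine is a commercial code and its implementation is not reproduced here). Cell volumes weight the centroid updates so that large cells pull cluster centres in proportion to the fluid they represent; the six-component feature vector per cell is an assumption of this sketch:

```python
import numpy as np

def volume_weighted_kmeans(features, volumes, k, iters=50, seed=0):
    """Cluster CFD cells into k lumps, weighting each cell by its volume.

    features : (n_cells, n_dims) array, e.g. [x, y, z, u, v, w] per cell
    volumes  : (n_cells,) array of cell volumes used as weights
    """
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each cell to the nearest centroid.
        d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Volume-weighted centroid update.
        for j in range(k):
            mask = labels == j
            if mask.any():
                w = volumes[mask]
                centroids[j] = (features[mask] * w[:, None]).sum(axis=0) / w.sum()
    return labels, centroids

# Toy usage: 10,000 synthetic cells with random features and volumes.
cells = np.random.rand(10_000, 6)
vols = np.random.rand(10_000) + 0.1
labels, centres = volume_weighted_kmeans(cells, vols, k=20)
```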
Abstract:
Non-invasive imaging methods are increasingly entering the field of forensic medicine. Given the intricacies of classical neck dissection techniques, postmortem imaging might provide new diagnostic possibilities that could also improve forensic reconstruction. The aim of this study was to determine the value of postmortem neck imaging, in comparison to forensic autopsy, for evaluating the cause of death and analysing biomechanical aspects of neck trauma. For this purpose, five deceased persons (1 female and 4 male, mean age 49.8 years, range 20-80 years) who had suffered odontoid fractures or atlantoaxial distractions, with or without medullary injuries, were studied using multislice computed tomography (MSCT), magnetic resonance imaging (MRI) and subsequent forensic autopsy. Evaluation of the findings was performed by radiologists, forensic pathologists and neuropathologists. The cause of death could be established radiologically in three of the five cases. MRI data were insufficient due to metal artefacts in one case, and in another, ascending medullary edema as the cause of delayed death was detected only by histological analysis. Regarding forensic reconstruction, the imaging methods were superior to autopsy neck exploration in all cases because of the post-processing options for viewing the imaging data. In living patients who suffer medullary injury, follow-up MRI should be considered to exclude ascending medullary edema.
Abstract:
Magnetic resonance imaging, with its exquisite soft tissue contrast, is an ideal modality for investigating spinal cord pathology. While conventional MRI techniques are very sensitive for spinal cord pathology, their specificity is somewhat limited. Diffusion MRI is an advanced technique which is a very sensitive and specific indicator of the integrity of white matter tracts. Diffusion imaging has been shown to detect early ischemic changes in white matter, while conventional imaging demonstrates no change. By acquiring the complete apparent diffusion tensor (ADT), tissue diffusion properties can be expressed in terms of quantitative and rotationally invariant parameters.

Systematic study of SCI in vivo requires controlled animal models such as the popular rat model. To date, studies of spinal cord using ADT imaging have been performed exclusively in fixed, excised spinal cords, introducing inevitable artifacts and losing the benefits of MRI's noninvasive nature. In vivo imaging reflects the actual in vivo tissue properties, and allows each animal to be imaged at multiple time points, greatly reducing the number of animals required to achieve statistical significance. Because the spinal cord is very small, the available signal-to-noise ratio (SNR) is very low. Prior spin-echo-based ADT studies of rat spinal cord have relied on high magnetic field strengths and long imaging times—on the order of 10 hours—for adequate SNR. Such long imaging times are incompatible with in vivo imaging, and are not relevant for imaging the early phases following SCI. Echo planar imaging (EPI) is one of the fastest imaging methods, and is popular for diffusion imaging. However, EPI further lowers the image SNR, and is very sensitive to small imperfections in the magnetic field, such as those introduced by the bony spine. Additionally, the small field-of-view (FOV) needed for spinal cord imaging requires large imaging gradients which generate EPI artifacts. The addition of diffusion gradients introduces yet further artifacts.

This work develops a method for rapid EPI-based in vivo diffusion imaging of rat spinal cord. The method involves improving the SNR using an implantable coil, reducing magnetic field inhomogeneities by means of an autoshim, and correcting EPI artifacts by post-processing. New EPI artifacts due to diffusion gradients are described, and post-processing correction techniques are developed.

These techniques were used to obtain rotationally invariant diffusion parameters from 9 animals in vivo, and were validated using the gold-standard, but slow, spin-echo-based diffusion sequence. These are the first reported measurements of the ADT in spinal cord in vivo.

Many of the techniques described are equally applicable to imaging of human spinal cord. We anticipate that these techniques will aid in evaluating and optimizing potential therapies, and will lead to improved patient care.
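The rotationally invariant parameters obtained from the apparent diffusion tensor are typically its eigenvalue-based invariants, such as mean diffusivity and fractional anisotropy. A minimal sketch, assuming a 3x3 symmetric tensor already estimated from the diffusion-weighted data (none of the dissertation's acquisition or artifact-correction methods are shown):

```python
import numpy as np

def tensor_invariants(D):
    """Mean diffusivity (MD) and fractional anisotropy (FA) of a 3x3 diffusion tensor."""
    eigvals = np.linalg.eigvalsh(D)          # rotationally invariant eigenvalues
    md = eigvals.mean()                      # mean diffusivity
    fa = np.sqrt(1.5 * ((eigvals - md) ** 2).sum()
                 / (eigvals ** 2).sum())     # fractional anisotropy in [0, 1]
    return md, fa

# Example: a tensor typical of anisotropic white matter (units: mm^2/s).
D = np.diag([1.6e-3, 0.4e-3, 0.4e-3])
print(tensor_invariants(D))
```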
Abstract:
In 1999, all student teachers at secondary I level at the University of Bern who had to undertake an internship were asked to participate in a study on learning processes during the practicum: 150 students and their mentors in three types of practicum participated—introductory practicum (after the first half-year of studies), intermediate practicum (after two years of studies) and final practicum (after three years of studies). At the end of the practicum, student teachers and mentors completed questionnaires on preparing, teaching and post-processing lessons. All student teachers, additionally, rated their professional skills and aspects of personality (attitudes towards pupils, self-assuredness and well-being) before and after the practicum. Forty-six student teachers wrote daily semi-structured diaries about essential learning situations during their practicum. Results indicate that in each practicum students improved significantly in preparing, conducting and post-processing lessons. The mentors rated these changes as greater than the student teachers did. From the perspective of the student teachers, their general teaching skills also improved, and their attitudes toward pupils became more open. Furthermore, during the practicum their self-esteem and subjective well-being increased. The diary data confirmed that there are no differences between the levels of practicum in terms of learning outcomes, but they give some first insights into different ways of learning during the internship.
Abstract:
The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with that challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as an input to a parsing function, we use a parsing context that provides access to a stream and possibly to other context-sensitive information. At the same time we keep the context-free formalism, so a grammar definition stays simple without mind-blowing context-sensitive rules. We show that our approach can be used for various purposes such as indentation-sensitive parsing, high-precision island parsing, or XML parsing (with arbitrary element names). We demonstrate our solution with PetitParser, a parsing-expression-grammar-based, top-down parser combinator framework written in Smalltalk.
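A minimal sketch of the core idea, passing a context (input, position, and extra state such as the current indentation) into top-down parsing functions, is given below in Python for an indentation-sensitive grammar; the paper's actual framework is PetitParser in Smalltalk, and none of its API is reproduced here:

```python
# Minimal sketch of a top-down parser whose parsing functions receive a
# context (input + position + extra state) instead of just a stream.

class Context:
    def __init__(self, text, pos=0, indent=0):
        self.text, self.pos, self.indent = text, pos, indent

def parse_line(ctx):
    """Succeed on one line indented at least ctx.indent; return (level, content)."""
    start = ctx.pos
    if start >= len(ctx.text):
        return None
    end = ctx.text.find("\n", start)
    end = len(ctx.text) if end == -1 else end
    line = ctx.text[start:end]
    stripped = line.lstrip(" ")
    level = len(line) - len(stripped)
    if not stripped or level < ctx.indent:
        return None                    # fail without consuming input
    ctx.pos = end + 1                  # consume the line (and its newline)
    return level, stripped

def parse_block(ctx):
    """Parse consecutive lines at one indentation level, nesting deeper lines."""
    outer_indent = ctx.indent
    items, level = [], None
    while True:
        saved = ctx.pos
        result = parse_line(ctx)
        if result is None:
            ctx.indent = outer_indent
            return items
        line_level, content = result
        if level is None:
            level = line_level
        if line_level > level:         # deeper indent: parse a nested block
            ctx.pos, ctx.indent = saved, level + 1
            items[-1] = (items[-1], parse_block(ctx))
            ctx.indent = outer_indent
        else:
            items.append(content)

src = "a\n  b\n  c\nd\n"
print(parse_block(Context(src)))       # [('a', ['b', 'c']), 'd']
```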
Abstract:
In clinical practice, traditional X-ray radiography is widely used, and knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer-aided diagnosis, hip surgery planning and image-guided interventions. This paper presents a fully automatic approach for landmark detection and shape segmentation of both pelvis and femur in conventional AP X-ray images. Our approach is based on the framework of landmark detection via Random Forest (RF) regression and shape regularization via hierarchical sparse shape composition. We propose a visual feature, FL-HoG (Flexible-Level Histogram of Oriented Gradients), and a feature selection algorithm based on trace ratio optimization to improve the robustness and efficacy of RF-based landmark detection. The landmark detection result is then used in a hierarchical sparse shape composition framework for shape regularization. Finally, the extracted shape contour is fine-tuned by a post-processing step based on low-level image features. The experimental results demonstrate that our feature selection algorithm reduces the feature dimension by a factor of 40 and improves both training and test efficiency. Further experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.2 mm for the femur and 1.9 mm for the pelvis.
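A minimal sketch of the general recipe, regressing a landmark position from HoG-style features with a Random Forest (here with scikit-image and scikit-learn as stand-ins; the paper's FL-HoG descriptor and trace ratio feature selection are not reproduced, and all data below are synthetic):

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def hog_features(patch):
    """Plain HoG descriptor of a 64x64 patch (stand-in for the paper's FL-HoG)."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Synthetic training data: image patches with a known landmark offset (dx, dy).
patches = rng.random((200, 64, 64))
offsets = rng.uniform(-10, 10, size=(200, 2))

X = np.array([hog_features(p) for p in patches])
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, offsets)

# At test time each patch votes for the landmark location via its predicted offset.
test_patch = rng.random((64, 64))
predicted_offset = forest.predict(hog_features(test_patch)[None, :])
print(predicted_offset)
```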
Abstract:
Stray light contamination considerably reduces the precision of photometric measurements of faint stars for low-altitude spaceborne observatories. When measuring faint objects, stray light contamination must be dealt with in order to avoid systematic impacts on low signal-to-noise images. Stray light contamination can be represented by a flat offset in CCD data. Mitigation begins with a comprehensive study during the design phase, followed by target pointing optimisation and post-processing methods. We present a code that simulates the stray light contamination in low Earth orbit coming from reflection of solar light by the Earth. StrAy Light SimulAtor (SALSA) is a tool intended to be used at an early stage to evaluate the effective visible region of the sky and, therefore, to optimise the observation sequence. SALSA can compute Earth stray light contamination for significant periods of time, allowing mission-wide parameters to be optimised (e.g. imposing constraints on the point source transmission function (PST) and/or on the altitude of the satellite). It can also be used to study the behaviour of the stray light at different seasons or latitudes. Given the position of the satellite with respect to the Earth and the Sun, SALSA computes the stray light at the entrance of the telescope following a geometrical technique. After characterising the illuminated region of the Earth, the portion of illuminated Earth that affects the satellite is calculated. Then, the flux of reflected solar photons is evaluated at the entrance of the telescope. Using the PST of the instrument, the final stray light contamination at the detector is calculated. The analysis tools include time series analysis of the contamination, evaluation of the sky coverage and an object visibility predictor. Effects of the South Atlantic Anomaly and of any shutdown periods of the instrument can be added. Several designs or mission concepts can be easily tested and compared. The code is not intended as a stand-alone mission design tool. Its mandatory inputs are a time series describing the trajectory of the satellite and the characteristics of the instrument. This software suite has been applied to the design and analysis of CHEOPS (CHaracterizing ExOPlanet Satellite), a mission that requires very high precision photometry to detect very shallow transits of exoplanets. Different altitudes and detector characteristics have been studied in order to find the parameters that best reduce the effect of contamination.
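A minimal sketch of the kind of geometrical computation described, summing Lambertian-reflected solar flux over Earth surface elements that are both sunlit and visible from the satellite; the constants, the Monte Carlo sampling, and the omission of the instrument PST are assumptions of this illustration, not SALSA's implementation:

```python
import numpy as np

R_EARTH = 6371e3          # m
SOLAR_FLUX = 1361.0       # W/m^2 at Earth
ALBEDO = 0.3              # mean Bond albedo, assumed constant here

def earthshine_at_aperture(sat_pos, sun_dir, n_samples=20000, seed=0):
    """Crude estimate of Earth-reflected solar flux [W/m^2] at a satellite.

    sat_pos : satellite position vector in an Earth-centred frame [m]
    sun_dir : unit vector pointing from Earth towards the Sun
    """
    rng = np.random.default_rng(seed)
    # Uniform random points on the sphere (Monte Carlo surface integration).
    v = rng.normal(size=(n_samples, 3))
    normals = v / np.linalg.norm(v, axis=1, keepdims=True)
    points = R_EARTH * normals

    to_sat = sat_pos - points
    dist = np.linalg.norm(to_sat, axis=1)
    to_sat /= dist[:, None]

    cos_sun = normals @ sun_dir          # > 0 : surface element is sunlit
    cos_sat = (normals * to_sat).sum(1)  # > 0 : element is visible from satellite
    lit_and_seen = (cos_sun > 0) & (cos_sat > 0)

    # Lambertian reflection: radiance ~ albedo * E_sun * cos_sun / pi,
    # irradiance at the satellite ~ radiance * cos_sat * dA / dist^2.
    dA = 4 * np.pi * R_EARTH**2 / n_samples
    contrib = ALBEDO * SOLAR_FLUX * cos_sun * cos_sat * dA / (np.pi * dist**2)
    return contrib[lit_and_seen].sum()

# Example: 700 km altitude, satellite above the terminator.
sat = np.array([R_EARTH + 700e3, 0.0, 0.0])
sun = np.array([0.0, 1.0, 0.0])
print(earthshine_at_aperture(sat, sun))
```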
Abstract:
Today, there is little knowledge of the attitude state of decommissioned intact objects in Earth orbit. Observational means have advanced in the past years, but are still limited with respect to an accurate estimate of motion vector orientations and magnitudes. Such knowledge is needed especially for the preparation of Active Debris Removal (ADR) missions as planned by ESA's Clean Space initiative, and for contingency scenarios for ESA spacecraft like ENVISAT. ESA's "Debris Attitude Motion Measurements and Modelling" project (ESA Contract No. 40000112447), led by the Astronomical Institute of the University of Bern (AIUB), addresses this problem. The goal of the project is to achieve a good understanding of the attitude evolution and of the relevant internal and external effects involved. To characterize the attitude state of selected targets in LEO and GTO, multiple observation methods are combined. Optical observations are carried out by AIUB, Satellite Laser Ranging (SLR) is performed by the Space Research Institute of the Austrian Academy of Sciences (IWF), and radar measurements and signal level determination are provided by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR). The In-Orbit Tumbling Analysis tool (ιOTA) is a prototype software currently in development by Hyperschall Technologie Göttingen GmbH (HTG) within the framework of the project. ιOTA will be a highly modular software tool to perform short- (days), medium- (months) and long-term (years) propagation of the orbit and attitude motion (six degrees of freedom) of spacecraft in Earth orbit. The simulation takes into account all relevant acting forces and torques, including aerodynamic drag, solar radiation pressure, gravitational influences of Earth, Sun and Moon, eddy current damping, impulse and momentum transfer from space debris or micrometeoroid impact, as well as the optional definition of particular spacecraft-specific influences like tank sloshing, reaction wheel behaviour, magnetic torquer activity and thruster firing. The purpose of ιOTA is to provide high-accuracy short-term simulations to support observers and potential ADR missions, as well as medium- and long-term simulations to study the significance of the particular internal and external influences on the attitude, especially damping factors and momentum transfer. The simulation will also enable investigation of the altitude dependency of the particular external influences. ιOTA's post-processing modules will generate synthetic measurements for observers and for software validation. The validation of the software will be done by cross-calibration with observations and measurements acquired by the project partners.
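ιOTA itself is not public; as a minimal sketch of the core of such a propagator, the rigid-body attitude rates can be integrated from Euler's equations, with a crude linear damping term standing in for effects like eddy-current damping (all numbers below are illustrative):

```python
import numpy as np

def propagate_attitude(I, omega0, t_end, dt=0.1, damping=0.0):
    """Integrate Euler's rigid-body equations I*dw/dt + w x (I*w) = T.

    I        : (3,) principal moments of inertia [kg m^2]
    omega0   : (3,) initial body angular rates [rad/s]
    damping  : simple linear damping coefficient standing in for e.g. eddy currents
    """
    I = np.asarray(I, float)
    omega = np.array(omega0, float)
    history = [omega.copy()]
    for _ in range(int(t_end / dt)):
        torque = -damping * omega                      # placeholder external torque
        omega_dot = (torque - np.cross(omega, I * omega)) / I
        omega += dt * omega_dot                        # explicit Euler step
        history.append(omega.copy())
    return np.array(history)

# Example: illustrative principal inertias, an initial tumble of a few deg/s, mild damping.
rates = propagate_attitude(I=[17000.0, 124000.0, 129000.0],
                           omega0=np.radians([2.0, 0.5, 0.1]),
                           t_end=600.0, damping=5.0)
print(rates[-1])
```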
Abstract:
OBJECTIVES
To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data.
METHODS
Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for the analyses.
RESULTS
There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D<0.17 mm), as expected, followed by the AC and BZ superimpositions, which presented a similar level of accuracy (D<0.5 mm). 3P and 1Z were the least accurate superimpositions (0.79
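For reference, the landmark-based registration underlying the 3P (three-point) technique can be computed in closed form with the Kabsch/Procrustes solution. A minimal sketch (illustrative only, not the software used in the study; the landmark coordinates are invented):

```python
import numpy as np

def rigid_register(source_pts, target_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    source landmarks onto target landmarks (Kabsch/Procrustes solution)."""
    src_c = source_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    H = (source_pts - src_c).T @ (target_pts - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Example: a handful of anatomical landmarks picked on pre- and post-treatment surfaces.
pre = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
post = pre @ Rz.T + np.array([1.0, 2.0, 3.0])       # known rigid motion

R, t = rigid_register(pre, post)
print(np.allclose(pre @ R.T + t, post))             # True: the motion is recovered
```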
Abstract:
We re-evaluate the Greenland mass balance for the recent period using low-pass Independent Component Analysis (ICA) post-processing of the Level-2 GRACE data (2002-2010) from different official providers (UTCSR, JPL, GFZ) and confirm the substantial present-day ice mass loss of this ice sheet, in the range of -70 to -90 Gt/yr, due to negative contributions of the glaciers on the east coast. We highlight the high interannual variability of mass variations of the Greenland Ice Sheet (GrIS), especially the recent deceleration of ice loss in 2009-2010, once seasonal cycles are robustly removed by Seasonal Trend Loess (STL) decomposition. Interannual variability leads to varying trend estimates depending on the considered time span. Correction of post-glacial rebound effects on ice mass trend estimates represents no more than 8 Gt/yr over the whole ice sheet. We also investigate possible climatic causes that can explain these interannual ice mass variations, as strong correlations between the GRACE-based mass balance and atmosphere/ocean parameters are established: (1) changes in snow accumulation, and (2) the influence of inputs of warm ocean water that periodically accelerate the calving of glaciers in coastal regions, and feedback effects of coastal water cooling by fresh currents from glacier melting. These results suggest that the Greenland mass balance is driven by coastal sea surface temperature at time scales shorter than accumulation.
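The seasonal-cycle removal step can be illustrated with a standard STL decomposition. A minimal sketch using statsmodels on a synthetic monthly mass-anomaly series (not the actual GRACE processing chain; all numbers are invented):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic monthly mass-anomaly series (Gt): secular trend + seasonal cycle + noise.
t = pd.date_range("2002-04-01", "2010-12-01", freq="MS")
months = np.arange(len(t))
series = pd.Series(-80.0 * months / 12                      # ~ -80 Gt/yr secular loss
                   + 60 * np.sin(2 * np.pi * months / 12)   # seasonal cycle
                   + np.random.default_rng(0).normal(0, 10, len(t)),
                   index=t)

# Loess-based seasonal-trend decomposition; the robust flag damps outliers.
result = STL(series, period=12, robust=True).fit()
deseasonalized = series - result.seasonal

# A simple linear fit to the deseasonalized series gives the interannual trend (Gt/yr).
trend_gt_per_yr = np.polyfit(months / 12, deseasonalized.values, 1)[0]
print(round(trend_gt_per_yr, 1))
```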
Abstract:
High-latitude ecosystems play an important role in the global carbon cycle and in regulating the climate system, and are presently undergoing rapid environmental change. Accurate land cover data sets are required both to document these changes and to provide land-surface information for benchmarking and initializing Earth system models. Earth system models also require specific land cover classification systems based on plant functional types (PFTs), rather than species or ecosystems, so post-processing of existing land cover data is often required. This study compares, over Siberia, multiple land cover data sets against one another and with auxiliary data to identify key uncertainties that contribute to variability in PFT classifications and would introduce errors in Earth system modeling. Land cover classification systems from GLC 2000, GlobCover 2005 and 2009, and MODIS collections 5 and 5.1 are first aggregated to a common legend, and then compared to high-resolution land cover classification systems, vegetation continuous fields (MODIS VCFs) and satellite-derived tree heights (to discriminate between sparse, shrub, and forest vegetation). The GlobCover data set, with a lower threshold for tree cover, taller tree heights and a better spatial resolution, tends to have distributions of tree cover that agree better with the high-resolution data. It has therefore been chosen to build new PFT maps for the ORCHIDEE land surface model at 1 km scale. Compared to the original PFT data set, the new PFT maps based on GlobCover 2005 and an updated cross-walking approach mainly differ in the characterization of forests and degree of tree cover. The partition of grasslands and bare soils now appears more realistic compared with ground truth data. This new vegetation map provides a framework for further development of new PFTs in the ORCHIDEE model, such as shrubs, lichens and mosses, to better represent the water and carbon cycles in northern latitudes. Updated land cover data sets are critical for improving and maintaining the relevance of Earth system models for assessing climate and human impacts on biogeochemistry and biophysics.
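A minimal sketch of the cross-walking step, mapping land cover classes to per-pixel PFT fractions; the class identifiers and fractions below are illustrative placeholders, not the GlobCover-to-ORCHIDEE table used in the study:

```python
import numpy as np

# Hypothetical cross-walk: land cover class id -> fractions of each PFT
# (columns: bare soil, boreal needleleaf forest, grassland).
CROSS_WALK = {
    1: [0.0, 0.9, 0.1],   # closed needleleaf forest
    2: [0.1, 0.4, 0.5],   # open woodland
    3: [0.2, 0.0, 0.8],   # grassland/steppe
    4: [1.0, 0.0, 0.0],   # bare areas
}

def land_cover_to_pft(class_map):
    """Convert a 2-D land cover class map into per-pixel PFT fraction layers."""
    n_pft = len(next(iter(CROSS_WALK.values())))
    pft = np.zeros(class_map.shape + (n_pft,))
    for cls, fractions in CROSS_WALK.items():
        pft[class_map == cls] = fractions
    return pft

# Toy 3x3 class map and its PFT fractions; fractions sum to 1 in each pixel.
classes = np.array([[1, 1, 2], [3, 3, 4], [2, 4, 1]])
pft_map = land_cover_to_pft(classes)
print(pft_map.sum(axis=-1))   # all ones
```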
Abstract:
Detailed data on land use and land cover constitute important information for Earth system models, environmental monitoring and ecosystem services research. Global land cover products are evolving rapidly; however, there is still a lack of information, particularly for heterogeneous agricultural landscapes. We censused land use and land cover field by field in the agricultural mosaic catchment Haean in South Korea. We recorded the land cover types with additional information on agricultural practice. In this paper we introduce the data, their collection and the post-processing protocol. Furthermore, because it is important to quantitatively evaluate available land use and land cover products, we compared our data with the MODIS Land Cover Type product (MCD12Q1). During the study period, a large portion of dry fields was converted to perennial crops. Compared to our data, the forested area was underrepresented and the agricultural area overrepresented in MCD12Q1. In addition, linear landscape elements such as waterbodies were missing from the MODIS product due to its coarse spatial resolution. The data presented here can be useful for Earth science and ecosystem services research.
Bathymetric map of Heron Reef, Australia, derived from airborne hyperspectral data at 1 m resolution
Abstract:
A simple method for efficient inversion of arbitrary radiative transfer models for image analysis is presented. The method operates by representing the shape of the function that maps model parameters to spectral reflectance by an adaptive look-up tree (ALUT) that evenly distributes the discretization error of tabulated reflectances in spectral space. A post-processing step organizes the data into a binary space partitioning tree that facilitates an efficient inversion search algorithm. In an example shallow water remote sensing application, the method performs faster than an implementation of previously published methodology and has the same accuracy in bathymetric retrievals. The method has no user configuration parameters requiring expert knowledge and minimizes the number of forward model runs required, making it highly suitable for routine operational implementation of image analysis methods. For the research community, straightforward and robust inversion allows research to focus on improving the radiative transfer models themselves without the added complication of devising an inversion strategy.
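A minimal sketch of look-up-table inversion by nearest-neighbour search in spectral space, using a k-d tree as a stand-in for the adaptive look-up tree and binary space partitioning structure described above (the toy two-parameter forward model is purely illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def forward_model(depth, albedo, wavelengths):
    """Toy stand-in for a radiative transfer model: reflectance vs. wavelength."""
    return albedo * np.exp(-0.05 * depth * wavelengths / wavelengths.mean())

wavelengths = np.linspace(400, 700, 30)          # nm, 30 spectral bands

# Tabulate the forward model over a parameter grid (the "look-up table").
depths = np.linspace(0.5, 20.0, 60)
albedos = np.linspace(0.05, 0.6, 40)
params = np.array([(d, a) for d in depths for a in albedos])
table = np.array([forward_model(d, a, wavelengths) for d, a in params])

tree = cKDTree(table)                            # index the tabulated spectra

def invert(observed_spectrum):
    """Return the (depth, albedo) whose tabulated spectrum is closest to the observation."""
    _, idx = tree.query(observed_spectrum)
    return params[idx]

# Example: invert a noisy spectrum generated with known parameters.
truth = (7.3, 0.25)
obs = forward_model(*truth, wavelengths) + np.random.default_rng(1).normal(0, 0.002, 30)
print(invert(obs))                               # close to (7.3, 0.25)
```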
Abstract:
There is interest in performing pin-by-pin calculations coupled with thermal hydraulics so as to improve the accuracy of nuclear reactor analysis. In the framework of the EU NURISP project, INRNE and UPM have generated an experimental version of a few-group diffusion cross-section library with discontinuity factors intended for VVER analysis at the pin level with the COBAYA3 code. The transport code APOLLO2 was used to perform the branching calculations. As a first proof of principle, the library was created for fresh fuel and covers almost the full parameter space of steady-state and transient conditions. The main objective is to test the calculation schemes and post-processing procedures, including multi-pin branching calculations. Two library options are being studied: one based on linear table interpolation and another using a functional fitting of the cross sections. The libraries generated with APOLLO2 have been tested with the pin-by-pin diffusion model in COBAYA3 including discontinuity factors, first comparing 2D results against the APOLLO2 reference solutions and afterwards using the libraries to compute a 3D assembly problem coupled with a simplified thermal-hydraulic model.
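A minimal sketch of the linear-table-interpolation option, tabulating a cross section over two state parameters and interpolating at off-grid conditions (illustrative values only; not the APOLLO2/COBAYA3 library format):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative branching grid: fuel temperature [K] and moderator density [g/cm^3].
fuel_temp = np.array([560.0, 900.0, 1200.0, 1500.0])
mod_dens = np.array([0.65, 0.72, 0.78])

# Hypothetical tabulated absorption cross section [1/cm] at each grid point.
sigma_a = np.array([
    [0.0102, 0.0108, 0.0113],
    [0.0106, 0.0112, 0.0117],
    [0.0109, 0.0115, 0.0120],
    [0.0112, 0.0118, 0.0123],
])

interpolator = RegularGridInterpolator((fuel_temp, mod_dens), sigma_a)

# Query the library at an off-grid operating point, e.g. during a coupled
# neutronics / thermal-hydraulics iteration.
print(interpolator([[1050.0, 0.70]]))   # linearly interpolated sigma_a
```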