944 results for Space analysis


Relevance: 30.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.

Relevance: 30.00%

Abstract:

Urban agriculture is a phenomenon that can be observed world-wide, particularly in cities of developing countries. It contributes significantly to food security and food safety and has sustained the livelihoods of urban and peri-urban low-income dwellers in developing countries for many years. Population increase due to rural-urban migration and natural urbanisation, formal as well as informal, are competing with urban farming for available space and scarce water resources. A multitemporal and multisensoral urban change analysis over a period of 25 years (1982-2007) was performed in order to measure and visualise the urban expansion along the Kizinga and Mzinga valleys in the south of Dar es Salaam. Air photos and VHR satellite data were analysed using a combination of anisotropic textural measures and spectral information. The study revealed that unplanned built-up area is expanding continuously, while vegetation cover and agricultural land decline at a fast rate. The validation showed that the overall classification accuracy varied depending on the database. The extracted built-up areas were used for visual interpretation and mapping purposes and served as an information source for another research project. The maps visualise urban congestion and an expansion of nearly 18% of the total analysed area in the Kizinga valley between 1982 and 2007. The same development can be observed in the less developed and more remote Mzinga valley between 1981 and 2002. Both areas underwent fast changes in which land prices still tend to go up and an influx of people from both rural and urban areas continuously increases density, with the consequence of increasing multiple land-use interests.

Relevance: 30.00%

Abstract:

The malaria parasite Plasmodium depends on the tight control of cysteine-protease activity throughout its life cycle. Recently, the characterization of a new class of potent inhibitors of cysteine proteases (ICPs) secreted by Plasmodium has been reported. Here, the recombinant production, purification and crystallization of the inhibitory C-terminal domain of ICP from P. berghei in complex with the P. falciparum haemoglobinase falcipain-2 is described. The 1:1 complex was crystallized in space group P4(3), with unit-cell parameters a = b = 71.15, c = 120.09 Å. A complete diffraction data set was collected to a resolution of 2.6 Å.

Relevance: 30.00%

Abstract:

Background: Falls of elderly people may cause permanent disability or death. Particularly susceptible are elderly patients in rehabilitation hospitals. We systematically reviewed the literature to identify falls prediction tools available for assessing elderly inpatients in rehabilitation hospitals.
Methods and Findings: We searched six electronic databases using comprehensive search strategies developed for each database. Estimates of sensitivity and specificity were plotted in ROC space graphs and pooled across studies. Our search identified three studies which assessed the prediction properties of falls prediction tools in a total of 754 elderly inpatients in rehabilitation hospitals. Only the STRATIFY tool was assessed in all three studies; the other identified tools (PJC-FRAT and DOWNTON) were each assessed by a single study. For a STRATIFY cut-score of two, pooled sensitivity was 73% (95% CI 63 to 81%) and pooled specificity was 42% (95% CI 34 to 51%). An indirect comparison of the tools across studies indicated that the DOWNTON tool has the highest sensitivity (92%), while the PJC-FRAT offers the best balance between sensitivity and specificity (73% and 75%, respectively). All studies presented major methodological limitations.
Conclusions: We did not identify any tool which had an optimal balance between sensitivity and specificity, or which was clearly better than simple clinical judgment of the risk of falling. The limited number of identified studies, with major methodological limitations, impairs sound conclusions on the usefulness of falls risk prediction tools in geriatric rehabilitation hospitals.
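The kind of pooling reported above can be sketched as a naive summation of 2x2 counts with a Wilson score interval. The per-study counts below are hypothetical, chosen only so the pooled sensitivity lands at the reported 73%; they are not the review's data, and the review's own meta-analytic pooling may differ:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def pooled_sensitivity(studies):
    """Pool sensitivity by summing true positives (tp) and missed
    fallers (fn) across studies -- a naive fixed-effect-style pooling."""
    tp = sum(s["tp"] for s in studies)
    fn = sum(s["fn"] for s in studies)
    n = tp + fn
    return tp / n, wilson_interval(tp, n)

# Hypothetical per-study 2x2 counts (NOT taken from the review):
studies = [
    {"tp": 30, "fn": 10},
    {"tp": 25, "fn": 12},
    {"tp": 18, "fn": 5},
]
sens, (lo, hi) = pooled_sensitivity(studies)
```

With these invented counts, `sens` is 0.73 and the Wilson interval is roughly 0.64 to 0.81, comparable in width to the interval the review reports.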

Relevance: 30.00%

Abstract:

One of the most intriguing phenomena in glass-forming systems is the dynamic crossover (T(B)), occurring well above the glass temperature (T(g)). So far, it has been estimated mainly from the linearized derivative analysis of experimental data for the primary relaxation time τ(T) or viscosity η(T), originally proposed by Stickel et al. [J. Chem. Phys. 104, 2043 (1996); J. Chem. Phys. 107, 1086 (1997)]. However, this formal procedure is based on the general validity of the Vogel-Fulcher-Tammann equation, which has been strongly questioned recently [T. Hecksher et al., Nature Phys. 4, 737 (2008); P. Lunkenheimer et al., Phys. Rev. E 81, 051504 (2010); J. C. Martinez-Garcia et al., J. Chem. Phys. 134, 024512 (2011)]. We present a qualitatively new way to identify the dynamic crossover, based on analysis in the apparent enthalpy space (H(a)' = d ln τ/d(1/T)) via a new plot, ln H(a)' vs. 1/T, supported by the Savitzky-Golay filtering procedure to gain insight into the noise-distorted high-order derivatives. It is shown that, depending on the ratio f = m(high)/m(low) between the "virtual" fragility in the high-temperature dynamic domain (m(high)) and the "real" fragility at T(g) (the low-temperature dynamic domain, m = m(low)), glass formers can be split into two groups, f < 1 and f > 1. The link of this phenomenon to the ratio between the apparent enthalpy and the activation energy, as well as to the behavior of the configurational entropy, is indicated.
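A minimal numerical sketch of the quantities involved, assuming synthetic Vogel-Fulcher-Tammann data (the parameters tau0, D, T0 below are illustrative, not from the paper): compute the apparent enthalpy H(a)' = d ln τ/d(1/T) by numerical differentiation and smooth it with a simple Savitzky-Golay-style windowed polynomial fit:

```python
import numpy as np

def apparent_enthalpy(inv_T, ln_tau):
    """H(a)' = d ln(tau) / d(1/T), by numerical differentiation."""
    return np.gradient(ln_tau, inv_T)

def savgol_smooth(y, window=11, order=3):
    """Minimal Savitzky-Golay-style smoother: least-squares polynomial
    fit in each centred window, evaluated at the window centre.
    (scipy.signal.savgol_filter is the usual tool; this avoids the
    dependency.)"""
    half = window // 2
    out = y.astype(float).copy()
    x = np.arange(-half, half + 1)
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x, y[i - half:i + half + 1], order)
        out[i] = np.polyval(coeffs, 0.0)
    return out

# Synthetic VFT data (illustrative parameters): ln tau = ln tau0 + D*T0/(T - T0)
tau0, D, T0 = 1e-14, 8.0, 170.0
T = np.linspace(220.0, 320.0, 201)
inv_T = 1.0 / T
ln_tau = np.log(tau0) + D * T0 / (T - T0)
Ha = apparent_enthalpy(inv_T, ln_tau)
Ha_smooth = savgol_smooth(Ha)
# For pure VFT the analytic result is H(a)' = D*T0*T**2 / (T - T0)**2,
# so ln H(a)' vs. 1/T is curved; a crossover would appear as a change
# in the character of this plot in measured data.
```

On noisy experimental τ(T) the smoothing step is what makes the high-order derivative content of ln H(a)' vs. 1/T interpretable at all, which is the role the paper assigns to the Savitzky-Golay filter.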

Relevance: 30.00%

Abstract:

Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling indicated that the exhaust gas recirculation flow rate was significantly underestimated and the volumetric efficiency overestimated by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the fuel-oxygen ratio estimated by the electronic control module was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine-operating-parameter model input space into a more fundamental, lower-dimensional space so that a nearest neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter to within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs was observed in the transformed space, as compared to the engine operating parameter space.
This more uniform, compact model input space may explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers relative to the steady-state training data.
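The nearest neighbor step can be sketched as follows. The 2-D "transformed space" coordinates and the synthetic smoke-like response below are entirely hypothetical; the paper's actual transformation and response variables are not reproduced here:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=5):
    """Average the responses of the k nearest training points in the
    transformed (reduced-dimensionality) input space."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(y_train[nearest].mean())

# Hypothetical 2-D transformed space with a synthetic smooth response;
# neither the coordinates nor the response model come from the paper.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 1]
pred = knn_predict(X, y, np.array([0.2, 0.1]), k=5)
```

The appeal of this approach in a low-dimensional, nearly uncorrelated space is that local averaging needs no parametric model form, which matches the "nonparametric" label the abstract uses.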

Relevance: 30.00%

Abstract:

Currently, observations of space debris are primarily performed with ground-based sensors. These sensors have a detection limit at a few centimetres diameter for objects in Low Earth Orbit (LEO) and at about two decimetres diameter for objects in Geostationary Orbit (GEO). The few space-based debris observations stem mainly from in-situ measurements and from the analysis of returned spacecraft surfaces. Both provide information about mostly sub-millimetre-sized debris particles. As a consequence, the population of centimetre- and millimetre-sized debris objects remains poorly understood. The development, validation and improvement of debris reference models drive the need for measurements covering the whole diameter range. In 2003 the European Space Agency (ESA) initiated a study entitled "Space-Based Optical Observation of Space Debris". The first tasks of the study were to define user requirements and to develop an observation strategy for a space-based instrument capable of observing uncatalogued millimetre-sized debris objects. Only passive optical observations were considered, focussing on mission concepts for the LEO and GEO regions, respectively. Starting from the requirements and the observation strategy, an instrument system architecture and an associated operations concept have been elaborated. The instrument system architecture covers the telescope, camera and onboard processing electronics. The proposed telescope is a folded Schmidt design, characterised by a 20 cm aperture and a large field of view of 6°. The camera design is based on the use of either a frame-transfer charge-coupled device (CCD) or a cooled hybrid sensor with fast read-out. A four-megapixel sensor is foreseen. For the onboard processing, a scalable architecture has been selected. Performance simulations have been executed for the system as designed, focussing on the orbit determination of observed debris particles and on the analysis of the object detection algorithms.
In this paper we present some of the main results of the study. A short overview of the user requirements and observation strategy is given. The architectural design of the instrument is discussed, and the main trade-offs are outlined. Insight into the results of the performance simulations is provided.
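The stated optics figures allow a quick back-of-the-envelope plate-scale estimate. The 2048 x 2048 sensor layout and the 10 s field-crossing time below are assumptions for illustration; the study specifies only "four megapixel" and does not give this crossing time:

```python
# Plate scale: 6 deg field of view on an assumed 2048 x 2048 sensor.
fov_deg = 6.0
pixels_across = 2048
arcsec_per_pixel = fov_deg * 3600.0 / pixels_across   # ~10.5 arcsec/pixel

# Illustrative streak-rate estimate: an object crossing the full field
# in an assumed 10 s moves this many pixels per second on the detector.
rate_deg_per_s = fov_deg / 10.0
pixels_per_second = rate_deg_per_s * 3600.0 / arcsec_per_pixel
```

A coarse plate scale of roughly 10.5 arcsec per pixel is consistent with a survey-style instrument that trades angular resolution for the large field needed to detect uncatalogued objects.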

Relevance: 30.00%

Abstract:

This doctoral thesis presents computational work, and its synthesis with experiments, for internal (tube and channel geometries) as well as external (flow of a pure vapor over a horizontal plate) condensing flows. The computational work obtains accurate numerical simulations of the full two-dimensional governing equations for steady and unsteady condensing flows in gravity/0g environments. This doctoral work investigates flow features, flow regimes, attainability issues, stability issues, and responses to boundary fluctuations for condensing flows in different flow situations. The research finds new features of unsteady solutions of condensing flows, reveals interesting differences between gravity-driven and shear-driven situations, and discovers novel boundary-condition sensitivities of shear-driven internal condensing flows. The synthesis of computational and experimental results presented here for gravity-driven in-tube flows lays the framework for future two-phase component analysis in any thermal system. It is shown for both gravity-driven and shear-driven internal condensing flows that the steady governing equations have unique solutions for a given inlet pressure, given inlet vapor mass flow rate, and fixed cooling method for the condensing surface. However, the unsteady equations of shear-driven internal condensing flows can yield different "quasi-steady" solutions based on different specifications of exit pressure (equivalently, exit mass flow rate) concurrent with the inlet pressure specification. This thesis presents a novel categorization of internal condensing flows based on their sensitivity to concurrently applied boundary (inlet and exit) conditions. The computational investigations of an external shear-driven flow of vapor condensing over a horizontal plate show the limits of applicability of the analytical solution. Simulations of this external condensing flow address its stability issues and throw light on flow regime transitions caused by ever-present bottom-wall vibrations.
It is identified that the laminar-to-turbulent transition for these flows can be affected by these vibrations. Detailed dynamic stability analysis of this shear-driven external condensing flow results in the introduction of a new variable, which characterizes the ratio of the strength of the underlying stabilizing attractor to that of the destabilizing vibrations. Besides the development of CFD tools and computational algorithms, a direct application of the research done for this thesis is the effective prediction and design of two-phase components in thermal systems used in different applications. Some of the important internal condensing flow results concerning sensitivities to boundary fluctuations are also expected to be applicable to the flow boiling phenomenon. The novel flow sensitivities discovered through this research, if employed effectively after system-level analysis, will result in the development of better control strategies in ground- and space-based two-phase thermal systems.

Relevance: 30.00%

Abstract:

Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamical systems that dominate various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines that capture flow patterns and how to pick good viewpoints for observing flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, and the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we designed FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. We enable observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnections in the transformed space.
Most viewpoint selection methods consider only external viewpoints outside of the flow field. This does not convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints to provide users with close-up views for detailed observation of hidden or occluded internal flow features [54]. This work has also been extended to deal with unsteady flow fields. Besides flow field visualization, some other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we developed a set of visualization tools to provide users with an intuitive way to learn and understand these algorithms.
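The information-channel idea behind joint streamline/viewpoint selection rests on mutual information between a set of streamlines and a set of viewpoints. As a hedged sketch (the visibility table below is hypothetical, and the cited works define their channel differently and in far more detail), mutual information over a joint streamline-viewpoint distribution can be computed as:

```python
import numpy as np

def mutual_information(joint):
    """I(S; V) in bits from an unnormalised joint table p(s, v):
    rows index streamlines, columns index viewpoints."""
    joint = joint / joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal over streamlines
    pv = joint.sum(axis=0, keepdims=True)   # marginal over viewpoints
    nz = joint > 0                          # skip zero cells (0*log0 = 0)
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pv)[nz])).sum())

# Hypothetical visibility table: entry (s, v) is proportional to how
# much of streamline s is visible from viewpoint v.
vis = np.array([
    [4.0, 1.0, 0.0],
    [0.0, 2.0, 3.0],
    [1.0, 1.0, 1.0],
])
score = mutual_information(vis)
```

A high score means viewpoints strongly discriminate between streamlines, which is the kind of signal a dual information channel can exploit to rank both streamlines and viewpoints.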

Relevance: 30.00%

Abstract:

While voxel-based 3-D MRI analysis methods, as well as the assessment of subtracted ictal versus interictal perfusion studies (SISCOM), have proven their potential in the detection of lesions in focal epilepsy, a combined approach has not yet been reported. The present study investigates whether individual automated voxel-based 3-D MRI analyses combined with SISCOM studies contribute to enhanced detection of mesiotemporal epileptogenic foci. Seven consecutive patients with refractory complex partial epilepsy were prospectively evaluated by SISCOM and voxel-based 3-D MRI analysis. The functional perfusion maps and voxel-based statistical maps were coregistered in 3-D space. In five patients with temporal lobe epilepsy (TLE), the area of ictal hyperperfusion and corresponding structural abnormalities detected by 3-D MRI analysis were identified within the same temporal lobe. In two patients, additional structural and functional abnormalities were detected beyond the mesial temporal lobe. Five patients with TLE underwent epilepsy surgery with favourable postoperative outcome (Engel class Ia and Ib) after 3-5 years of follow-up, while two patients remained on conservative treatment. In summary, multimodal assessment of structural abnormalities by voxel-based analysis and SISCOM may contribute to advanced observer-independent preoperative assessment of seizure origin.
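The subtraction step at the heart of SISCOM can be sketched on synthetic volumes. This is a deliberately simplified illustration (real SISCOM also requires spatial coregistration of the SPECT volumes with each other and with MRI, which is omitted here, and all intensity values below are made up):

```python
import numpy as np

def siscom_map(ictal, interictal, z_thresh=2.0):
    """Subtraction analysis in the spirit of SISCOM: intensity-normalise
    both perfusion volumes, subtract, express the difference as z-scores,
    and keep only voxels beyond the threshold."""
    a = ictal / ictal.mean()
    b = interictal / interictal.mean()
    diff = a - b
    z = (diff - diff.mean()) / diff.std()
    return np.where(np.abs(z) >= z_thresh, z, 0.0)

# Synthetic volumes: identical baseline perfusion plus a focal
# ictal hyperperfusion "blob" (all values invented for illustration).
rng = np.random.default_rng(2)
interictal = 100.0 + rng.standard_normal((20, 20, 20))
ictal = interictal.copy()
ictal[8:12, 8:12, 8:12] += 15.0
zmap = siscom_map(ictal, interictal)
```

The surviving voxels form the hyperperfusion cluster that, in the study's pipeline, is then coregistered with the voxel-based MRI statistical maps in the same 3-D space.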

Relevance: 30.00%

Abstract:

In ongoing chronic rejection after lung transplantation, alveolar interstitial fibrosis develops. However, little is known about the mechanisms involved. To investigate these mechanisms, the expression of extracellular matrix (ECM) molecules (undulin, decorin, tenascin, laminin, and fibronectin) and cytokines [transforming growth factor (TGF)-beta 1, TGF-beta 3, platelet-derived growth factor (PDGF), and PDGF receptor] was semiquantitatively evaluated in chronically rejected lung allografts using standard immunohistochemical techniques. Additionally, the presence of macrophages was analysed. The present study demonstrates an increased infiltration of macrophages with a concomitant upregulation of cytokines (TGF-beta 1, TGF-beta 3, and PDGF) and an increased deposition of ECM in chronic lung rejection. These cytokines have an important role in the stimulation of fibroblasts, which are a major source of ECM. Upregulated expression of ECM in the alveolar interstitial space leads to alveolar malfunction by thickening of the wall and is thus one of the causative factors of respiratory dysfunction in chronic lung graft rejection.

Relevance: 30.00%

Abstract:

High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but will mainly increase the accuracy of the representation of these processes. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction; in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies, the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are a priori selected as the analysis entity and considered as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g. in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field