965 results for "Index reduction techniques"


Relevance: 100.00%

Abstract:

The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on flexible component model reduction. A modelling methodology is developed based on the adoption of non-ideal joints at interface locations and on the inclusion of component flexibility: both are necessary if one wants to capture the dynamic effects that arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: the angular contact ball bearings are modelled according to a five-DOF nonlinear scheme to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model is implemented to provide enhanced prediction of operation at the conrod big-end locations. Concerning the second matter, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations suitable for the subsequent multibody analyses. A particular component mode selection procedure, based on the concept of Effective Interface Mass, is implemented, allowing an assessment of the accuracy of the reduced models prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. To assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is eventually assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques, and the advantages over the conventional frequency-based truncation approach are discussed.
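
The Craig-Bampton step mentioned above can be illustrated with a short numerical sketch. This is a generic, minimal implementation assuming symmetric mass and stiffness matrices M and K partitioned into interface and interior DOFs; mode selection here is plain frequency truncation, not the Effective Interface Mass criterion or the Modal Truncation Augmentation used in the work.

```python
# Illustrative Craig-Bampton reduction of a generic component (not the cranktrain model).
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary_dofs, n_modes):
    """Reduce symmetric (M, K) with the Craig-Bampton method.

    boundary_dofs : indices of the interface DOFs kept as physical coordinates
    n_modes       : number of fixed-interface normal modes retained
    """
    n = M.shape[0]
    b = np.asarray(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)              # interior DOFs

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]

    # Static (constraint) modes: interior response to unit interface displacements
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes (interface clamped), lowest n_modes kept
    _, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]

    # Craig-Bampton transformation: physical interface DOFs + modal coordinates
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[i, :len(b)] = Psi
    T[i, len(b):] = Phi

    return T.T @ M @ T, T.T @ K @ T, T             # reduced mass, stiffness, basis
```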

Relevance: 100.00%

Abstract:

This work presents the development of an antenna for onboard satellite communication systems based on recommendations ITU-R S.580-6 [1] and ITU-R S.465-5 [2]. The antenna consists of printed elements grouped in an array, working in a frequency band from 7.25 to 8.4 GHz (a 15% bandwidth) that covers transmission and reception simultaneously. The antenna reaches a gain of about 31 dBi, has a radiation pattern with a beam width smaller than 10°, and provides dual circular polarization. It has the capability to steer in elevation, through a Butler matrix, to 45°.
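
As a rough illustration of the beam-steering principle (a Butler matrix feeds the array with a progressive phase shift), the sketch below computes the array factor of a uniform linear array; the element count, spacing, and phase step are illustrative assumptions, not the parameters of the antenna described above.

```python
# Illustrative uniform-linear-array beam steering (parameters are assumptions).
import numpy as np

def array_factor_db(n_elem, d_lambda, phase_step_deg, theta_deg):
    """Normalized array factor (dB) for a progressive inter-element phase shift."""
    theta = np.radians(np.asarray(theta_deg))
    beta = np.radians(phase_step_deg)              # phase set by the Butler matrix port
    n = np.arange(n_elem)[:, None]
    psi = 2 * np.pi * d_lambda * np.sin(theta) + beta
    af = np.abs(np.exp(1j * n * psi).sum(axis=0)) / n_elem
    return 20 * np.log10(np.maximum(af, 1e-6))

theta = np.linspace(-90, 90, 721)
pattern = array_factor_db(8, 0.5, -90.0, theta)    # 8 elements, lambda/2 spacing
print("Beam peak at %.1f deg" % theta[np.argmax(pattern)])   # about +30 deg
```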

Relevance: 100.00%

Abstract:

In this paper, we analyze the performance of several well-known pattern recognition and dimensionality reduction techniques when applied to mass-spectrometry data for odor biometric identification. Motivated by the successful results of previous works capturing the odor from other parts of the body, this work evaluates the feasibility of identifying people by the odor emanating from the hands. Formulated as a machine learning task, the problem is a small-sample-size supervised classification problem in which the input data consist of mass spectrograms from the hand odor of 13 subjects captured in different sessions. The high dimensionality of the data makes it necessary to apply feature selection and extraction techniques together with a simple classifier in order to improve the generalization capabilities of the model. Our experimental results achieve recognition rates over 85%, which reveals that discriminatory information exists in hand odor and points to body odor as a promising biometric identifier.
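
A minimal sketch of the kind of pipeline the abstract describes (feature selection and extraction followed by a simple classifier, evaluated under small-sample conditions) is given below; the estimators, their parameters, and the placeholder data are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative small-sample classification pipeline (placeholder data, assumed estimators).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(65, 2000))        # placeholder "spectrograms": 13 subjects x 5 sessions
y = np.repeat(np.arange(13), 5)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=200)),     # univariate feature selection
    ("extract", PCA(n_components=20)),             # feature extraction
    ("classify", LinearDiscriminantAnalysis()),    # simple linear classifier
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipeline, X, y, cv=cv)
print("Mean recognition rate: %.1f%%" % (100 * scores.mean()))
```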

Relevance: 100.00%

Abstract:

PURPOSE Computed tomography (CT) accounts for more than half of the total radiation exposure from medical procedures, which makes dose reduction in CT an effective means of reducing radiation exposure. We analysed the dose reduction that can be achieved with a new CT scanner [Somatom Edge (E)] that incorporates new developments in hardware (detector) and software (iterative reconstruction). METHODS We compared weighted volume CT dose index (CTDIvol) and dose length product (DLP) values of 25 consecutive patients each, studied with non-enhanced standard brain CT on the new scanner and on two previous models: a 64-row multi-detector CT (MDCT) scanner (S64) and a 16-row MDCT scanner (S16). We analysed signal-to-noise and contrast-to-noise ratios in images from the three scanners, and three neuroradiologists rated image quality to assess whether the dose reduction techniques still yield sufficient diagnostic quality. RESULTS The CTDIvol of scanner E was 41.5% and 36.4% lower than that of scanners S16 and S64, respectively; the DLP values were 40% and 38.3% lower. All differences were statistically significant (p < 0.0001). Signal-to-noise and contrast-to-noise ratios were best in S64; these differences also reached statistical significance. Image analysis, however, showed "non-inferiority" of scanner E regarding image quality. CONCLUSIONS This first experience with the new scanner shows that the new dose reduction techniques allow for up to 40% dose reduction while maintaining image quality at a diagnostically usable level.
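
The dose comparison reported above boils down to simple ratios of CTDIvol/DLP and to noise measurements in image regions of interest; a minimal sketch of these calculations is shown below, with arbitrary placeholder numbers rather than the study's measurements.

```python
# Illustrative dose and CNR calculations (placeholder numbers, not the study's data).
import numpy as np

def percent_reduction(reference, new):
    """Relative reduction of `new` with respect to `reference`, in percent."""
    return 100.0 * (reference - new) / reference

def contrast_to_noise(roi_a, roi_b, roi_background):
    """CNR from the mean HU of two tissue ROIs and the noise (SD) of a background ROI."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(roi_background)

print("CTDIvol reduction: %.1f%%" % percent_reduction(60.0, 40.0))
```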

Relevance: 90.00%

Abstract:

This paper addresses robust model-order reduction of a high-dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Starting from a nonlinear, distributed parameter model of the process that was validated against experimental data from an existing pilot-scale BNR activated sludge plant, we developed a state-space model with 154 state variables. A general algorithm for robustly reducing the nonlinear PDE model is presented, and based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to one with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncation technique is found to give the lowest modelling errors in low frequency ranges and is hence deemed most suitable for controller design and other real-time applications. (C) 2002 Elsevier Science Ltd. All rights reserved.
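
For context, standard balanced truncation of a stable linear state-space model can be sketched as follows; this is a generic square-root implementation under the assumption of a stable, minimal system (A, B, C), not the singular perturbation approximation variant that the paper finds most accurate at low frequencies.

```python
# Illustrative square-root balanced truncation for a stable, minimal LTI system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce (A, B, C) to order r; returns reduced matrices and Hankel singular values."""
    # Controllability/observability Gramians: A P + P A' + B B' = 0, A' Q + Q A + C' C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)

    Lc = cholesky(P, lower=True)                   # P = Lc Lc'
    Lo = cholesky(Q, lower=True)                   # Q = Lo Lo'
    U, s, Vt = svd(Lo.T @ Lc)                      # Hankel singular values in s

    Sr = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ Sr                         # right projection
    Ti = Sr @ U[:, :r].T @ Lo.T                    # left projection (Ti @ T = I)

    return Ti @ A @ T, Ti @ B, C @ T, s

# Usage: Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=30)
```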

Relevance: 90.00%

Abstract:

RATIONALE AND OBJECTIVES: Dose reduction may compromise patients because of a decrease in image quality; therefore, the dose savings achievable with new dose-reduction techniques need to be thoroughly assessed. To avoid repeated studies in one patient, chest computed tomography (CT) scans at different dose levels were performed in corpses, comparing model-based iterative reconstruction (MBIR) as a tool to enhance image quality against current standard full-dose imaging. MATERIALS AND METHODS: Twenty-five human cadavers were scanned (CT HD750) after contrast medium injection at decreasing dose levels D0-D5, and the data at each level were reconstructed with MBIR. The data at the full-dose level D0 were additionally reconstructed with standard adaptive statistical iterative reconstruction (ASIR), which served as the full-dose baseline reference (FDBR). Two radiologists independently compared image quality (IQ) in 3-mm multiplanar reformations for soft-tissue evaluation of D0-D5 against FDBR (-2, diagnostically inferior; -1, inferior; 0, equal; +1, superior; +2, diagnostically superior). For statistical analysis, the intraclass correlation coefficient (ICC) and the Wilcoxon test were used. RESULTS: Mean CT dose index values (mGy) were as follows: D0/FDBR = 10.1 ± 1.7, D1 = 6.2 ± 2.8, D2 = 5.7 ± 2.7, D3 = 3.5 ± 1.9, D4 = 1.8 ± 1.0, and D5 = 0.9 ± 0.5. Mean IQ ratings were as follows: D0 = +1.8 ± 0.2, D1 = +1.5 ± 0.3, D2 = +1.1 ± 0.3, D3 = +0.7 ± 0.5, D4 = +0.1 ± 0.5, and D5 = -1.2 ± 0.5. All values demonstrated a significant difference from baseline (P < .05), except the mean IQ for D4 (P = .61). The ICC was 0.91. CONCLUSIONS: Compared to ASIR, MBIR allowed for a significant dose reduction of 82% without impairment of IQ. This resulted in a calculated mean effective dose below 1 mSv.
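
The statistical comparison described above (paired image-quality ratings against the full-dose baseline, tested with the Wilcoxon signed-rank test) can be outlined as below. The rating array is a placeholder, not the study's data; the dose figures in the last line are the D0 and D4 CTDIvol means quoted in the abstract, which reproduce the reported 82% reduction.

```python
# Illustrative statistics: placeholder ratings; dose values below are from the abstract.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
ratings_d4 = rng.choice([-1, 0, 1], size=25, p=[0.2, 0.5, 0.3]).astype(float)

# Wilcoxon signed-rank test: is the median rating different from the baseline (0)?
stat, p = wilcoxon(ratings_d4[ratings_d4 != 0])    # zero differences carry no information
print("Wilcoxon p-value (placeholder D4 ratings): %.3f" % p)

def dose_reduction(full_dose, reduced_dose):
    return 100.0 * (1.0 - reduced_dose / full_dose)

# D0 = 10.1 mGy and D4 = 1.8 mGy (abstract values) give the reported ~82% reduction
print("Reduction at D4: %.0f%%" % dose_reduction(10.1, 1.8))
```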

Relevance: 90.00%

Abstract:

BACKGROUND: We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. METHODS: Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees (iii and iv are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. RESULTS: Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60-80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. CONCLUSIONS: There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of the CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
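
The four modelling strategies (i)-(iv) map onto standard estimators; a minimal sketch of the external-validation scheme is given below, with hypothetical variable names rather than the MONICA survey data.

```python
# Illustrative mapping of models (i)-(iv) to scikit-learn estimators (hypothetical columns).
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier

PREDICTORS = ["waist_hip_ratio", "bmi", "sex", "age", "smoking", "hypertension"]

MODELS = {
    "linear regression (i)":    LinearRegression(),
    "logistic (ii)":            LogisticRegression(max_iter=1000),
    "regression tree (iii)":    DecisionTreeRegressor(max_depth=4),
    "classification tree (iv)": DecisionTreeClassifier(max_depth=4),
}

def external_validation(train_df, test_df, outcome="dyslipidemia"):
    """Fit each model on one region's survey and classify the subjects of the other."""
    rates = {}
    for name, model in MODELS.items():
        model.fit(train_df[PREDICTORS], train_df[outcome])
        pred = model.predict(test_df[PREDICTORS])
        # Regression-type models return a score thresholded at 0.5; the classifiers
        # already return 0/1 labels, which the same threshold leaves unchanged.
        labels = (pred >= 0.5).astype(int)
        rates[name] = (labels == test_df[outcome].to_numpy()).mean()
    return rates
```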

Relevance: 90.00%

Abstract:

The Iowa Department of Transportation (DOT) has made improving work zone (WZ) safety a high priority. Managing vehicle speeds through work zones is perceived to be an important factor in achieving this goal. A number of speed reduction techniques are currently used by transportation agencies throughout the country to control speeds and reduce speed variation at work zones. The purpose of this project is to study these and other applicable work zone speed reduction strategies. Furthermore, this research explores transportation agencies' policies regarding managing speeds in long-term, short-term, and moving work zones. This report consists of three chapters. The first chapter, a literature review, examines current speed reduction practices at work zones and reviews the relevant literature. The speed control strategies reviewed in this chapter range from posting regulatory and advisory speed limit signs to using the latest radar technologies to reduce speeds at work zones. The second chapter includes a short write-up for each identified speed control technique, covering a description, the results of any field tests, and the benefits and costs of the technology or technique. To learn more about other states' policies regarding work zone speed reduction and management, the Center for Transportation Research and Education conducted a survey consisting of six multipart questions. The third chapter summarizes the responses to each question.

Relevance: 90.00%

Abstract:

The purpose of this study was to determine the possibilities for reducing nitrogen oxide emissions at the pulp mill and power plant of Stora Enso's Varkaus mills. The study covered the largest sources of nitrogen oxide emissions in the mill area: the recovery boiler, the lime kiln, the bark boiler, the oil boiler, and the gasification plant for the plastic-aluminium fraction. The formation mechanisms of nitrogen oxide emissions and the NOx reduction techniques suitable for different combustion technologies were reviewed. The annual nitrogen oxide emissions of the Varkaus mills in 2001 were 836 tonnes. Taking national legislation, international agreements, and best available techniques (BAT) into account, the solutions best suited to each emission source were identified. Based on the study, an action programme was drawn up that defines a recommended order of implementation for the NOx reduction measures. The most important criteria for the programme were the anticipated limits of the new environmental permit due in 2004 and the cost-effectiveness of the measures. First in the order of implementation was a trial run period to optimise operation of the circulating fluidised bed boiler, and second was optimisation of lime kiln operation with the help of a continuous NOx analyser. The next proposed measures were the introduction of a vertical air system in the recovery boiler and the installation of an SNCR system on the bark boiler. The achievable NOx reduction would be 10-45%, at a cost of 30-3573 EUR per tonne of NOx removed. According to a nitrogen oxide dispersion study commissioned from the Finnish Meteorological Institute as part of this work, the effect of the Stora Enso mills' NOx emissions on air quality in Varkaus is very small; most NOx emissions are caused by traffic.

Relevance: 90.00%

Abstract:

The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate a nonlinear least squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low order approximations to the problem based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
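
The approximate Gauss-Newton iteration referred to above can be written compactly; the sketch below is a generic nonlinear least-squares solver with an explicitly supplied Jacobian and no model reduction, not the paper's reduced-order algorithm. In the state estimation setting, the residual stacks the background and observation misfits, and the Jacobian involves the (possibly reduced) linearised model.

```python
# Illustrative Gauss-Newton solver for a nonlinear least-squares problem.
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20, tol=1e-8):
    """Minimise ||r(x)||^2 given r(x) and its Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        # Gauss-Newton step: least-squares solution of J dx ~ -r
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```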

Relevance: 90.00%

Abstract:

Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics, rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three nonmutually exclusive classes consisting of best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
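
The second of the two new methods (ridge regression used as a regularisation-based projection of the summary statistics) can be sketched roughly as follows; this is a schematic illustration under simplifying assumptions, not the article's implementation, and the function interfaces are hypothetical.

```python
# Illustrative ABC rejection with ridge-projected summaries (hypothetical interfaces).
import numpy as np
from sklearn.linear_model import Ridge

def abc_ridge_projection(simulate, prior_sample, s_obs, n_pilot=5000,
                         n_abc=20000, accept_frac=0.01, alpha=1.0):
    """ABC rejection sampler using ridge-projected summary statistics.

    simulate     : theta -> vector of raw summary statistics
    prior_sample : n -> (n, n_params) array of parameter draws from the prior
    s_obs        : raw summary statistics of the observed data
    """
    # Pilot run: learn a regularised linear map from raw summaries to parameters.
    theta_pilot = prior_sample(n_pilot)
    s_pilot = np.array([simulate(t) for t in theta_pilot])
    projection = Ridge(alpha=alpha).fit(s_pilot, theta_pilot)

    # Rejection step in the projected (low-dimensional) summary space.
    theta = prior_sample(n_abc)
    s = np.array([simulate(t) for t in theta])
    d = np.linalg.norm(projection.predict(s) - projection.predict(np.atleast_2d(s_obs)),
                       axis=1)
    return theta[d <= np.quantile(d, accept_frac)]
```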

Relevance: 90.00%

Abstract:

Purpose: The purpose of this in vitro study was to compare the dimensional accuracy of a stone index and of 3 impression techniques (tapered impression copings, squared impression copings, and squared impression copings splinted with acrylic resin) associated with 3 pouring techniques (conventional, pouring using latex tubes fitted onto analogs, and pouring after joining the analogs with acrylic resin) for implant-supported prostheses. Materials and Methods: A mandibular brass cast with 4 stainless steel implant-abutment analogs, a framework, and 2 aluminum custom trays were fabricated. Polyether impression material was used for all impressions. Ten groups were formed (a control group and 9 test groups formed by combining each pouring technique and impression technique). Five casts were made per group, for a total of 50 casts and 200 gap values (1 gap value for each implant-abutment analog). Results: The mean gap value with the index technique was 27.07 μm. With the conventional pouring technique, the mean gap values were 116.97 μm for the tapered group, 57.84 μm for the squared group, and 73.17 μm for the squared splinted group. With pouring using latex tubes, the mean gap values were 65.69 μm for the tapered group, 38.03 μm for the squared group, and 82.47 μm for the squared splinted group. With pouring after joining the analogs with acrylic resin, the mean gap values were 141.12 μm for the tapered group, 74.19 μm for the squared group, and 104.67 μm for the squared splinted group. No significant difference was detected among the index technique, the squared/latex-tube technique, and the master cast (P > .05). Conclusions: The most accurate impression technique utilized squared copings. The most accurate pouring technique for impressions made with tapered or squared copings utilized latex tubes. The pouring technique did not influence the accuracy of the stone casts when splinted squared impression copings were used. Either the index technique or the use of squared copings combined with the latex-tube pouring technique is a preferred method for making implant-supported fixed restorations with dimensional accuracy.

Relevance: 90.00%

Abstract:

The Set Covering Problem (SCP) plays an important role in Operational Research since it appears as part of several real-world problems. In this work we report the use of a genetic algorithm to solve the SCP. The algorithm starts with a population chosen by a randomized greedy algorithm. A new crossover operator and a new adaptive mutation operator were incorporated into the algorithm to intensify the search. The algorithm was tested on a class of non-unicost SCP instances obtained from the OR-Library without applying reduction techniques, and it found good solutions in terms of both quality and computational time. The results reveal that the proposed algorithm is able to find high quality solutions and is faster than recently published approaches using the OR-Library.
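
The randomized greedy construction used to seed the initial population can be sketched generically, as below; the cost-per-new-row scoring and the restricted candidate list are illustrative choices, not necessarily those of the published algorithm.

```python
# Illustrative randomized greedy construction of one set-covering solution.
import random

def randomized_greedy_cover(costs, columns, n_rows, rcl_size=3, seed=None):
    """Build one feasible SCP solution.

    costs    : cost of each column
    columns  : for each column, the set of rows it covers
    rcl_size : choose randomly among the rcl_size best cost-per-new-row candidates
    """
    rng = random.Random(seed)
    uncovered = set(range(n_rows))
    solution = []
    while uncovered:
        candidates = []
        for j, rows in enumerate(columns):
            newly_covered = len(rows & uncovered)
            if newly_covered > 0 and j not in solution:
                candidates.append((costs[j] / newly_covered, j))
        candidates.sort()
        _, j = rng.choice(candidates[:rcl_size])
        solution.append(j)
        uncovered -= columns[j]
    return solution

# Toy instance: 4 rows, 4 columns
print(randomized_greedy_cover([2, 1, 3, 1], [{0, 1}, {1, 2}, {2, 3}, {0, 3}], 4, seed=0))
```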