925 results for Travel Time Prediction


Relevance:

30.00%

Publisher:

Abstract:

The novelty of this study is the prediction of seven quality parameters for beef samples using time-domain nuclear magnetic resonance (TD-NMR) relaxometry data and multivariate models. Samples from 61 Bonsmara heifers were separated into five groups based on genetics (breeding composition) and feed system (grain and grass feed). Seven sample parameters were analyzed by reference methods: three sensory parameters (flavor, juiciness and tenderness) and four physicochemical parameters (cooking loss, fat content, moisture content and instrumental tenderness by Warner-Bratzler shear force, WBSF). The raw beef samples of the same animals were analyzed by TD-NMR relaxometry using Carr-Purcell-Meiboom-Gill (CPMG) and Continuous Wave-Free Precession (CWFP) sequences. Regression models were constructed with the partial least squares (PLS) chemometric technique, relating the CPMG and CWFP data to the results of the reference analyses. The models allowed for the prediction of the aforementioned seven properties. The predictive ability of the method was evaluated using the root mean square error (RMSE) for the calibration (RMSEC) and validation (RMSEP) data sets. The reference and predicted values showed no significant differences at a 95% confidence level.
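
To make the chemometric step concrete, the following is a minimal sketch of how a multi-response PLS regression with RMSEC/RMSEP evaluation could be set up in Python with scikit-learn; the synthetic arrays, the train/validation split and the number of latent variables are illustrative assumptions, not the authors' data or settings.

```python
# Minimal sketch: PLS regression of beef quality parameters from TD-NMR decay curves.
# Synthetic data stand in for the CPMG/CWFP relaxometry signals described in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(61, 512))          # 61 samples x 512 relaxometry points (illustrative)
y = rng.normal(size=(61, 7))            # 7 quality parameters (flavor, juiciness, ..., WBSF)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5)     # number of latent variables is an assumption
pls.fit(X_cal, y_cal)

rmsec = np.sqrt(mean_squared_error(y_cal, pls.predict(X_cal)))
rmsep = np.sqrt(mean_squared_error(y_val, pls.predict(X_val)))
print(f"RMSEC = {rmsec:.3f}, RMSEP = {rmsep:.3f}")
```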

Relevance:

30.00%

Publisher:

Abstract:

Anaerobic efforts, typically in the form of repeated sprints, are required in many sports, making the anaerobic pathway a target of training. Nevertheless, to identify improvements in this energy pathway it is necessary to assess anaerobic capacity or power, which is usually complex. For this purpose, authors have proposed the use of short running performances for anaerobic ability assessment. Thus, the aim of this study was to examine the relationship between short running performances and anaerobic power, anaerobic capacity and repeated sprint ability. Methods: Thirteen military personnel performed maximal runs of 50 (P50), 100 (P100) and 300 (P300) m on a track, in addition to the running-based anaerobic sprint test (RAST; an RSA and anaerobic power test), the maximal anaerobic running test (MART; an RSA and anaerobic capacity test) and the W′ from the critical power model (an anaerobic capacity test). Results: Among the RAST variables, peak and average power (absolute and relative) and maximum velocity were significantly correlated with P50 (r = −0.68, p = 0.03 and −0.76, p = 0.01; −0.83, p < 0.01 and −0.83, p < 0.01; and −0.78, p < 0.01, respectively). The maximum intensity of the MART was negatively and significantly correlated with P100 (r = −0.59), and W′ was not statistically correlated with any of the performances. Conclusion: The MART and W′ were not correlated with short running performances, showing weak predictive ability probably because of their longer duration relative to the assessed performances. Considering the RAST outcomes, we propose that this protocol can be used during daily training as a predictor of short running performance.
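
As a simple illustration of the correlation analysis described above, the sketch below computes a Pearson correlation between hypothetical 50 m sprint times and RAST peak power; the numbers are placeholders, not the study's measurements.

```python
# Illustrative sketch: correlating short-sprint times with RAST peak power.
# All values are fabricated placeholders.
import numpy as np
from scipy.stats import pearsonr

p50 = np.array([6.8, 7.1, 6.5, 7.4, 6.9, 7.0, 6.6, 7.2, 6.7, 7.3, 6.8, 7.0, 6.9])  # 50 m times (s)
rast_peak = np.array([720, 650, 780, 600, 700, 690, 760, 640, 740, 620, 710, 680, 695])  # peak power (W)

r, p = pearsonr(p50, rast_peak)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r means faster sprints go with higher power
```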

Relevance:

30.00%

Publisher:

Abstract:

The occupational exposure limits of different risk factors for development of low back disorders (LBDs) have not yet been established. One of the main problems in setting such guidelines is the limited understanding of how different risk factors for LBDs interact in causing injury, since the nature and mechanism of these disorders are relatively unknown phenomena. Industrial ergonomists' role becomes further complicated because the potential risk factors that may contribute towards the onset of LBDs interact in a complex manner, which makes it difficult to discriminate in detail among the jobs that place workers at high or low risk of LBDs. The purpose of this paper was to develop a comparative study between predictions based on the neural network-based model proposed by Zurada, Karwowski & Marras (1997) and a linear discriminant analysis model, for making predictions about industrial jobs according to their potential risk of low back disorders due to workplace design. The results obtained through applying the discriminant analysis-based model proved that it is as effective as the neural network-based model. Moreover, the discriminant analysis-based model proved to be more advantageous regarding cost and time savings for future data gathering.
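
For readers who want to reproduce this kind of comparison, a minimal sketch follows in which a linear discriminant model and a small neural network are compared by cross-validation on synthetic job-exposure features; the features, labels and network size are assumptions, not the models of Zurada, Karwowski & Marras (1997).

```python
# Sketch: linear discriminant analysis versus a small neural network classifying
# jobs as high- or low-risk for low back disorders. Features and labels are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))            # e.g. lift rate, load moment, trunk angles (illustrative)
y = (X @ np.array([0.8, 0.5, 0.3, 0.2, 0.1]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

lda = LinearDiscriminantAnalysis()
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)

print("LDA accuracy:", cross_val_score(lda, X, y, cv=5).mean())
print("MLP accuracy:", cross_val_score(mlp, X, y, cv=5).mean())
```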

Relevance:

30.00%

Publisher:

Abstract:

Objectives. Verify the influence of different filler distributions on the subcritical crack growth (SCG) susceptibility, Weibull parameters (m and σ0) and longevity estimated by the strength-probability-time (SPT) diagram of experimental resin composites. Methods. Four composites were prepared, each one containing 59 vol% of glass powder with different filler sizes (d50 = 0.5, 0.9, 1.2 and 1.9 μm) and distributions. Granulometric analyses of the glass powders were done by a laser diffraction particle size analyzer (Sald-7001, Shimadzu, USA). SCG parameters (n and σf0) were determined by dynamic fatigue (10⁻² to 10² MPa/s) using a biaxial flexural device (12 × 1.2 mm; n = 10). Twenty extra specimens of each composite were tested at 10⁰ MPa/s to determine m and σ0. Specimens were stored in water at 37 °C for 24 h. Fracture surfaces were analyzed under SEM. Results. In general, the composites with broader filler distributions (C0.5 and C1.9) presented better results in terms of SCG susceptibility and longevity. C0.5 and C1.9 presented higher n values (respectively, 31.2 ± 6.2ᵃ and 34.7 ± 7.4ᵃ). C1.2 (166.42 ± 0.01ᵃ) showed the highest and C0.5 (158.40 ± 0.02ᵈ) the lowest σf0 value (in MPa). Weibull parameters did not vary significantly (m: 6.6 to 10.6; σ0: 170.6 to 176.4 MPa). Predicted reductions in failure stress (Pf = 5%) for a lifetime of 10 years were approximately 45% for C0.5 and C1.9 and 65% for C0.9 and C1.2. Crack propagation occurred through the polymeric matrix around the fillers and all the fracture surfaces showed brittle fracture features. Significance. Composites with broader granulometric distributions showed higher resistance to SCG and, consequently, higher longevity in vitro. © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
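
As background on the Weibull analysis mentioned above, the sketch below estimates m and σ0 from a set of strength values using the standard linearised Weibull form; the strength values and the rank estimator are illustrative, not the study's data or fitting procedure.

```python
# Sketch: estimating Weibull parameters (m, sigma_0) from biaxial strength values,
# using the linearised form ln(ln(1/(1-P))) = m*ln(sigma) - m*ln(sigma_0).
# The strength values below are placeholders, not the measured data.
import numpy as np

strengths = np.sort(np.array([158., 162., 165., 168., 170., 171., 173., 175., 178., 181.,
                              183., 185., 186., 188., 190., 192., 194., 196., 199., 203.]))
n = strengths.size
prob_fail = (np.arange(1, n + 1) - 0.5) / n            # simple rank-based failure probability

x = np.log(strengths)
y = np.log(np.log(1.0 / (1.0 - prob_fail)))
m, intercept = np.polyfit(x, y, 1)                     # slope = Weibull modulus m
sigma_0 = np.exp(-intercept / m)                       # characteristic strength (MPa)
print(f"m = {m:.1f}, sigma_0 = {sigma_0:.1f} MPa")
```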

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To integrate data from two-dimensional echocardiography (2D ECHO), three-dimensional echocardiography (3D ECHO), and tissue Doppler imaging (TDI) for the prediction of left ventricular (LV) reverse remodeling (LVRR) after cardiac resynchronization therapy (CRT). The evaluation of cardiac dyssynchrony by TDI and 3D ECHO was also compared. Methods: Twenty-four consecutive patients with heart failure, sinus rhythm, QRS ≥ 120 msec, functional class III or IV and LV ejection fraction (LVEF) ≤ 0.35 underwent CRT. 2D ECHO, 3D ECHO with systolic dyssynchrony index (SDI) analysis, and TDI were performed before and 3 and 6 months after CRT. Cardiac dyssynchrony analyses by TDI and SDI were compared with Pearson's correlation test. Before CRT, a univariate analysis of baseline characteristics was performed for the construction of a logistic regression model to identify the best predictors of LVRR. Results: After 3 months of CRT, there was a moderate correlation between TDI and SDI (r = 0.52). At other time points, there was no strong correlation. Nine of twenty-four (38%) patients presented with LVRR 6 months after CRT. After logistic regression analysis, SDI (SDI > 11%) was the only independent predictor of LVRR 6 months after CRT (sensitivity = 0.89 and specificity = 0.73). After construction of receiver operating characteristic (ROC) curves, an equation was established to predict LVRR: LVRR = −0.4·LVDD (mm) + 0.5·LVEF (%) + 1.1·SDI (%), with responders presenting values > 0 (sensitivity = 0.67 and specificity = 0.87). Conclusions: In this study, there was no strong correlation between TDI and SDI. An equation is proposed for the prediction of LVRR after CRT. Although larger trials are needed to validate these findings, this equation may be useful in candidates for CRT. (Echocardiography 2012;29:678-687)
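
A short worked example of the proposed score follows; the patient values are hypothetical and only illustrate how the equation classifies a candidate as a likely responder.

```python
# Worked example of the proposed responder score, LVRR = -0.4*LVDD + 0.5*LVEF + 1.1*SDI,
# with responders defined by a score > 0. Patient values are hypothetical.
def lvrr_score(lvdd_mm: float, lvef_pct: float, sdi_pct: float) -> float:
    return -0.4 * lvdd_mm + 0.5 * lvef_pct + 1.1 * sdi_pct

score = lvrr_score(lvdd_mm=68, lvef_pct=28, sdi_pct=14)   # -27.2 + 14.0 + 15.4 = 2.2
print(f"score = {score:.1f} -> {'likely responder' if score > 0 else 'likely non-responder'}")
```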

Relevance:

30.00%

Publisher:

Abstract:

Current scientific applications produce large amounts of data. The processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have considered techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
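
To give a flavour of the behaviour-prediction idea, the sketch below organises a hypothetical access metric as a time series, applies a crude property check, and selects one of two simple predictors; the rule and the models are placeholders for the paper's classification and modeling techniques, not its implementation.

```python
# Conceptual sketch: observe access behaviour as a time series, inspect its properties,
# and pick a simple predictor accordingly. The rule and models are illustrative only.
import numpy as np

def predict_next(series: np.ndarray) -> float:
    diffs = np.diff(series)
    if abs(diffs.mean()) > diffs.std():          # crude "trending vs stationary" test (assumption)
        return series[-1] + diffs.mean()          # trending: extrapolate the mean step
    return series[-3:].mean()                     # stationary: short moving average

access_sizes = np.array([10.0, 12.0, 13.5, 15.2, 17.0, 18.8])   # e.g. MB read per request
print("predicted next access size:", predict_next(access_sizes))
```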

Relevance:

30.00%

Publisher:

Abstract:

RAMOS RT, MATTOS DA, REBOUCAS ITS, RANVAUD RD. Space and motion perception and discomfort in air travel. Aviat Space Environ Med 2012; 83:1162-6. Introduction: The perception of comfort during air trips is determined by several factors. External factors such as cabin design and environmental parameters (temperature, humidity, air pressure, noise, and vibration) interact with individual characteristics (anxiety traits, fear of flying, and personality) from arrival at the airport to landing at the destination. In this study, we investigated the influence of space and motion discomfort (SMD), fear of heights, and anxiety on comfort perception during all phases of air travel. Methods: We evaluated 51 frequent air travelers with a modified version of the Flight Anxiety Situations Questionnaire (FAS), in which new items were added and subjects were asked to report their level of discomfort or anxiety (not fear) for each phase of air travel (Cronbach's alpha = 0.974). Correlations were investigated among these scales: the State-Trait Anxiety Inventory (STAI), Cohen's Acrophobia Questionnaire, and the Situational Characteristics Questionnaire (SitQ), designed to estimate SMD levels. Results: Scores on the SitQ correlated with discomfort in situations involving space and movement perception (Pearson's rho = 0.311), while discomfort was associated with cognitive mechanisms related to scores on the anxiety scales (Pearson's rho = 0.375). Anxiety traits were important determinants of comfort perception before and after the flight, while the influence of SMD was more significant during the time spent in the aircraft cabin. Discussion: SMD seems to be an important modulator of comfort perception in air travel. Its influence on physical well-being, and probably on cognitive performance, with possible effects on flight safety, deserves further investigation.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: Predictors of adverse outcomes following myocardial infarction (MI) are well established; however, little is known about what predicts enzymatically estimated infarct size in patients with acute ST-elevation MI. The Complement And Reduction of INfarct size after Angioplasty or Lytics trials of pexelizumab used the creatine kinase (CK)-MB area under the curve to determine infarct size in patients treated with primary percutaneous coronary intervention (PCI) or fibrinolysis. Methods: Prediction of infarct size was based on measuring the CK-MB area under the curve in patients with ST-segment elevation MI treated with reperfusion therapy from January 2000 to April 2002. Infarct size was calculated in 1622 patients (PCI = 817; fibrinolysis = 805). Logistic regression was used to examine the relationship between baseline demographics, total ST-segment elevation, index angiographic findings (PCI group), and the binary outcome of a CK-MB area under the curve greater than 3000 ng/ml. Results: Large infarcts occurred in 63% (515) of the PCI group and 69% (554) of the fibrinolysis group. Independent predictors of large infarcts differed depending on the mode of reperfusion. In the PCI group, male sex, no prior coronary revascularization and diabetes, decreased systolic blood pressure, sum of ST-segment elevation, total (angiographic) occlusion, and a non-right coronary culprit artery were independent predictors of larger infarcts (C index = 0.73). In the fibrinolysis group, younger age, decreased heart rate, white race, no history of arrhythmia, increased time to fibrinolytic therapy in patients treated up to 2 h after symptom onset, and sum of ST-segment elevation were independently associated with larger infarct size (C index = 0.68). Conclusion: Clinical and patient data can be used to predict larger infarcts on the basis of CK-MB quantification. These models may be helpful in designing future trials and in guiding the use of novel pharmacotherapies aimed at limiting infarct size in clinical practice. Coron Artery Dis 23:118-125 © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
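
As an illustration of the modelling step, the sketch below fits a logistic regression for a binary "large infarct" outcome and reports the C index (ROC AUC); the predictors, coefficients and data are synthetic placeholders, not the trial data.

```python
# Sketch: logistic regression for the binary outcome "CK-MB AUC > 3000 ng/ml",
# with discrimination summarised by the C index (ROC AUC). Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.integers(0, 2, 300),        # male sex (0/1)
    rng.normal(120, 20, 300),       # systolic blood pressure (mmHg)
    rng.normal(8, 3, 300),          # sum of ST-segment elevation (mm)
])
logit = 0.5 * X[:, 0] - 0.02 * (X[:, 1] - 120) + 0.15 * X[:, 2] - 1.0
y = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated outcome

model = LogisticRegression(max_iter=1000).fit(X, y)
c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"C index = {c_index:.2f}")
```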

Relevance:

30.00%

Publisher:

Abstract:

Background: Cervical cancer is treated mainly by surgery and radiotherapy. Toxicity due to radiation is a limiting factor for treatment success. Determination of lymphocyte radiosensitivity by radio-induced apoptosis arises as a possible method for predictive test development. The aim of this study was to analyze radio-induced apoptosis of peripheral blood lymphocytes. Methods: Ninety-four consecutive patients suffering from cervical carcinoma, diagnosed and treated in our institution, and four healthy controls were included in the study. Toxicity was evaluated using the Lent-Soma scale. Peripheral blood lymphocytes were isolated, irradiated at 0, 1, 2 and 8 Gy, and incubated for 24, 48 and 72 hours. Apoptosis was measured by flow cytometry using annexin V/propidium iodide to determine early and late apoptosis. Lymphocytes were marked with a CD45 APC-conjugated monoclonal antibody. Results: Radiation-induced apoptosis (RIA) increased with radiation dose and time of incubation. The data fitted strongly to a semi-logarithmic model of the form RIA = β·ln(Gy) + α. This mathematical model is defined by two constants: α, the intercept of the curve on the Y axis, determines the percentage of spontaneous cell death, and β, the slope of the curve, determines the percentage of cell death induced at a given radiation dose (β = ΔRIA/Δln(Gy)). Higher β values (increased rate of RIA at given radiation doses) were observed in patients with low sexual toxicity (Exp(B) = 0.83, 95% C.I. (0.73-0.95), p = 0.007; Exp(B) = 0.88, 95% C.I. (0.82-0.94), p = 0.001; Exp(B) = 0.93, 95% C.I. (0.88-0.99), p = 0.026 for 24, 48 and 72 hours, respectively). This relation was also found with rectal (Exp(B) = 0.89, 95% C.I. (0.81-0.98), p = 0.026; Exp(B) = 0.95, 95% C.I. (0.91-0.98), p = 0.013 for 48 and 72 hours, respectively) and urinary (Exp(B) = 0.83, 95% C.I. (0.71-0.97), p = 0.021 for 24 hours) toxicity. Conclusion: Radiation-induced apoptosis at different time points and radiation doses fitted a semi-logarithmic model defined by a mathematical equation that gives an individual value of radiosensitivity and could predict late toxicity due to radiotherapy. Further prospective studies with a higher number of patients are needed to validate these results.
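
The following sketch shows how the semi-logarithmic model can be fitted by least squares to apoptosis measurements at the non-zero doses; the apoptosis percentages are illustrative, not patient data.

```python
# Sketch: fitting the semi-logarithmic model RIA = beta*ln(dose) + alpha to apoptosis
# measurements at the non-zero doses (ln is undefined at 0 Gy). Values are illustrative.
import numpy as np

doses = np.array([1.0, 2.0, 8.0])                 # Gy
ria = np.array([12.0, 16.5, 24.0])                # % apoptotic lymphocytes (placeholder)

beta, alpha = np.polyfit(np.log(doses), ria, 1)
print(f"RIA = {beta:.2f}*ln(Gy) + {alpha:.2f}")
# beta summarises the sample's radiosensitivity: the increase in radiation-induced
# apoptosis per unit increase in ln(dose).
```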

Relevance:

30.00%

Publisher:

Abstract:

The first part of my thesis presents an overview of the different approaches used in the past two decades in the attempt to forecast epileptic seizures on the basis of intracranial and scalp EEG. Past research has revealed some value of linear and nonlinear algorithms in detecting EEG features that change over different phases of the epileptic cycle. However, their exact value for seizure prediction, in terms of sensitivity and specificity, is still debated and has to be evaluated. In particular, the monitored EEG features may fluctuate with the vigilance state and lead to false alarms. Recently, such a dependency on vigilance states has been reported for some seizure prediction methods, suggesting reduced reliability. An additional factor limiting the application and validation of most seizure-prediction techniques is their computational load. For the first time, the reliability of permutation entropy (PE) was verified in seizure prediction on scalp EEG data, while controlling for its dependency on different vigilance states. PE was recently introduced as an extremely fast and robust complexity measure for chaotic time series and is thus suitable for online application even in portable systems. The capability of PE to distinguish between preictal and interictal states was assessed using Receiver Operating Characteristic (ROC) analysis. Correlation analysis was used to assess the dependency of PE on vigilance states. Scalp EEG data from two right temporal lobe epilepsy (RTLE) patients and from one patient with right frontal lobe epilepsy were analysed. The last patient was included only in the correlation analysis, since no datasets including seizures were available for him. The ROC analysis showed good separability of interictal and preictal phases for both RTLE patients, suggesting that PE could be sensitive to EEG modifications, not visible on visual inspection, that might occur well in advance of the EEG and clinical onset of seizures. However, the simultaneous assessment of the changes in vigilance showed that: a) all seizures occurred in association with a transition of vigilance states; b) PE was sensitive in detecting different vigilance states, independently of seizure occurrences. Due to the limitations of the datasets, these results cannot rule out the capability of PE to detect preictal states. However, the good separability between pre- and interictal phases might depend exclusively on the coincidence of epileptic seizure onset with a transition from a state of low vigilance to a state of increased vigilance. The dependency of PE on the vigilance state is an original finding, not previously reported in the literature, and suggests the possibility of classifying vigilance states by means of PE in an automatic and objective way. The second part of my thesis describes a novel behavioral task based on motor imagery skills, first introduced by Bruzzo et al. (2007), used to study the mental simulation of biological and non-biological movement in paranoid schizophrenics (PS). Immediately after the presentation of a real movement, participants had to imagine or re-enact the very same movement. By key release and key press, respectively, participants indicated when they started and ended the mental simulation or the re-enactment, making it feasible to measure the duration of the simulated or re-enacted movements.
The proportional error between the duration of the re-enacted/simulated movement and the template movement was compared between conditions, as well as between PS and healthy subjects. Results revealed a double dissociation between the mechanisms of mental simulation involved in biological and non-biological movement simulation: PS showed large errors when simulating biological movements, while being more accurate than healthy subjects when simulating non-biological movements. Healthy subjects showed the opposite relationship, making errors when simulating non-biological movements but being most accurate when simulating biological movements. However, the good timing precision during re-enactment of the movements in all conditions and in both groups of participants suggests that perception, memory and attention, as well as motor control processes, were not affected. Based upon a long history of literature reporting the existence of psychotic episodes in epileptic patients, a longitudinal study using a slightly modified behavioral paradigm was carried out with two RTLE patients, one patient with idiopathic generalized epilepsy and one patient with extratemporal lobe epilepsy. The results provide strong evidence that upcoming seizures in RTLE patients can be predicted behaviorally. The last part of the thesis validates a behavioural strategy, based on neurobiofeedback training, to voluntarily control seizures and reduce their frequency. Three epileptic patients were included in this study. The biofeedback was based on the monitoring of slow cortical potentials (SCPs) extracted online from scalp EEG. Patients were trained to produce positive shifts of SCPs. After a training phase, patients were monitored for 6 months in order to validate the ability of the learned strategy to reduce seizure frequency. Two of the three refractory epileptic patients recruited for this study showed improvements in self-management and a reduction of ictal episodes, even six months after the last training session.
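
Since permutation entropy is central to the first part of the thesis, a minimal reference implementation is sketched below; the embedding order, delay and test signal are illustrative choices, not the parameters used in the analysis.

```python
# Minimal permutation-entropy (PE) implementation for a 1-D signal.
# Order and delay are illustrative; the test signal is synthetic.
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x: np.ndarray, order: int = 3, delay: int = 1) -> float:
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1       # count each ordinal pattern
    probs = np.array([c for c in patterns.values() if c > 0], dtype=float) / n
    return float(-(probs * np.log(probs)).sum() / np.log(factorial(order)))  # normalised to [0, 1]

signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.default_rng(3).normal(size=2000)
print("PE =", round(permutation_entropy(signal, order=3, delay=1), 3))
```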

Relevance:

30.00%

Publisher:

Abstract:

The hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, due to the severe consequences that flooding, and water in general, may provoke in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. In this thesis, this type of uncertainty is discussed and analyzed. In operational problems, the ultimate aim of forecasting systems is not to reproduce the river behavior; this is only a means of reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since the literature is often confused on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on an evaluation of the model prediction in terms of its ability to represent reality, or on an evaluation of what will actually happen on the basis of the information provided by the model forecast? Once this idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
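
As a toy illustration of one ingredient discussed above, the sketch below estimates the probability of exceeding a flooding threshold within a given lead time from an ensemble of forecasts; the ensemble, threshold and horizon are invented placeholders, not the tool developed in the thesis.

```python
# Toy sketch: probability of exceeding a flooding stage within a lead time,
# estimated from an ensemble of forecast hydrographs. All values are placeholders.
import numpy as np

threshold = 4.5                      # flooding stage (m), assumption
lead_hours = 12                      # time available to implement the intervention
# ensemble of predicted stage hydrographs: members x hourly time steps
ensemble = np.random.default_rng(4).normal(loc=3.8, scale=0.6, size=(50, 24))

peak_within_horizon = ensemble[:, :lead_hours].max(axis=1)
flood_prob = (peak_within_horizon > threshold).mean()
print(f"P(flooding within {lead_hours} h) ~ {flood_prob:.2f}")
```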

Relevance:

30.00%

Publisher:

Abstract:

Background and aims: Sorafenib is the reference therapy for advanced hepatocellular carcinoma (HCC). No method exists to predict, in the very early period of treatment, the subsequent individual response. Starting from the clinical experience in humans that subcutaneous metastases may rapidly change consistency under sorafenib, and from the fact that elastosonography, a new ultrasound-based technique, allows the assessment of tissue stiffness, we investigated the role of elastosonography in the very early prediction of tumor response to sorafenib in an HCC animal model. Methods: HCC (Huh7 cells) subcutaneous xenografting in mice was utilized. Mice were randomized to vehicle or treatment with sorafenib when the tumor size was 5-10 mm. Elastosonography (Mylab 70XVG, Esaote, Genova, Italy) of the whole tumor mass on a sagittal plane with a 10 MHz linear transducer was performed at different time points from treatment start (day 0, +2, +4, +7 and +14) until the mice were sacrificed (day +14), with the operator blind to treatment. In order to overcome the variability of absolute elasticity measurements when assessing changes over time, values were expressed in arbitrary units as the relative stiffness of the tumor tissue in comparison to the stiffness of a standard reference stand-off pad lying on the skin over the tumor. Results: Sorafenib-treated mice showed a smaller tumor size increase at day +14 in comparison to vehicle-treated mice (tumor volume increase +192.76% vs +747.56%, p=0.06). Among sorafenib-treated tumors, 6 mice showed a better response to treatment than the other 4 (increase in volume +177% vs +553%, p=0.011). At day +2, median tumor elasticity increased in the sorafenib-treated group (+6.69%, range −30.17% to +58.51%), while it decreased in the vehicle group (−3.19%, range −53.32% to +37.94%), leading to a significant difference in absolute values (p=0.034). From this time point onward, elasticity decreased in both groups at a similar rate over time, and was no longer statistically different. Among sorafenib-treated mice, all 6 best responders at day +14 showed an increase in elasticity at day +2 (ranging from +3.30% to +58.51%) in comparison to baseline, whereas 3 of the 4 poorer responders showed a decrease. Interestingly, these 3 tumours showed elasticity values higher than those of responder tumours at day 0. Conclusions: Elastosonography appears to be a promising non-invasive technique for the early prediction of HCC tumor response to sorafenib. Indeed, we show that responder tumours are characterized by an early increase in elasticity. The possibility of distinguishing a priori between responders and non-responders, based on the higher elasticity of the latter, needs to be validated in ad-hoc experiments, and confirmation of our results in humans is warranted.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation of the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first one is a threshold-based method which uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.

Relevance:

30.00%

Publisher:

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving that these systems are appropriate for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration is the proper approach, capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this project is to experimentally demonstrate geometrically nonlinear phenomena due to large displacements during resonant vibration of composite materials, and to explain the problem associated with fatigue prediction under resonant conditions. Three different composite blades to be tested were designed and manufactured, differing in their composite layup (unidirectional, cross-ply, and angle-ply layups). The manual envelope bagging technique is explained as applied to the actual manufacturing of the components; problems encountered and their solutions are detailed. Forced response tests of the first flexural, first torsional, and second flexural modes were performed by means of a unique, contactless excitation system which induced vibration using a pulsed airflow. Vibration intensity was acquired by means of a Polytec LDV system. The first flexural mode is found to be completely linear irrespective of the vibration amplitude. The first torsional mode exhibits a general nonlinear softening behaviour which is, interestingly, coupled with a hardening behaviour for the unidirectional layup. The second flexural mode shows a hardening nonlinear behaviour for both the unidirectional and angle-ply blades, whereas it is slightly softening for the cross-ply layup. Using the same equipment as that used for the forced response analyses, free decay tests were performed at different airflow intensities. The Discrete Fourier Transform over the entire decay and a sliding DFT were computed so as to visualise the presence of nonlinear superharmonics in the decay signal and to determine when they were damped out of the vibration over the decay time. Linear modes exhibit an exponential decay, while nonlinearities are associated with a dry-friction damping phenomenon which tends to increase with increasing amplitude. The damping ratio is derived from the logarithmic decrement for the exponential branch of the decay.
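
To make the last step explicit, the sketch below estimates a damping ratio from the logarithmic decrement of successive peaks of a synthetic single-mode free decay; the signal, sampling rate and peak-picking are illustrative assumptions, not the experimental processing chain.

```python
# Sketch: damping ratio from the logarithmic decrement of successive peaks in the
# exponential branch of a free decay. The decay signal is synthetic and single-mode.
import numpy as np

fs, f0, zeta_true = 5000.0, 120.0, 0.01                      # illustrative values
t = np.arange(0, 1.0, 1.0 / fs)
x = np.exp(-zeta_true * 2 * np.pi * f0 * t) * np.cos(2 * np.pi * f0 * np.sqrt(1 - zeta_true**2) * t)

# successive positive peaks (one per cycle for a lightly damped, single-mode decay)
peaks = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
delta = np.mean(np.log(x[peaks[:-1]] / x[peaks[1:]]))         # logarithmic decrement
zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)               # damping ratio
print(f"estimated damping ratio = {zeta:.4f}")
```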