942 results for eddy covariance
Abstract:
Due to limited budgets and reduced inspection staff, state departments of transportation (DOTs) are in need of innovative approaches for providing more efficient quality assurance on concrete paving projects. The goal of this research was to investigate and test new methods that can determine pavement thickness in real time. Three methods were evaluated: laser scanning, ultrasonic sensors, and eddy current sensors. Laser scanning, which scans the surface of the base prior to paving and then scans the surface after paving, can determine the thickness at any point. Also, scanning lasers provide thorough data coverage that can be used to calculate thickness variance accurately and identify any areas where the thickness is below tolerance. Ultrasonic and eddy current sensors also have the potential to measure thickness nondestructively at discrete points and may result in an easier method of obtaining thickness. There appear to be two viable approaches for measuring concrete pavement thickness during the paving operation: laser scanning and eddy current sensors. Laser scanning has proved to be a reliable technique in terms of its ability to provide virtual core thickness with low variability. Research is still required to develop a prototype system that integrates point cloud data from two scanners. Eddy current sensors have also proved to be a suitable alternative, and are probably closer to field implementation than the laser scanning approach. As a next step for this research project, it is suggested that a pavement thickness measuring device using eddy current sensors be created, which would involve both a handheld and paver-mounted version of the device.
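The virtual-core idea the abstract describes, subtracting the pre-paving base scan from the post-paving surface scan at matching points, can be sketched as below. The grids, design thickness, and tolerance values are invented for illustration, not taken from the study.

```python
# Sketch of the "virtual core" thickness idea: subtract the pre-paving base
# scan from the post-paving surface scan at matching grid points.
# All elevations, the design thickness, and the tolerance are illustrative.

def virtual_core_thickness(base, surface):
    """Per-point thickness (same units as the scans) from two elevation grids."""
    return [[s - b for b, s in zip(brow, srow)]
            for brow, srow in zip(base, surface)]

def below_tolerance(thickness, design, tol):
    """Grid coordinates where thickness falls short of design minus tolerance."""
    return [(i, j)
            for i, row in enumerate(thickness)
            for j, t in enumerate(row)
            if t < design - tol]

base = [[100.0, 100.1], [100.0, 100.2]]        # base scan elevations
surface = [[100.25, 100.32], [100.18, 100.45]]  # post-paving scan elevations
thk = virtual_core_thickness(base, surface)
thin = below_tolerance(thk, design=0.25, tol=0.02)
```

Dense scan coverage means this difference is available at every grid point, which is what allows the thickness variance and under-tolerance areas to be mapped rather than spot-checked with cores.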
Abstract:
Background: Bone health is a concern when treating early-stage breast cancer patients with adjuvant aromatase inhibitors. Early detection of patients (pts) at risk of osteoporosis and fractures may be helpful for starting preventive therapies and selecting the most appropriate endocrine therapy schedule. We present statistical models describing the evolution of lumbar and hip bone mineral density (BMD) in pts treated with tamoxifen (T), letrozole (L), and sequences of T and L. Methods: Available dual-energy x-ray absorptiometry (DXA) exams of pts treated in trial BIG 1-98 were retrospectively collected from Swiss centers. Treatment arms were: A) T for 5 years; B) L for 5 years; C) 2 years of T followed by 3 years of L; and D) 2 years of L followed by 3 years of T. Pts without DXA were used as a control for detecting selection biases. Patients randomized to arm A were subsequently allowed an unplanned switch from T to L. Allowing for variations between DXA machines and centres, two repeated-measures models, using a covariance structure that allows for different times between DXA exams, were used to estimate changes in hip and lumbar BMD (g/cm2) from trial randomization. Prospectively defined covariates at the time of trial randomization, considered as fixed effects in the multivariable models in an intention-to-treat analysis, were: age, height, weight, hysterectomy, race, known osteoporosis, tobacco use, prior bone fracture, prior hormone replacement therapy (HRT), bisphosphonate use, and previous neo-/adjuvant chemotherapy (ChT). Similarly, the T-scores for lumbar and hip BMD measurements were modeled using a per-protocol approach (allowing for the treatment switch in arm A), specifically studying the effect of each therapy on the T-score percentage. Results: A total of 247 out of 546 pts had between 1 and 5 DXA exams; a total of 576 DXA exams were collected. The numbers of DXA measurements per arm were: arm A, 133; B, 137; C, 141; and D, 135. The median follow-up time was 5.8 years.
Significant factors positively correlated with lumbar and hip BMD in the multivariate analysis were weight, previous HRT use, neo-/adjuvant ChT, hysterectomy, and height. Significant negatively correlated factors in the models were osteoporosis, treatment arm (B/C/D vs. A), time since endocrine therapy start, age, and smoking (current vs. never). Modeling the T-score percentage, the differences from T to L were -4.199% (p = 0.036) and -4.907% (p = 0.025) for the hip and lumbar measurements, respectively, before any treatment switch occurred. Conclusions: Our statistical models describe the lumbar and hip BMD evolution for pts treated with L and/or T. The results for both localisations confirm that, contrary to expectation, the sequential schedules do not seem less detrimental to BMD than L monotherapy. The estimated difference in BMD T-score percentage is at least 4% from T to L.
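The core of the method above is a repeated-measures model whose within-patient covariance depends on the (unequal) time gaps between DXA exams. A minimal NumPy sketch of that idea, fitting a generalized least squares (GLS) line through one patient's BMD series, is below; the data, the exponential correlation form, and the range parameter are all invented assumptions, not the trial's specification.

```python
import numpy as np

# Hedged sketch of a repeated-measures GLS fit in which the within-patient
# covariance depends on unequal time gaps between DXA exams.
# Data, the exponential correlation form, and all parameters are invented.

rng = np.random.default_rng(0)
times = np.array([0.0, 1.5, 3.0, 5.0])             # years since randomization
X = np.column_stack([np.ones_like(times), times])  # intercept + time slope
beta_true = np.array([1.05, -0.02])                # BMD g/cm2: start, yearly change

# Exponential correlation in the time gap, scaled by a residual variance.
gaps = np.abs(times[:, None] - times[None, :])
V = 0.0004 * np.exp(-gaps / 2.0)

y = X @ beta_true + rng.multivariate_normal(np.zeros(4), V)

# GLS estimator: beta = (X' V^-1 X)^-1 X' V^-1 y
Vi = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
```

The gap-dependent covariance V is what lets exam schedules differ across patients without biasing the estimated BMD trajectory.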
Abstract:
The objective of this study was to evaluate the efficiency of spatial statistical analysis in the selection of genotypes in a plant breeding program and, particularly, to demonstrate the benefits of the approach when experimental observations are not spatially independent. The basic material of this study was a yield trial of soybean lines, with five check varieties (fixed effects) and 110 test lines (random effects), in an augmented block design. The spatial analysis used a random field linear model (RFML), with a covariance function estimated from the residuals of the analysis considering independent errors. Results showed a residual autocorrelation of significant magnitude and extension (range), which allowed a better discrimination among genotypes (increased power of the statistical tests, reduced standard errors of estimates and predictors, and a greater amplitude of predictor values) when the spatial analysis was applied. Furthermore, the spatial analysis led to a different ranking of the genetic materials in comparison with the non-spatial analysis, and a selection less influenced by local variation effects was obtained.
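The first step described above, estimating a spatial covariance function from the residuals of an independent-errors analysis, can be sketched with an empirical semivariogram. The field layout, the simulated residuals, and the lag bins below are illustrative assumptions, not the trial's data.

```python
import numpy as np

# Sketch of estimating spatial structure from residuals of an analysis that
# assumed independent errors: an empirical semivariogram over lag bins.
# The 10x10 plot grid and the residuals are simulated for illustration.

rng = np.random.default_rng(1)
xy = np.array([(i, j) for i in range(10) for j in range(10)], float)
# residuals with a smooth spatial trend to mimic autocorrelation, plus noise
resid = np.sin(xy[:, 0] / 3.0) + 0.1 * rng.standard_normal(100)

def semivariogram(xy, r, lags):
    """gamma(h) = mean of 0.5*(r_i - r_j)^2 over pairs at distance ~h."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    gam = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gam.append(0.5 * np.mean((r[:, None] - r[None, :])[mask] ** 2))
    return np.array(gam)

lags = np.array([0.0, 1.5, 3.0, 4.5, 6.0])
gamma = semivariogram(xy, resid, lags)
# semivariance rising with lag indicates residual spatial autocorrelation
```

A fitted covariance model from such a semivariogram is what then feeds the RFML/GLS analysis and sharpens genotype comparisons.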
Abstract:
The development of shear instabilities of a wave-driven alongshore current is investigated. In particular, we use weakly nonlinear theory to investigate the possibility that such instabilities, which have been observed at various sites on the U.S. coast and in the laboratory, can grow in linearly stable flows as a subcritical bifurcation by resonant triad interaction, as first suggested by Shrira et al. [1997]. We examine a realistic longshore current profile and include the effects of eddy viscosity and bottom friction. We show that, according to the weakly nonlinear theory, resonance is possible and that these linearly stable flows may exhibit explosive instabilities. We show that this phenomenon may also occur when there is only approximate resonance, which is more likely in nature. Furthermore, the size of the perturbation that is required to trigger the instability is shown in some circumstances to be consistent with the size of naturally occurring perturbations. Finally, we consider the differences between the present case and the more idealized case of Shrira et al. [1997]. It is shown that there is a possibility of coupling between triads, due to the richer modal structure in more realistic flows, which may act to stabilize the flow and work against the development of subcritical bifurcations. Extensive numerical tests are called for.
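The explosive-instability mechanism can be stated schematically with the textbook amplitude equations for a resonant triad; this is the generic form, not the specific coefficients derived in the paper. For three modes with complex amplitudes A1, A2, A3 satisfying (approximate) wavenumber and frequency resonance,

```latex
\frac{dA_1}{dt} = \sigma_1 A_1 + s_1\, A_2^{*} A_3^{*}, \qquad
\frac{dA_2}{dt} = \sigma_2 A_2 + s_2\, A_3^{*} A_1^{*}, \qquad
\frac{dA_3}{dt} = \sigma_3 A_3 + s_3\, A_1^{*} A_2^{*}.
```

When the linear growth rates sigma_i are negative (a linearly stable flow, e.g. through eddy viscosity and bottom friction) but the interaction coefficients s_i allow energy exchange of one sign, initial amplitudes above a finite threshold can grow without bound in finite time; this is the subcritical, explosive behavior investigated here.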
Abstract:
The purpose of this study was to assess the cross-cultural validity of the Marlowe-Crowne Social Desirability Scale short form C in a large sample of French-speaking participants from eight African countries and Switzerland. Exploratory and confirmatory analyses suggested retaining a two-factor structure. Item bias detection according to country was conducted for all 13 items, and effect sizes were calculated with R². For the two-factor solution, 9 items were associated with a negligible effect size, 3 items with a moderate one, and 1 item with a large one. A series of analyses of covariance treating acquiescence as a covariate showed that the acquiescence tendency does not contribute to bias at the item level. This research indicates that the psychometric properties of this instrument do not reach scalar equivalence, but that a culturally reliable measurement of social desirability could be developed.
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering, and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags, where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to have much greater control over the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods.
Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
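The two ingredients described above, perturbations drawn from a distribution conditioned on the geophysical estimate and an objective restricted to a covariance constraint at small lags, can be sketched in a one-dimensional toy annealing loop. The field, the target lag-1 covariance, the proposal spread, and the cooling schedule are all invented for illustration.

```python
import math, random

# Toy sketch of the annealing idea: perturb one cell at a time by drawing
# from a distribution conditioned on a "geophysical" estimate, and accept
# or reject based only on a covariance misfit at a small lag.
# The field, target value, and schedule are invented.

random.seed(0)
n = 60
geo = [math.sin(i / 5.0) for i in range(n)]      # stand-in geophysical estimate
model = [g + random.gauss(0, 0.5) for g in geo]  # initial realization

def lag1_cov(v):
    m = sum(v) / len(v)
    return sum((v[i] - m) * (v[i + 1] - m) for i in range(len(v) - 1)) / (len(v) - 1)

target = 0.35                                    # illustrative small-lag target

def objective(v):
    return (lag1_cov(v) - target) ** 2

T = 0.005
cur = objective(model)
for step in range(4000):
    i = random.randrange(n)
    old = model[i]
    model[i] = geo[i] + random.gauss(0, 0.3)     # draw conditioned on geo data
    new = objective(model)
    if new <= cur or random.random() < math.exp((cur - new) / T):
        cur = new                                # accept
    else:
        model[i] = old                           # reject, restore old value
    T *= 0.999                                   # cooling schedule
final_misfit = objective(model)
```

Because the geophysical information enters only through the proposal distribution, the larger-scale structure of `geo` survives into the realization while the small-lag covariance is steered toward the geostatistical target.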
Abstract:
BACKGROUND: The risk of osteoporosis and fracture influences the selection of adjuvant endocrine therapy. We analyzed bone mineral density (BMD) in Swiss patients of the Breast International Group (BIG) 1-98 trial [treatment arms: A, tamoxifen (T) for 5 years; B, letrozole (L) for 5 years; C, 2 years of T followed by 3 years of L; D, 2 years of L followed by 3 years of T]. PATIENTS AND METHODS: Dual-energy X-ray absorptiometry (DXA) results were retrospectively collected. Patients without DXA served as control group. Repeated measures models using covariance structures allowing for different times between DXA were used to estimate changes in BMD. Prospectively defined covariates were considered as fixed effects in the multivariable models. RESULTS: Two hundred and sixty-one of 546 patients had one or more DXA with 577 lumbar and 550 hip measurements. Weight, height, prior hormone replacement therapy, and hysterectomy were positively correlated with BMD; the correlation was negative for letrozole arms (B/C/D versus A), known osteoporosis, time on trial, age, chemotherapy, and smoking. Treatment did not influence the occurrence of osteoporosis (T score < -2.5 standard deviation). CONCLUSIONS: All aromatase inhibitor regimens reduced BMD. The sequential schedules were as detrimental for bone density as L monotherapy.
Abstract:
The purpose of this study was to measure postabsorptive fat oxidation at rest and to assess the association between fat mass and fat oxidation rate in prepubertal children, who were assigned to two groups: 35 obese children (weight, 44.5 +/- 9.7 kg; fat mass, 31.7 +/- 5.4%) and 37 nonobese children (weight, 30.8 +/- 6.8 kg; fat mass, 17.5 +/- 6.7%). Postabsorptive fat oxidation expressed in absolute value was significantly higher in obese than in nonobese children (31.4 +/- 9.7 mg/min vs 21.9 +/- 10.2 mg/min; p < 0.001) but not when adjusted for fat-free mass by analysis of covariance with fat-free mass as the covariate (28.2 +/- 10.6 mg/min vs 24.9 +/- 10.5 mg/min). In obese children and in the total group, fat mass and fat oxidation were significantly correlated (r = 0.65; p < 0.001). The slope of the relationship indicated that for each 10 kg of additional fat mass, resting fat oxidation increased by 18 g/day. We conclude that obese prepubertal children have a higher postabsorptive rate of fat oxidation than nonobese children. This metabolic process may favor the achievement of a new equilibrium in fat balance, opposing further adipose tissue gain.
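The ANCOVA adjustment above (a group difference that disappears once fat-free mass is held fixed) can be illustrated by fitting a linear model with a group indicator and the covariate. All numbers below are simulated assumptions, not the study's data.

```python
import numpy as np

# Minimal ANCOVA sketch: compare fat oxidation between groups with
# fat-free mass (FFM) as covariate by fitting  oxidation ~ group + FFM
# and reading the adjusted group effect.  All values are invented.

rng = np.random.default_rng(2)
n = 40
group = np.repeat([0, 1], n // 2)                # 0 = nonobese, 1 = obese
ffm = 25 + 8 * group + rng.normal(0, 2, n)       # obese children carry more FFM
# simulated truth: oxidation depends on FFM, not on group once FFM is fixed
oxid = 1.2 * ffm + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), group, ffm])
beta, *_ = np.linalg.lstsq(X, oxid, rcond=None)
adjusted_group_effect = beta[1]   # near zero after FFM adjustment
raw_difference = oxid[group == 1].mean() - oxid[group == 0].mean()
```

The raw group difference is large, but the coefficient on `group` shrinks toward zero once FFM enters the model, mirroring the adjusted comparison reported in the abstract.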
Abstract:
This work consists of three essays investigating the ability of structural macroeconomic models to price zero-coupon U.S. government bonds. 1. A small-scale 3-factor DSGE model implying a constant term premium is able to provide a reasonable fit for the term structure only at the expense of the persistence parameters of the structural shocks. The test of the structural model against one that has constant but unrestricted prices-of-risk parameters shows that the exogenous prices-of-risk model is only weakly preferred. We provide an MLE-based variance-covariance matrix for the Metropolis proposal density that improves convergence speeds in MCMC chains. 2. A prices-of-risk specification that is affine in observable macro variables is excessively flexible and provides term-structure fit without significantly altering the structural parameters. The exogenous component of the SDF separates the macro part of the model from the term structure, and the good term-structure fit is driven by an extremely volatile SDF and an implied average short rate that is inexplicable. We conclude that the no-arbitrage restrictions do not suffice to temper the SDF, so more restrictions are needed. We introduce a penalty-function methodology that proves useful in showing that affine prices-of-risk specifications are able to reconcile stable macro-dynamics with a good term-structure fit and a plausible SDF. 3. The level factor is reproduced most importantly by the preference shock, to which it is strongly and positively related, but technology and monetary shocks, with negative loadings, also contribute to its replication. The slope factor is related only to the monetary policy shocks, and it is poorly explained. We find that there are gains in in- and out-of-sample forecasts of consumption and inflation if term-structure information is used in a time-varying hybrid prices-of-risk setting.
In-sample yield forecasts are better in models with non-stationary shocks for the period 1982-1988. After this period, time-varying market price of risk models provide better in-sample forecasts. For the period 2005-2008, out-of-sample forecasts of consumption and inflation are better if term-structure information is incorporated in the DSGE model, but yields are better forecasted by a pure macro DSGE model.
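The first essay's MCMC device, shaping the Metropolis proposal density with an estimated variance-covariance matrix, can be illustrated on a toy correlated posterior. The target, scales, and iteration count are invented; this is a generic random-walk Metropolis sketch, not the essay's DSGE posterior.

```python
import numpy as np

# Hedged sketch of shaping the Metropolis proposal with a curvature-based
# (e.g. MLE) variance-covariance matrix, which can improve MCMC mixing.
# The toy 2-D Gaussian "posterior" and all scales are illustrative.

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])       # strongly correlated target
Sigma_inv = np.linalg.inv(Sigma)

def log_post(th):
    return -0.5 * th @ Sigma_inv @ th

def metropolis(prop_cov, n_iter=5000):
    """Random-walk Metropolis; returns the acceptance rate."""
    L = np.linalg.cholesky(prop_cov)
    th, lp, accepted = np.zeros(2), 0.0, 0
    for _ in range(n_iter):
        cand = th + L @ rng.standard_normal(2)
        lp_c = log_post(cand)
        if np.log(rng.random()) < lp_c - lp:
            th, lp, accepted = cand, lp_c, accepted + 1
    return accepted / n_iter

# naive spherical proposal vs. a proposal shaped like the target covariance
acc_naive = metropolis(0.5 * np.eye(2))
acc_shaped = metropolis(0.5 * Sigma)
```

A proposal aligned with the posterior's correlation structure wastes fewer draws in low-density directions, which is the mechanism behind the reported convergence-speed gains.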
Abstract:
BACKGROUND: Previous cross-sectional studies report that cognitive impairment is associated with poor psychosocial functioning in euthymic bipolar patients. There is a lack of long-term studies to determine the course of cognitive impairment and its impact on functional outcome. METHOD: A total of 54 subjects were assessed at baseline and 6 years later; 28 had DSM-IV-TR bipolar I or II disorder (recruited, at baseline, from a Lithium Clinic Program) and 26 were healthy matched controls. They were all assessed with a cognitive battery tapping into the main cognitive domains (executive function, attention, processing speed, verbal memory, and visual memory) twice over a 6-year follow-up period. All patients were euthymic (Hamilton Rating Scale for Depression score lower than 8 and Young Mania Rating Scale score lower than 6) for at least 3 months before both evaluations. At the end of follow-up, psychosocial functioning was also evaluated by means of the Functioning Assessment Short Test. RESULTS: Repeated-measures multivariate analysis of covariance showed main effects of group in the executive, inhibition, processing speed, and verbal memory domains (p<0.04). Among the clinical factors, only longer illness duration was significantly related to slow processing (p=0.01), whereas strong relationships were observed between impoverished cognition over time and poorer psychosocial functioning (p<0.05). CONCLUSIONS: Executive functioning, inhibition, processing speed, and verbal memory were impaired in euthymic bipolar out-patients. Although cognitive deficits remained stable on average throughout the follow-up, they had enduring negative effects on the psychosocial adaptation of patients.
Abstract:
The integration of electric motors and industrial appliances such as pumps, fans, and compressors is rapidly increasing. For instance, the integration of an electric motor and a centrifugal pump provides cost savings and improved performance characteristics. Material cost savings are achieved when an electric motor is integrated into the shaft of a centrifugal pump and the motor utilizes the bearings of the pump. This arrangement leads to a smaller configuration that occupies less floor space. The performance characteristics of a pump drive can be improved by using variable-speed technology. This enables full speed control of the drive and the absence of a mechanical gearbox and couplers. When using rotational speeds higher than those that can be directly achieved at the network frequency, the structure of the rotor has to be mechanically durable. In this thesis, the performance characteristics of an axial-flux solid-rotor-core induction motor are determined. The motor studied is a one-rotor-one-stator axial-flux induction motor, and thus there is only one air gap between the rotor and the stator. The motor was designed for higher rotational speeds, and therefore a good mechanical strength of the solid-rotor-core rotor is required to withstand the mechanical stresses. The construction of the rotor and the high rotational speeds together produce a feature that is not typical of traditional induction motors: the dominating loss component of the motor is the rotor eddy current loss. In a typical industrial induction motor, by contrast, the dominating loss component is the stator copper loss. In this thesis, several methods to decrease the rotor eddy current losses in axial-flux induction motors are presented. A prototype motor with 45 kW output power at 6000 min⁻¹ was designed and constructed to verify the results obtained from the numerical FEM calculations.
In general, this thesis concentrates on the methods for improving the electromagnetic properties of an axial-flux solid-rotor-core induction motor and examines the methods for decreasing the harmonic eddy currents of the rotor. The target is to improve the efficiency of the motor and to reach the efficiency standard of the present-day industrial induction motors equipped with laminated rotors.
Abstract:
One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins, called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental effort towards understanding the sequence specificity of transcription factors is laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX, which allows the generation of a large number (>1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, and assembled clone sequences of binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the most frequently used methods to estimate probabilistic models that represent the sequence specificity of transcription factors.
We present computer simulations in order to estimate the precision of EM-estimated models as a function of data set parameters (such as the length of the initial sequences, the number of initial sequences, and the percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmarked results of the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework called a hidden Markov model was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX experiment data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We next tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities were determined experimentally using non-linear regression. The predicted and experimentally determined binding affinities were in good correlation.
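The EM setting studied above (probabilistic binding-site models estimated from sequence sets contaminated with non-binding sequences) can be sketched as a two-component mixture: fixed-length sites drawn from a position weight matrix (PWM) mixed with uniform background. The motif, contamination rate, and sizes below are invented; this is a generic EM sketch, not the thesis's implementation.

```python
import math, random

# Toy EM in the spirit described: fixed-length sites are a mixture of true
# binding sequences (position-specific base frequencies, a PWM) and
# contaminating background (uniform).  EM estimates the PWM and the mixing
# weight.  Motif, rates, and sizes are invented.

random.seed(4)
BASES = "ACGT"
true_pwm = [{"A": .7, "C": .1, "G": .1, "T": .1},
            {"A": .1, "C": .7, "G": .1, "T": .1},
            {"A": .1, "C": .1, "G": .7, "T": .1}]

def sample(dist):
    r, acc = random.random(), 0.0
    for b in BASES:
        acc += dist[b]
        if r < acc:
            return b
    return "T"

seqs = []
for _ in range(300):
    if random.random() < 0.8:    # 80% genuine binding sites
        seqs.append("".join(sample(col) for col in true_pwm))
    else:                        # 20% background contamination
        seqs.append("".join(random.choice(BASES) for _ in range(3)))

# EM: E-step assigns each sequence a probability of being a true site;
# M-step re-estimates the PWM and the mixing weight from those weights.
pwm = [{b: 0.25 for b in BASES} for _ in range(3)]
w = 0.5
for _ in range(50):
    resp = []
    for s in seqs:
        p_site = w * math.prod(pwm[i][b] for i, b in enumerate(s))
        p_bg = (1 - w) * 0.25 ** 3
        resp.append(p_site / (p_site + p_bg))
    w = sum(resp) / len(resp)
    for i in range(3):
        tot = {b: 1e-6 for b in BASES}      # tiny pseudocount
        for s, r in zip(seqs, resp):
            tot[s[i]] += r
        z = sum(tot.values())
        pwm[i] = {b: tot[b] / z for b in BASES}
```

Starting from a uniform PWM, the first iteration reduces to the empirical base frequencies; later iterations separate sites from background, recovering the motif and an estimate of the contamination level.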
Abstract:
The objective of this work was to determine the efficiency of the Papadakis method on the quality evaluation of experiments with multiple-harvest oleraceous crops, and on the estimate of the covariate and the ideal plot size. Data from nine uniformity trials (five with bean pod, two with zucchini, and two with sweet pepper) and from one experiment with treatments (with sweet pepper) were used. Through the uniformity trials, the best way to calculate the covariate was defined and the optimal plot size was calculated. In the experiment with treatments, analyses of variance and covariance were performed, in which the covariate was calculated by the Papadakis method, and experimental precision was evaluated based on four statistics. The use of analysis of covariance with the covariate obtained by the Papadakis method increases the quality of experiments with multiple-harvest oleraceous crops and allows the use of smaller plot sizes. The best covariate is the one that considers a neighboring plot of each side of the reference plot.
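The Papadakis covariate as applied above (each plot's covariate is the mean residual of one neighboring plot on each side, where residuals are deviations from treatment means) can be sketched for a single row of plots. The layout and yields below are invented for illustration.

```python
# Sketch of the Papadakis covariate: compute each plot's residual from its
# treatment mean, then use the mean residual of the adjacent plot on each
# side as the ANCOVA covariate.  Layout and yields are invented.

yields = [12.0, 14.0, 13.0, 15.0, 11.0, 16.0]   # one row of plots
treats = ["A", "B", "A", "B", "A", "B"]

means = {t: sum(y for y, tt in zip(yields, treats) if tt == t) /
            treats.count(t) for t in set(treats)}
resid = [y - means[t] for y, t in zip(yields, treats)]

def papadakis_covariate(res):
    """Mean residual of the adjacent plot on each side
    (edge plots use their single neighbor)."""
    cov = []
    for i in range(len(res)):
        nb = [res[j] for j in (i - 1, i + 1) if 0 <= j < len(res)]
        cov.append(sum(nb) / len(nb))
    return cov

x = papadakis_covariate(resid)   # covariate for the analysis of covariance
```

Entering `x` as a covariate in the ANOVA absorbs local soil and environment trends, which is what improves precision and permits the smaller plots reported in the article.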
Abstract:
A comparative performance analysis of four geolocation methods in terms of their theoretical root mean square positioning errors is provided. Comparison is established in two different ways: strict and average. In the strict type, methods are examined for a particular geometric configuration of base stations (BSs) with respect to the mobile position, which determines a given noise profile affecting the respective time-of-arrival (TOA) or time-difference-of-arrival (TDOA) estimates. In the average type, methods are evaluated in terms of the expected covariance matrix of the position error over an ensemble of random geometries, so that the comparison is geometry independent. Exact semianalytical equations and associated lower bounds (depending solely on the noise profile) are obtained for the average covariance matrix of the position error in terms of the so-called information matrix specific to each geolocation method. Statistical channel models inferred from field trials are used to define realistic prior probabilities for the random geometries. A final evaluation provides extensive results relating the expected position error to channel model parameters and the number of base stations.
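The strict-comparison quantity above can be illustrated for TOA: for a fixed BS geometry, the position-error covariance is commonly approximated by the inverse of an information matrix built from the unit vectors toward the base stations and the range-noise variances. The geometries and noise levels below are invented assumptions.

```python
import numpy as np

# Sketch of a TOA position-error covariance for a fixed geometry:
#   J = sum_i u_i u_i^T / sigma_i^2,   Cov ≈ J^{-1},
# where u_i is the unit vector from the mobile to BS i and sigma_i the
# range-estimate standard deviation.  Geometry and noise are invented.

def toa_error_covariance(bs, mobile, sigmas):
    J = np.zeros((2, 2))
    for p, s in zip(bs, sigmas):
        u = (p - mobile) / np.linalg.norm(p - mobile)
        J += np.outer(u, u) / s**2
    return np.linalg.inv(J)

mobile = np.array([0.0, 0.0])
good = np.array([[1000.0, 0.0], [0.0, 1000.0], [-700.0, -700.0]])  # spread out
bad = np.array([[1000.0, 0.0], [1000.0, 50.0], [1000.0, -50.0]])   # near-collinear
sig = [30.0, 30.0, 30.0]   # range-noise std dev per BS, in meters

rmse_good = np.sqrt(np.trace(toa_error_covariance(good, mobile, sig)))
rmse_bad = np.sqrt(np.trace(toa_error_covariance(bad, mobile, sig)))
```

Averaging such covariances over an ensemble of random geometries, weighted by channel-model priors, gives the geometry-independent comparison the abstract describes.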