942 results for mean-square error


Relevance: 80.00%

Abstract:

Nitrous oxide fluxes were measured at the Lägeren CarboEurope IP flux site over the multi-species mixed forest dominated by European beech and Norway spruce. Measurements were carried out over a four-week period in October–November 2005, during leaf senescence. Fluxes were measured with a standard ultrasonic anemometer in combination with a quantum cascade laser absorption spectrometer that measured N2O, CO2, and H2O mixing ratios simultaneously at 5 Hz time resolution. To distinguish insignificant fluxes from significant ones, we propose a new approach based on the significance of the correlation coefficient between vertical wind speed and mixing ratio fluctuations. This procedure eliminated roughly 56% of our half-hourly fluxes. Based on the remaining, quality-checked N2O fluxes, we quantified the mean efflux at 0.8±0.4 μmol m⁻² h⁻¹ (mean ± standard error). Most of the contribution to the N2O flux occurred during a 6.5-h period starting 4.5 h before each precipitation event. No relation with precipitation amount could be found. Visibility data representing fog density and duration at the site indicate that wetting of the canopy may have as strong an effect on N2O effluxes as does below-ground microbial activity. It is speculated that above-ground N2O production from the senescing leaves at high moisture (fog, drizzle, onset of a precipitation event) may be responsible for part of the measured flux.
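
The correlation-significance screening described above can be illustrated with a short sketch. The code below is a hypothetical reconstruction, not the authors' processing chain: it computes the half-hourly covariance flux from the fluctuations of vertical wind speed and N2O mixing ratio and keeps the interval only if the Pearson correlation between the two series is statistically significant. Variable names, the 5% significance level, and the synthetic data are assumptions.

```python
import numpy as np
from scipy import stats

def screened_flux(w, c, alpha=0.05):
    """Kinematic flux and significance flag for one half-hourly interval.

    w: vertical wind speed (m/s), c: N2O mixing ratio, both sampled at 5 Hz.
    The kinematic flux is the covariance of the fluctuations; converting it to
    umol m-2 h-1 additionally requires the molar density of dry air.
    """
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    flux = np.mean(w_prime * c_prime)
    r, p_value = stats.pearsonr(w_prime, c_prime)   # significance of the w'-c' correlation
    return flux, p_value < alpha

# Synthetic example: 5 Hz over 30 minutes = 9000 samples
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 9000)
c = 320.0 + 0.01 * w + rng.normal(0.0, 0.5, 9000)   # weak true correlation
print(screened_flux(w, c))
```

Note that consecutive 5 Hz samples are autocorrelated, so the nominal p-value is optimistic; a practical implementation would correct the effective sample size before applying the test.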

Relevance: 80.00%

Abstract:

Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods have recently been proposed as being well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) to evaluate reconstruction quality. In addition to more general design parameters, the relative performance of the bispectrum and the Knox-Thompson phase recovery methods is also compared. As an outcome of this work it can be concluded that speckle-imaging techniques are robust to the variation in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum. The quality of images reconstructed using the Knox-Thompson and bispectrum methods is also found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
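
Since MSE is the figure of merit used throughout this evaluation, a minimal sketch of how it is computed and how a relative reduction in MSE would be reported is given below; the array names and the use of a reference (truth) frame are illustrative assumptions rather than the study's code.

```python
import numpy as np

def mse(reconstruction, reference):
    """Mean squared error between a reconstructed frame and a reference frame."""
    reconstruction = np.asarray(reconstruction, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.mean((reconstruction - reference) ** 2)

def mse_reduction_percent(mse_baseline, mse_improved):
    """Relative MSE reduction, e.g. the kind of '45% reduction in MSE' quoted above."""
    return 100.0 * (mse_baseline - mse_improved) / mse_baseline

print(mse_reduction_percent(2.0, 1.1))   # -> 45.0 (example values only)
```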

Relevance: 80.00%

Abstract:

OBJECTIVE: The objective of this study was to evaluate the feasibility and reproducibility of high-resolution magnetic resonance imaging (MRI) and quantitative T2 mapping of the talocrural cartilage within a clinically applicable scan time using a new dedicated ankle coil and high-field MRI. MATERIALS AND METHODS: Ten healthy volunteers (mean age 32.4 years) underwent MRI of the ankle. As morphological sequences, proton density fat-suppressed turbo spin echo (PD-FS-TSE), as a reference, was compared with 3D true fast imaging with steady-state precession (TrueFISP). Furthermore, biochemical quantitative T2 imaging was performed using a multi-echo spin-echo T2 approach. Data analysis was performed three times each by three different observers on sagittal slices, planned on the isotropic 3D-TrueFISP; as a morphological parameter, cartilage thickness was assessed, and for T2 relaxation times, region-of-interest (ROI) evaluation was done. Reproducibility was determined as a coefficient of variation (CV) for each volunteer, averaged as a root mean square (RMSA) and given as a percentage; statistical evaluation was done using analysis of variance. RESULTS: Cartilage thickness of the talocrural joint showed significantly higher values for the 3D-TrueFISP (ranging from 1.07 to 1.14 mm) compared with the PD-FS-TSE (ranging from 0.74 to 0.99 mm); however, both morphological sequences showed comparably good results, with RMSA values of 7.1 to 8.5%. Regarding quantitative T2 mapping, measurements showed T2 relaxation times of about 54 ms with an excellent reproducibility (RMSA) ranging from 3.2 to 4.7%. CONCLUSION: In our study the assessment of cartilage thickness and T2 relaxation times could be performed with high reproducibility in a clinically realizable scan time, demonstrating new possibilities for further investigations in patient groups.
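
The reproducibility statistic described above (a per-volunteer coefficient of variation pooled as a root mean square average, RMSA) can be sketched as follows; the function and the example values are illustrative assumptions, not the study's data or code.

```python
import numpy as np

def rmsa_percent(repeated_measurements):
    """Rows = volunteers, columns = repeated readings of the same quantity
    (e.g. repeated observer evaluations of T2 or cartilage thickness)."""
    data = np.asarray(repeated_measurements, dtype=float)
    cv = data.std(axis=1, ddof=1) / data.mean(axis=1)   # CV per volunteer
    return 100.0 * np.sqrt(np.mean(cv ** 2))            # root-mean-square average CV

# Example: T2 values (ms) for three volunteers, three repeated ROI evaluations each
t2_readings = [[54.1, 52.9, 55.0],
               [56.2, 54.8, 55.5],
               [52.0, 53.3, 51.7]]
print(rmsa_percent(t2_readings))   # a few percent, of the order reported above
```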

Relevance: 80.00%

Abstract:

This study develops an automated analysis tool that combines total internal reflection fluorescence microscopy (TIRFM), an evanescent-wave microscopic imaging technique used to capture time-sequential images, with corresponding Matlab image-processing code that identifies the movements of individual particles. The developed code enables us to examine two-dimensional hindered tangential Brownian motion of nanoparticles with sub-pixel (nanoscale) resolution. The measured mean square displacements of nanoparticles are compared with theoretical predictions to estimate particle diameters and fluid viscosities using a nonlinear regression technique. These estimates are then checked against the diameters and viscosities given by the manufacturers to validate the analysis tool. The nanoparticles used in these experiments are yellow-green fluorescent polystyrene nanospheres (nominal diameters of 200 nm, 500 nm and 1000 nm; 505 nm excitation and 515 nm emission wavelengths). The solutions used are de-ionized (DI) water, 10% d-glucose and 10% glycerol. Mean square displacements obtained near the surface show significant deviations from theoretical predictions, which are attributed to DLVO forces in that region, but they conform to the predictions beyond ~125 nm from the surface. Unlike traditional measurement techniques that require fixing the cells, the proposed automated analysis tool can be employed in bio-applications such as single-protein (DNA and/or vesicle) tracking, drug delivery, and cytotoxicity studies. Furthermore, this tool can also be applied in microfluidics for non-invasive thermometry, particle tracking velocimetry (PTV), and non-invasive viscometry.
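
The estimation step described above (fitting measured mean square displacements to a diffusion model and converting the diffusion coefficient to a particle diameter via the Stokes-Einstein relation) can be sketched as follows. This is a hedged illustration, not the study's Matlab code: it assumes free two-dimensional diffusion (MSD = 4Dt), ignores the near-wall hindrance correction, and uses made-up lag times and MSD values roughly consistent with a ~200 nm sphere in water.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 298.15             # temperature, K
eta = 1.0e-3           # Pa*s, viscosity of water (assumed known here)

def msd_model(t, D):
    return 4.0 * D * t          # free 2-D Brownian motion

# lag times (s) and measured MSD (m^2) would come from the particle tracks
lags = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
msd = np.array([4.4e-13, 8.7e-13, 1.31e-12, 1.76e-12, 2.19e-12])

(D_fit,), _ = curve_fit(msd_model, lags, msd, p0=[1e-12])   # nonlinear regression
diameter = kB * T / (3.0 * np.pi * eta * D_fit)             # Stokes-Einstein, m
print(D_fit, diameter * 1e9, "nm")
```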

Relevance: 80.00%

Abstract:

The capability to detect combustion in a diesel engine has the potential to be an important control feature for meeting increasingly stringent emission regulations, developing alternative combustion strategies, and using biofuels. In this dissertation, block-mounted accelerometers were investigated as potential feedback sensors for detecting combustion characteristics in a high-speed, high-pressure common rail (HPCR), 1.9 L diesel engine. Accelerometers were positioned in multiple placements and orientations on the engine, and engine testing was conducted under motored, single-injection, and pilot-main injection conditions. Engine tests were conducted at varying injection timings, engine loads, and engine speeds to observe the resulting time- and frequency-domain changes of the cylinder pressure and accelerometer signals. The frequency content of the cylinder pressure-based signals and the accelerometer signals between 0.5 kHz and 6 kHz indicated a strong correlation, with coherence values of nearly 1. The accelerometers were used to produce estimated combustion signals using the Frequency Response Functions (FRFs) measured from the frequency-domain characteristics of the cylinder pressure signals and the response of the accelerometers attached to the engine block. When compared to the actual combustion signals, the estimated combustion signals produced from the accelerometer response had Root Mean Square Errors (RMSE) between 7% and 25% of the actual signal's peak value. Weighting the FRFs from multiple test conditions along their frequency axis with the coherent output power reduced the median RMSE of the estimated combustion signals and the 95th percentile of the RMSE produced from each test condition. The RMSEs of the magnitude-based combustion metrics estimated from the combustion signals produced by the accelerometer responses, including peak cylinder pressure, maximum pressure gradient (MPG), peak rate of heat release (ROHR), and work, were between 15% and 50% of their actual values. The MPG measured from the estimated pressure gradient shared a direct relationship with the actual MPG. Location-based combustion metrics, such as the locations of peak values and burn durations, were capable of RMSE values as low as 0.9°. Overall, the accelerometer-based combustion sensing system was capable of detecting combustion and providing feedback on the in-cylinder combustion process.
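
A minimal sketch of the FRF and coherence estimation implied above is given below, using the standard H1 = Sxy/Sxx estimator between an accelerometer record and the cylinder pressure, plus a helper that expresses RMSE as a percentage of the actual signal's peak. The sampling rate, segment length, and synthetic placeholder signals are assumptions; this is not the dissertation's code.

```python
import numpy as np
from scipy import signal

fs = 20000                                          # Hz (assumed sampling rate)
rng = np.random.default_rng(1)
accel = rng.normal(size=2 * fs)                     # placeholder accelerometer record
p_cyl = (np.convolve(accel, np.ones(50) / 50.0, mode="same")
         + 0.1 * rng.normal(size=2 * fs))           # placeholder cylinder pressure

f, Sxx = signal.welch(accel, fs=fs, nperseg=2048)             # input auto-spectrum
_, Sxy = signal.csd(accel, p_cyl, fs=fs, nperseg=2048)        # cross-spectrum
_, coh = signal.coherence(accel, p_cyl, fs=fs, nperseg=2048)  # coherence gamma^2(f)

H1 = Sxy / Sxx                # FRF estimate from accelerometer to cylinder pressure
band = (f >= 500.0) & (f <= 6000.0)                 # 0.5-6 kHz band used in the study
print(np.mean(coh[band]))                           # near 1 indicates a usable FRF

def rmse_percent_of_peak(estimated, actual):
    """RMSE of an estimated combustion signal as a percentage of the actual peak."""
    estimated, actual = np.asarray(estimated, float), np.asarray(actual, float)
    return 100.0 * np.sqrt(np.mean((estimated - actual) ** 2)) / np.max(np.abs(actual))
```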

Relevance: 80.00%

Abstract:

OBJECTIVE: To assess whether stress further increases hypercoagulation in older individuals. We investigated whether acute stress-induced changes in coagulation parameters differ with age. It is known that hypercoagulation occurs in response to acute stress and that a shift in hemostasis toward a hypercoagulability state occurs with age. However, it is not yet known whether acute stress further increases hypercoagulation in older individuals, and thus may increase their risk for cardiovascular disease (CVD). METHODS: A total of 63 medication-free nonsmoking men, aged between 20 and 65 years (mean +/- standard error of the mean = 36.7 +/- 1.7 years), underwent an acute standardized psychosocial stress task combining public speaking and mental arithmetic in front of an audience. We measured plasma clotting factor VII activity (FVII:C), fibrinogen, and D-dimer at rest, immediately, and 20 minutes after stress. RESULTS: Increased age predicted greater increases in fibrinogen (beta = 0.26, p = 0.041; DeltaR(2) = 0.05), FVII:C (beta = 0.40, p = .006; DeltaR(2) = 0.11), and D-dimer (beta = 0.51, p < .001; DeltaR(2) = 0.18) from rest to 20 minutes after stress independent of body mass index and mean arterial blood pressure. General linear models revealed significant effects of age and stress on fibrinogen, FVII:C, and D-dimer (main effects: p < .04), and greater D-dimer stress reactivity with older age (interaction age-by-stress: F(1.5/90.4) = 4.36, p = .024; f = 0.33). CONCLUSIONS: Our results suggest that acute stress might increase vulnerability in the elderly for hypercoagulability and subsequent hemostasis-associated diseases like CVD.

Relevance: 80.00%

Abstract:

We used the Green's functions from auto-correlations and cross-correlations of seismic ambient noise to monitor temporal velocity changes in the subsurface at Villarrica volcano in the Southern Andes of Chile. Campaigns were conducted from March to October 2010 and February to April 2011 with 8 broadband and 6 short-period stations, respectively. We prepared the data by removing the instrument response, normalizing with a root-mean-square method, whitening the spectra, and filtering from 1 to 10 Hz. This frequency band was chosen based on the relatively high background noise level in that range. Hour-long auto- and cross-correlations were computed and the Green's functions stacked by day and total time. To track the temporal velocity changes, we stretched a 24-hour moving window of correlation functions from 90% to 110% of the original and cross-correlated them with the total stack. All of the stations' auto-correlations detected what is interpreted as an increase in velocity in 2010, with an average increase of 0.13%. Cross-correlations from station V01, near the summit, to the other stations show comparable changes that are also interpreted as increases in velocity. We attribute this change to the closing of cracks in the subsurface due either to seasonal snow loading or regional tectonics. In addition to the common increase in velocity across the stations, there are excursions in velocity on the same order lasting several days. The amplitude decreases as the station's distance from the vent increases, suggesting these excursions may be attributed to changes within the volcanic edifice. In at least two occurrences the amplitudes at stations V06 and V07, the stations farthest from the vent, are smaller. Similar short temporal excursions were seen in the auto-correlations from 2011; however, there was little to no increase in the overall velocity.
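
The stretching procedure described above can be illustrated with a short sketch: a daily correlation function is resampled over a grid of stretch factors between 0.90 and 1.10 and compared with the total stack, and the factor giving the highest correlation tracks the apparent velocity change. Function and variable names are assumptions, and the sign convention relating the stretch factor to dv/v varies between studies.

```python
import numpy as np

def best_stretch(daily_cf, reference_cf, dt, factors=np.linspace(0.90, 1.10, 201)):
    """Return the apparent relative velocity change and the best correlation."""
    t = np.arange(len(reference_cf)) * dt
    best_cc, best_f = -np.inf, 1.0
    for f in factors:
        stretched = np.interp(t * f, t, daily_cf)     # daily function on a stretched axis
        cc = np.corrcoef(stretched, reference_cf)[0, 1]
        if cc > best_cc:
            best_cc, best_f = cc, f
    # a stretch factor > 1 corresponds to later arrivals, i.e. an apparent
    # velocity decrease (sign conventions differ between studies)
    return best_f - 1.0, best_cc

# Demo: a reference correlation function and a 2% stretched "daily" version
dt = 0.01
t = np.arange(500) * dt
ref = np.sin(2 * np.pi * 3 * t) * np.exp(-t)
daily = np.interp(t * 1.02, t, ref)
print(best_stretch(daily, ref, dt))
```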

Relevance: 80.00%

Abstract:

BACKGROUND: Elevated plasma fibrinogen levels have prospectively been associated with an increased risk of coronary artery disease in different populations. Plasma fibrinogen is a measure of systemic inflammation crucially involved in atherosclerosis. The vagus nerve curtails inflammation via a cholinergic antiinflammatory pathway. We hypothesized that lower vagal control of the heart relates to higher plasma fibrinogen levels. METHODS: Study participants were 559 employees (age 17-63 years; 89% men) of an airplane manufacturing plant in southern Germany. All subjects underwent medical examination, blood sampling, and 24-hour ambulatory heart rate recording while kept on their work routine. The root mean square of successive differences in RR intervals during the night period (nighttime RMSSD) was computed as the heart rate variability index of vagal function. RESULTS: After controlling for demographic, lifestyle, and medical factors, nighttime RMSSD explained 1.7% (P = 0.001), 0.8% (P = 0.033), and 7.8% (P = 0.007), respectively, of the variance in fibrinogen levels in all subjects, men, and women. Nighttime RMSSD and fibrinogen levels were more strongly correlated in women than in men. In all workers, men, and women, respectively, there was a mean +/- SEM increase of 0.41 +/- 0.13 mg/dL, 0.28 +/- 0.13 mg/dL, and 1.16 +/- 0.41 mg/dL fibrinogen for each millisecond decrease in nighttime RMSSD. CONCLUSIONS: Reduced vagal outflow to the heart correlated with elevated plasma fibrinogen levels independent of the established cardiovascular risk factors. This relationship appeared to be comparatively stronger in women than in men. Such an autonomic mechanism might contribute to the atherosclerotic process and its thrombotic complications.
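
The heart rate variability index used above, RMSSD, is the root mean square of successive differences of RR intervals. A minimal sketch of its computation is shown below; the RR series is an invented example in milliseconds.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diff = np.diff(rr)
    return np.sqrt(np.mean(diff ** 2))

print(rmssd([812, 830, 825, 841, 835, 820]))   # example nighttime RR series
```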

Relevance: 80.00%

Abstract:

OBJECTIVE: To investigate the relationship between social support and coagulation parameter reactivity to mental stress in men and to determine if norepinephrine is involved. Lower social support is associated with higher basal coagulation activity and greater norepinephrine stress reactivity, which in turn, is linked with hypercoagulability. However, it is not known if low social support interacts with stress to further increase coagulation reactivity or if norepinephrine affects this association. These findings may be important for determining if low social support influences thrombosis and possible acute coronary events in response to acute stress. We investigated the relationship between social support and coagulation parameter reactivity to mental stress in men and determined if norepinephrine is involved. METHODS: We measured perceived social support in 63 medication-free nonsmoking men (age (mean +/- standard error of the mean) = 36.7 +/- 1.7 years) who underwent an acute standardized psychosocial stress task combining public speaking and mental arithmetic in front of an audience. We measured plasma D-dimer, fibrinogen, clotting Factor VII activity (FVII:C), and plasma norepinephrine at rest as well as immediately after stress and 20 minutes after stress. RESULTS: Independent of body mass index, mean arterial pressure, and age, lower social support was associated with higher D-dimer and fibrinogen levels at baseline (p < .012) and with greater increases in fibrinogen (beta = -0.36, p = .001; DeltaR(2) = .12), and D-dimer (beta = -0.21, p = .017; DeltaR(2) = .04), but not in FVII:C (p = .83) from baseline to 20 minutes after stress. General linear models revealed significant main effects of social support and stress on fibrinogen, D-dimer, and norepinephrine (p < .035). Controlling for norepinephrine did not change the significance of the reported associations between social support and the coagulation measures D-dimer and fibrinogen. CONCLUSIONS: Our results suggest that lower social support is associated with greater coagulation activity before and after acute stress, which was unrelated to norepinephrine reactivity.

Relevance: 80.00%

Abstract:

We present a vertically resolved zonal mean monthly mean global ozone data set spanning the period 1901 to 2007, called HISTOZ.1.0. It is based on a new approach that combines information from an ensemble of chemistry climate model (CCM) simulations with historical total column ozone information. The CCM simulations incorporate important external drivers of stratospheric chemistry and dynamics (in particular solar and volcanic effects, greenhouse gases and ozone depleting substances, sea surface temperatures, and the quasi-biennial oscillation). The historical total column ozone observations include ground-based measurements from the 1920s onward and satellite observations from 1970 to 1976. An off-line data assimilation approach is used to combine model simulations, observations, and information on the observation error. The period starting in 1979 was used for validation with existing ozone data sets and therefore only ground-based measurements were assimilated. Results demonstrate considerable skill from the CCM simulations alone. Assimilating observations provides additional skill for total column ozone. With respect to the vertical ozone distribution, assimilating observations increases on average the correlation with a reference data set, but does not decrease the mean squared error. Analyses of HISTOZ.1.0 with respect to the effects of El Niño–Southern Oscillation (ENSO) and of the 11 yr solar cycle on stratospheric ozone from 1934 to 1979 qualitatively confirm previous studies that focussed on the post-1979 period. The ENSO signature exhibits a much clearer imprint of a change in strength of the Brewer–Dobson circulation compared to the post-1979 period. The imprint of the 11 yr solar cycle is slightly weaker in the earlier period. Furthermore, the total column ozone increase from the 1950s to around 1970 at northern mid-latitudes is briefly discussed. Indications for contributions of a tropospheric ozone increase, greenhouse gases, and changes in atmospheric circulation are found. Finally, the paper points at several possible future improvements of HISTOZ.1.0.
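
The off-line assimilation step described above combines a model background with observations, weighted by their respective error (co)variances. As a purely generic illustration of that kind of update, not the HISTOZ algorithm itself, the sketch below applies a standard optimal-interpolation (Kalman-gain) update to a toy three-layer ozone profile observed only through its total column; all matrices and values are invented.

```python
import numpy as np

def oi_update(x_background, B, y_obs, R, H):
    """x_background: model state (e.g. a zonal-mean ozone profile);
    B: background error covariance; y_obs: observations (e.g. total column ozone);
    R: observation error covariance; H: observation operator mapping state to obs."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain matrix
    return x_background + K @ (y_obs - H @ x_background)

# Toy example: three partial columns (DU) observed only as their vertical sum
x_b = np.array([100.0, 150.0, 80.0])                    # background partial columns
B = np.diag([25.0, 25.0, 16.0])
H = np.array([[1.0, 1.0, 1.0]])                         # total-column operator
R = np.array([[9.0]])
y = np.array([340.0])                                   # observed total column (DU)
print(oi_update(x_b, B, y, R, H))
```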

Relevance: 80.00%

Abstract:

A lack of quantitative high resolution paleoclimate data from the Southern Hemisphere limits the ability to examine current trends within the context of long-term natural climate variability. This study presents a temperature reconstruction for southern Tasmania based on analyses of a sediment core from Duckhole Lake (43.365°S, 146.875°E). The relationship between non-destructive whole core scanning reflectance spectroscopy measurements in the visible spectrum (380–730 nm) and the instrumental temperature record (AD 1911–2000) was used to develop a calibration-in-time reflectance spectroscopy-based temperature model. Results showed that a trough in reflectance from 650 to 700 nm, which represents chlorophyll and its derivatives, was significantly correlated to annual mean temperature. A calibration model was developed (R = 0.56, p_auto < 0.05, root mean squared error of prediction (RMSEP) = 0.21°C, five-year filtered data, calibration period 1911–2000) and applied down-core to reconstruct annual mean temperatures in southern Tasmania over the last c. 950 years. This indicated that temperatures were initially cool c. AD 1050, but steadily increased until the late AD 1100s. After a brief cool period in the AD 1200s, temperatures again increased. Temperatures steadily decreased during the AD 1600s and remained relatively stable until the start of the 20th century, when they rapidly decreased, before increasing from the AD 1960s onwards. Comparisons with high resolution temperature records from western Tasmania, New Zealand and South America revealed some similarities, but also highlighted differences in temperature variability across the mid-latitudes of the Southern Hemisphere. These are likely due to a combination of factors including the spatial variability in climate between and within regions, and differences between records that document seasonal (i.e. warm season/late summer) versus annual temperature variability. This highlights the need for further records from the mid-latitudes of the Southern Hemisphere in order to constrain past natural spatial and seasonal/annual temperature variability in the region, and to accurately identify and attribute changes to natural variability and/or anthropogenic activities.
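
A calibration-in-time model of the kind described above regresses the instrumental temperature series onto a reflectance index for the overlapping period and reports a prediction error such as the RMSEP. The sketch below is a hedged, generic illustration (not the study's method): it fits a simple linear model to a trough-area proxy for the 650-700 nm chlorophyll feature and estimates RMSEP by leave-one-out cross-validation; names, data handling (e.g. the five-year filtering), and the demo values are assumptions.

```python
import numpy as np

def loo_rmsep(proxy, temperature):
    """Leave-one-out root mean squared error of prediction for a linear calibration."""
    proxy, temperature = np.asarray(proxy, float), np.asarray(temperature, float)
    errors = []
    for i in range(len(proxy)):
        keep = np.arange(len(proxy)) != i
        slope, intercept = np.polyfit(proxy[keep], temperature[keep], 1)
        errors.append(slope * proxy[i] + intercept - temperature[i])
    return np.sqrt(np.mean(np.square(errors)))

# proxy = trough area over 650-700 nm per core slice; temperature = overlapping
# instrumental annual means, both on the same (filtered) time step
proxy = np.array([0.11, 0.14, 0.09, 0.16, 0.13, 0.10, 0.15, 0.12])
temp = np.array([9.1, 9.6, 8.8, 9.9, 9.5, 8.9, 9.8, 9.3])
print(loo_rmsep(proxy, temp))
```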

Relevance: 80.00%

Abstract:

High-resolution, well-calibrated records of lake sediments are critically important for quantitative climate reconstructions, but they remain a methodological and analytical challenge. While several comprehensive paleotemperature reconstructions have been developed across Europe, only a few quantitative high-resolution studies exist for precipitation. Here we present a calibration and verification study of lithoclastic sediment proxies from proglacial Lake Oeschinen (46°30′N, 7°44′E, 1,580 m a.s.l., north–west Swiss Alps) that are sensitive to rainfall for the period AD 1901–2008. We collected two sediment cores, one in 2007 and another in 2011. The sediments are characterized by two facies: (A) mm-laminated clastic varves and (B) turbidites. The annual character of the laminae couplets was confirmed by radiometric dating (²¹⁰Pb, ¹³⁷Cs) and independent flood-layer chronomarkers. Individual varves consist of a dark sand-size spring-summer layer enriched in siliciclastic minerals and a lighter clay-size calcite-rich winter layer. Three subtypes of varves are distinguished: Type I with a 1–1.5 mm fining upward sequence; Type II with a distinct fine-sand base up to 3 mm thick; and Type III containing multiple internal microlaminae caused by individual summer rainstorm deposits. Delta-fan surface samples and sediment trap data fingerprint different sediment source areas and transport processes from the watershed and confirm the instant response of sediment flux to rainfall and erosion. Based on a highly accurate, precise and reproducible chronology, we demonstrate that sediment accumulation (varve thickness) is a quantitative predictor for cumulative boreal alpine spring (May–June) and spring/summer (May–August) rainfall (r_MJ = 0.71, r_MJJA = 0.60, p < 0.01). Bootstrap-based verification of the calibration model reveals a root mean squared error of prediction (RMSEP_MJ = 32.7 mm, RMSEP_MJJA = 57.8 mm) which is on the order of 10–13% of mean MJ and MJJA cumulative precipitation, respectively. These results highlight the potential of the Lake Oeschinen sediments for high-resolution reconstructions of past rainfall conditions in the northern Swiss Alps, central and eastern France and south-west Germany.
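
A bootstrap-based verification of such a calibration can be sketched as follows: the varve thickness-rainfall regression is refit on resampled calibration years and its prediction error is evaluated on the years left out of each resample, yielding an RMSEP. This is a generic, hedged sketch with assumed names, a simple linear model, and invented demo values, not the study's procedure.

```python
import numpy as np

def bootstrap_rmsep(thickness, rainfall, n_boot=1000, seed=0):
    """Out-of-bag RMSEP for a linear varve-thickness vs. rainfall calibration."""
    thickness, rainfall = np.asarray(thickness, float), np.asarray(rainfall, float)
    rng = np.random.default_rng(seed)
    n = len(thickness)
    sq_errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # bootstrap sample of years
        oob = np.setdiff1d(np.arange(n), idx)        # years not drawn this time
        if oob.size == 0:
            continue
        slope, intercept = np.polyfit(thickness[idx], rainfall[idx], 1)
        pred = slope * thickness[oob] + intercept
        sq_errors.extend((pred - rainfall[oob]) ** 2)
    return np.sqrt(np.mean(sq_errors))

thk = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3])       # mm, illustrative
rain = np.array([210., 150., 260., 160., 200., 240., 140., 230.])  # mm, illustrative
print(bootstrap_rmsep(thk, rain, n_boot=200))
```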

Relevance: 80.00%

Abstract:

In this paper, we show statistical analyses of several types of traffic sources in a 3G network, namely voice, video and data sources. For each traffic source type, measurements were collected in order to, on the one hand, gain better understanding of the statistical characteristics of the sources and, on the other hand, enable forecasting traffic behaviour in the network. The latter can be used to estimate service times and quality of service parameters. The probability density function, mean, variance, mean square deviation, skewness and kurtosis of the interarrival times are estimated by Wolfram Mathematica and Crystal Ball statistical tools. Based on evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in evaluation of available capacity in opportunistic systems. As a result, from our analyses, shape and scale parameters of gamma distribution are generated. Data can be applied also in dynamic network configuration in order to avoid potential network congestions or overflows. Copyright © 2013 John Wiley & Sons, Ltd.
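
As an illustration of the distribution fitting described above, the sketch below fits a two-parameter gamma distribution to a set of packet interarrival times with SciPy and derives the corresponding moments from the fitted shape and scale; the data array is a small placeholder, not measured 3G traffic.

```python
import numpy as np
from scipy import stats

interarrival_s = np.array([0.012, 0.034, 0.008, 0.051, 0.022, 0.017, 0.040, 0.009])

# loc fixed at 0 so the fit returns the usual two-parameter gamma distribution
shape, loc, scale = stats.gamma.fit(interarrival_s, floc=0.0)

mean = shape * scale                 # gamma mean = k * theta
variance = shape * scale ** 2        # gamma variance = k * theta^2
skewness = 2.0 / np.sqrt(shape)
excess_kurtosis = 6.0 / shape
print(shape, scale, mean, variance, skewness, excess_kurtosis)
```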

Relevance: 80.00%

Abstract:

PURPOSE    Segmentation of the proximal femur in digital antero-posterior (AP) pelvic radiographs is required to create a three-dimensional model of the hip joint for use in planning and treatment. However, manually extracting the femoral contour is tedious and prone to subjective bias, while automatic segmentation must accommodate poor image quality, anatomical structure overlap, and femur deformity. A new method was developed for femur segmentation in AP pelvic radiographs. METHODS    Using manual annotations on 100 AP pelvic radiographs, a statistical shape model (SSM) and a statistical appearance model (SAM) of the femur contour were constructed. The SSM and SAM were used to segment new AP pelvic radiographs with a three-stage approach. At initialization, the mean SSM model is coarsely registered to the femur in the AP radiograph through a scaled rigid registration. The Mahalanobis distance defined on the SAM is employed as the search criterion for the suggested location of each annotated landmark. Dynamic programming is used to eliminate ambiguities. After all landmarks are assigned, a regularized non-rigid registration method deforms the current mean shape of the SSM to produce a new segmentation of the proximal femur. The second and third stages are iteratively executed until convergence. RESULTS    A set of 100 clinical AP pelvic radiographs (not used for training) was evaluated. The mean segmentation error was [Formula: see text], requiring [Formula: see text] s per case when implemented with Matlab. The influence of the initialization on segmentation results was tested by six clinicians, demonstrating no significant difference. CONCLUSIONS    A fast, robust and accurate method for femur segmentation in digital AP pelvic radiographs was developed by combining SSM and SAM with dynamic programming. This method can be extended to segmentation of other bony structures such as the pelvis.
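
The Mahalanobis-distance search criterion mentioned above scores candidate appearance profiles sampled around a landmark against that landmark's appearance statistics (mean profile and covariance from the SAM). The sketch below is a hedged illustration with assumed names and random demo data, not the published implementation.

```python
import numpy as np

def mahalanobis_cost(candidate_profile, sam_mean, sam_cov_inv):
    """Squared Mahalanobis distance of a candidate profile to the SAM statistics."""
    d = np.asarray(candidate_profile, float) - sam_mean
    return float(d @ sam_cov_inv @ d)

def best_candidate(candidates, sam_mean, sam_cov):
    """Return the index and cost of the best-matching candidate location."""
    cov_inv = np.linalg.pinv(sam_cov)      # pseudo-inverse guards against singularity
    costs = [mahalanobis_cost(c, sam_mean, cov_inv) for c in candidates]
    return int(np.argmin(costs)), min(costs)

# Demo with random profiles of length 5
rng = np.random.default_rng(0)
sam_mean = np.zeros(5)
sam_cov = np.eye(5)
candidates = rng.normal(size=(10, 5))
print(best_candidate(candidates, sam_mean, sam_cov))
```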

Relevance: 80.00%

Abstract:

This study aimed to characterize the nociceptive withdrawal reflex (NWR) and to define the nociceptive threshold in 25 healthy, non-medicated experimental sheep in standing posture. Electrical stimulation of the dorsal lateral digital nerves of the right thoracic and the pelvic limb was performed, and surface electromyography (EMG) from the deltoid (all animals) and the biceps femoris (18 animals) or the peroneus tertius muscles (7 animals) was recorded. The behavioural reaction following each stimulation was scored on a scale from 0 (no reaction) to 5 (strong whole body reaction). A train of five 1-ms constant-current pulses was used, and the current intensity was increased stepwise until the NWR threshold intensity was reached. The NWR threshold intensity (I_t) was defined as the minimal stimulus intensity able to evoke a reflex with a minimal Root-Mean-Square amplitude (RMSA) of 20 μV, a minimal duration of 10 ms and a minimal reaction score of 1 (slight muscle contraction of the stimulated limb) within the time window of 20 to 130 ms post-stimulation. Based on this value, further stimulations were performed below (0.9 I_t) and above threshold (1.5 I_t and 2 I_t). The stimulus-response curve was described. Data are reported as medians and interquartile ranges. At the deltoid muscle, I_t was 4.4 mA (2.9–5.7) with an RMSA of 62 μV (30–102). At the biceps femoris muscle, I_t was 7.0 mA (4.0–10.0) with an RMSA of 43 μV (34–50), and at the peroneus tertius muscle, I_t was 3.4 mA (3.1–4.4) with an RMSA of 38 μV (32–46). Above threshold, RMSA was significantly increased at all muscles. Below threshold, RMSA was only significantly smaller than at I_t for the peroneus tertius muscle, but not for the other muscles.
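
The reflex detection rule above compares the root mean square amplitude of the EMG within the 20-130 ms post-stimulation window against the 20 μV criterion. The sketch below shows only that RMSA criterion (the minimal-duration and behavioural-score criteria are omitted); the sampling rate, variable names, and simulated EMG are assumptions.

```python
import numpy as np

def nwr_detected(emg_uV, fs_hz, stim_index, rmsa_threshold_uV=20.0,
                 window_ms=(20.0, 130.0)):
    """RMSA of the EMG in the post-stimulation window and whether it exceeds threshold."""
    start = stim_index + int(window_ms[0] * 1e-3 * fs_hz)
    stop = stim_index + int(window_ms[1] * 1e-3 * fs_hz)
    segment = np.asarray(emg_uV[start:stop], dtype=float)
    rmsa = np.sqrt(np.mean(segment ** 2))
    return rmsa >= rmsa_threshold_uV, rmsa

# Demo: 1 s of baseline EMG noise at 2 kHz with a simulated reflex burst
fs = 2000
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 5.0, fs)
emg[440:560] += 60.0                  # burst inside the 20-130 ms window after stimulus
print(nwr_detected(emg, fs, stim_index=400))
```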