946 results for "mean square analysis"
Abstract:
The first chapter introduces the study carried out and describes a measurement method following the surface characterization. The second chapter describes the samples analyzed and, specifically, the growth of silicon nanowires via MaCE. The third chapter describes the AFM instrument used and the characterization theory underlying the study. The fourth section describes the results obtained, while the conclusions draw together the obtained values of the RMS roughness and the roughness exponent.
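As a generic illustration of the RMS roughness figure referred to above (a sketch with synthetic data, not the thesis's actual analysis code), the quantity is the root-mean-square deviation of surface heights from their mean plane:

```python
import numpy as np

def rms_roughness(height_map):
    """RMS roughness: root-mean-square deviation of AFM surface
    heights from their mean plane (flat-plane reference)."""
    h = np.asarray(height_map, dtype=float)
    return np.sqrt(np.mean((h - h.mean()) ** 2))

# Synthetic example: a flat surface has zero roughness; adding
# Gaussian height noise raises the RMS value toward the noise sigma.
flat = np.zeros((64, 64))
rough = flat + np.random.default_rng(0).normal(0.0, 2.0, flat.shape)

print(rms_roughness(flat))   # 0.0
print(rms_roughness(rough))  # close to the noise sigma of 2.0
```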
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change-of-support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting: hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Direct rain gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration that exploits rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications addressing COSP in different ways are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point, and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score), and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively).
Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
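As a minimal sketch of the occurrence part of the evaluation above (not the authors' C/R implementation; the forecasts and observations here are made up), the Brier score is the mean squared difference between forecast rain probabilities and the observed 0/1 outcomes:

```python
import numpy as np

def brier_score(prob_forecast, occurred):
    """Brier score: mean squared difference between forecast
    probabilities of rain occurrence and observed 0/1 outcomes.
    Lower is better; 0 is a perfect deterministic forecast."""
    p = np.asarray(prob_forecast, dtype=float)
    o = np.asarray(occurred, dtype=float)
    return np.mean((p - o) ** 2)

# Hypothetical hourly rain-occurrence forecasts and observations.
probs = np.array([0.9, 0.1, 0.8, 0.3])
obs = np.array([1, 0, 1, 0])
print(brier_score(probs, obs))  # 0.0375
```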
Abstract:
Topological constraints influence the properties of polymers. In this work, computer simulations are used to examine in detail the extent to which the static properties of collapsed polymer rings, polymer rings in concentrated solutions, and brushes built from polymer rings differ between systems with and without topological constraints. Furthermore, the influence of geometric confinement on the topological properties of single polymer chains is analyzed. The first part of the thesis concerns the influence of topology on the properties of single polymer chains in various situations. Since the efficient Monte Carlo simulation of collapsed polymer chains is a particular challenge, three bridging Monte Carlo moves are first transferred from lattice to continuum models. A measurement of the efficiency of these moves yields a speed-up factor of up to 100 compared with the conventional slithering-snake algorithm. This is followed by the analysis of a single coarse-grained polystyrene chain in spherical confinement with respect to entanglements and knots. It is shown that significant knotting of the polystyrene chain only sets in once the radius of the surrounding capsid is smaller than the chain's radius of gyration. Furthermore, both Monte Carlo and molecular dynamics simulations of very large rings with up to one million monomers in the collapsed state are performed. While the configurations from the Monte Carlo simulations are strongly knotted owing to the use of the bridging moves, the configurations from the molecular dynamics simulations remain unknotted. Significant differences appear in both the local and the global structure of the ring polymers.
The second part of the thesis investigates the scaling behavior of the radius of gyration of individual polymer rings in a concentrated solution of fully flexible polymer rings in the continuum. The onset of the asymptotic scaling regime, which is consistent with the "fractal globule" model, is reached. The concluding third part compares the behavior of brushes made of linear polymers with that of ring-polymer brushes. It turns out that the structure and the scaling behavior of the two systems deviate markedly from each other even for identical density profiles parallel to the substrate, although the properties of the two systems agree in the direction perpendicular to the substrate. A comparison of the relaxation behavior of individual chains in conventional polymer brushes and in ring brushes reveals no major differences. However, it also emerges that the explanations used so far for the relaxation behavior of conventional brushes are insufficient, since they only account for the initial decay of the correlation function. Examining the dynamics of individual monomers in a conventional brush of open chains, from the substrate toward the free end, shows that the monomers in the middle of the chain exhibit the slowest relaxation, even though their mean displacement is considerably smaller than that of the free end monomers.
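As a generic illustration (not the simulation code of the thesis), the radius of gyration on which the scaling analysis above is based can be computed from monomer coordinates as the root-mean-square distance from the chain's center of mass:

```python
import numpy as np

def radius_of_gyration(coords):
    """Radius of gyration: root-mean-square distance of the
    monomers from the chain's center of mass."""
    r = np.asarray(coords, dtype=float)
    com = r.mean(axis=0)
    return np.sqrt(np.mean(np.sum((r - com) ** 2, axis=1)))

# Toy ring of 4 monomers on a unit square: every monomer lies at
# distance sqrt(0.5) from the center, so Rg = sqrt(0.5) ~ 0.7071.
square_ring = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(radius_of_gyration(square_ring))
```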
Abstract:
The operating principle of the laser ion source and trap (LIST), its implementation as a new standard ion source at the ISOLDE research facility at CERN, and its measured specifications, efficiency and selectivity, are presented.

Through the implementation of the LIST at ISOLDE, a lower bound on the suppression of contaminants by the LIST could be determined on-line using radionuclides. The successful suppression of francium contamination made it possible to obtain new measurement data on the mean charge radius and the hyperfine structure of Po-217.

To rank the ionization efficiency of the LIST at ISOLDE against other ion sources, the efficiencies of the existing standard ion source RILIS and of the LIST were measured and compared at the RISIKO mass separator in Mainz. It could be shown that, in its high-efficiency mode, the LIST reaches an ionization efficiency comparable to that of the RILIS. In the mode for producing a high-purity ion beam, the ionization efficiency is reduced relative to the RILIS.

Since determining the selectivity in on-line operation is laborious and time-consuming, the purity of the ion beam was determined and analyzed off-line at the RISIKO mass separator via time-of-flight measurements of the ions. The time structures of the RILIS allow conclusions to be drawn about the purity of the ion beam and, in addition, revealed ionization outside the atomizer, in which the resonant ionization predominantly takes place. This observation explains the efficiency loss during the production of a high-purity ion beam and offers a starting point for further development of the LIST toward higher efficiency.

Transferring the time-structure results from the RILIS to the LIST opens up new possibilities for increasing its selectivity in the mass-selective mode. This was verified with simulations and quantified experimentally with measurements on potassium.
Abstract:
A new overground body-weight support system called ZeroG has been developed that allows patients with severe gait impairments to practice gait and balance activities in a safe, controlled manner. The unloading system is capable of providing up to 300 lb of static support and 150 lb of dynamic (constant-force) support using a custom series-elastic actuator. The unloading system is mounted on a driven trolley that rides along an overhead rail. We evaluated the performance of ZeroG's unloading system, as well as the trolley tracking system, using benchtop and human-subject testing. Average root-mean-square and peak errors in unloading were 2.2 and 7.2 percent, respectively, over the range of forces tested, while trolley tracking errors were less than 3 degrees, indicating the system was able to maintain its position above the subject. We believe training with ZeroG will allow patients to practice activities that are critical to achieving functional independence at home and in the community.
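A minimal sketch of the root-mean-square and peak unloading-error metrics reported above (hypothetical, not the ZeroG evaluation code; the force trace is synthetic):

```python
import numpy as np

def unloading_errors(commanded, measured):
    """Return (RMS error, peak error) of the measured support
    force, each expressed as a percentage of the commanded
    body-weight support force."""
    c = np.asarray(commanded, dtype=float)
    m = np.asarray(measured, dtype=float)
    pct = 100.0 * (m - c) / c
    return np.sqrt(np.mean(pct ** 2)), np.max(np.abs(pct))

# Synthetic trace: 100 lb commanded, small fluctuations measured.
commanded = np.full(5, 100.0)
measured = np.array([101.0, 99.0, 102.0, 98.0, 100.0])
rms_err, peak_err = unloading_errors(commanded, measured)
print(rms_err, peak_err)  # ~1.41 percent RMS, 2.0 percent peak
```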
Abstract:
The CBS-QB3 method was used to calculate the gas-phase free energy difference between 20 phenols and their respective anions, and the CPCM continuum solvation method was applied to calculate the free energies of solvation of the phenols and their anions. The CPCM solvation calculations were performed on both gas-phase and solvent-phase optimized structures. Absolute pKa calculations using the solvent-phase optimized structures in the CPCM step yielded standard deviations and root-mean-square errors of less than 0.4 pKa units. This study is the most accurate absolute determination of the pKa values of phenols, and is among the most accurate such calculations for any group of compounds. The ability to make accurate predictions of pKa values using a coherent, well-defined approach, without external approximations or fitting to experimental data, is of general importance to the chemical community. The solvent-phase optimized structures of the anions are absolutely critical to obtaining this level of accuracy, and they yield a more realistic charge separation between the negatively charged oxygen and the ring system of the phenoxide anions.
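The arithmetic linking a computed aqueous deprotonation free energy to an absolute pKa can be sketched as follows (a generic illustration with a made-up energy, not the paper's data):

```python
import math

R = 1.987204e-3   # gas constant, kcal/(mol*K)
T = 298.15        # temperature, K

def pka_from_free_energy(dG_aq_kcal):
    """Convert an aqueous deprotonation free energy
    HA -> H+ + A- (kcal/mol) to a pKa via dG = RT ln(10) * pKa."""
    return dG_aq_kcal / (R * T * math.log(10))

# Hypothetical example: dG_aq of ~13.64 kcal/mol corresponds to a
# pKa of ~10, roughly the region where phenol falls.
print(pka_from_free_energy(13.64))
```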
Abstract:
The G3, CBS-QB3, and CBS-APNO methods have been used to calculate ΔH and ΔG values for seventeen gas-phase deprotonation reactions whose experimental values are reported to be accurate to within one kcal/mol. For these reactions, the mean absolute deviation of the three methods from experiment is 0.84 to 1.26 kcal/mol, and the root-mean-square deviations for ΔG and ΔH are 1.43 and 1.49 kcal/mol for the CBS-QB3 method, 1.06 and 1.14 kcal/mol for the CBS-APNO method, and 1.16 and 1.28 kcal/mol for the G3 method. The high accuracy of these methods makes them reliable for calculating gas-phase deprotonation reactions and allows them to serve as a valuable check on the accuracy of experimental data reported in the National Institute of Standards and Technology database.
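The two error statistics quoted above can be sketched as follows (with synthetic deprotonation energies, not the paper's values):

```python
import numpy as np

def mad_and_rmsd(calculated, experimental):
    """Mean absolute deviation and root-mean-square deviation of
    calculated values from experiment."""
    d = np.asarray(calculated, dtype=float) - np.asarray(experimental, dtype=float)
    return np.mean(np.abs(d)), np.sqrt(np.mean(d ** 2))

# Hypothetical gas-phase deprotonation free energies (kcal/mol).
calc = np.array([350.1, 341.8, 333.5, 329.0])
expt = np.array([350.0, 342.5, 333.0, 328.0])
mad, rmsd = mad_and_rmsd(calc, expt)
print(mad, rmsd)  # 0.575 and ~0.661 kcal/mol
```

Note that the RMSD weights large outliers more heavily than the MAD, which is why the two statistics are usually reported together.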
Abstract:
The Advanced Very High Resolution Radiometer (AVHRR) carried on board the National Oceanic and Atmospheric Administration (NOAA) and Meteorological Operational (MetOp) polar-orbiting satellites is the only instrument offering more than 25 years of satellite data for analysing aerosols on a daily basis. The present study assessed a modified AVHRR aerosol optical depth (τa) retrieval over land for Europe. The algorithm could also be applied to other parts of the world with surface characteristics similar to Europe's; only the aerosol properties would have to be adapted to the new region. The initial approach used a relationship between Sun photometer measurements from the Aerosol Robotic Network (AERONET) and the satellite data to post-process the retrieved τa. Here a quasi-stand-alone procedure, more suitable for the pre-AERONET era, is presented. In addition, the estimation of surface reflectance, the aerosol model, and other processing steps have been adapted. The method's cross-platform applicability was tested by validating τa from NOAA-17 and NOAA-18 AVHRR at 15 AERONET sites in Central Europe (40.5° N–50° N, 0° E–17° E) from August 2005 to December 2007. Furthermore, the accuracy of the AVHRR retrieval was compared with products from two newer instruments: the Medium Resolution Imaging Spectrometer (MERIS) on board the Environmental Satellite (ENVISAT) and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board Aqua/Terra. In terms of the linear correlation coefficient R, the AVHRR results were similar to those of MERIS, with an even lower root mean square error (RMSE). Not surprisingly, MODIS, with its high spectral coverage, gave the highest R and lowest RMSE. Regarding monthly averaged τa, the results were ambiguous. Focusing on small-scale structures, R was reduced for all sensors, whereas the RMSE increased substantially only for MERIS.
For larger areas like Central Europe, the error statistics were similar to those of the individual match-ups; this was mainly explained by sampling issues. With the successful validation of AVHRR we can now concentrate on our large data archive dating back to 1985, a unique opportunity for both climate and air pollution studies over land surfaces.
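The two validation metrics used throughout the comparison above can be sketched as follows (a generic illustration with made-up match-ups, not AERONET data):

```python
import numpy as np

def validation_stats(retrieved, reference):
    """Linear correlation coefficient R and RMSE between a
    satellite-retrieved quantity and reference measurements."""
    x = np.asarray(retrieved, dtype=float)
    y = np.asarray(reference, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    rmse = np.sqrt(np.mean((x - y) ** 2))
    return r, rmse

# Hypothetical aerosol optical depth match-ups.
satellite = np.array([0.10, 0.22, 0.31, 0.18, 0.45])
sun_photometer = np.array([0.12, 0.20, 0.33, 0.15, 0.40])
r, rmse = validation_stats(satellite, sun_photometer)
print(r, rmse)  # high R, small RMSE for this well-matched set
```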
Abstract:
The purpose of this study is to examine the role of vocational rehabilitation services in contributing to the goals of the National HIV/AIDS Strategy. Three key research questions are addressed: (a) What is the relationship among factors associated with the use of vocational rehabilitation services for people living with HIV/AIDS? (b) Are the factors associated with the use of vocational rehabilitation also associated with access to health care, supplemental employment services, and reduced risk of HIV transmission? (c) What unique role does the use of vocational rehabilitation services play in access to health care and HIV prevention? Survey research methods were used to collect data from a broad sample of volunteer respondents who represented diverse racial (37% Black, 37% White, 18% Latino, 7% other), gender (65% male, 34% female, 1% transgender), and sexual orientation (48% heterosexual, 44% gay, 8% bisexual) backgrounds. The fit of the final structural equation model was good (root mean square error of approximation = .055, Comparative Fit Index = .953, Tucker-Lewis Index = .945). Standardized effects with bootstrap confidence intervals are reported. Overall, the findings support the hypothesis that vocational rehabilitation services can play an important role in the health and prevention strategies outlined in the National HIV/AIDS Strategy.
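The RMSEA fit index reported above is commonly computed from a model's chi-square statistic, degrees of freedom, and sample size; a sketch with hypothetical numbers (not this study's model):

```python
import math

def rmsea(chi_square, df, n):
    """Root mean square error of approximation from a structural
    equation model's chi-square, degrees of freedom, and sample
    size, using the standard population-based formula."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

# Hypothetical fit: chi2 = 180 on df = 120 with N = 301 gives
# RMSEA ~ .041, i.e. a close fit by the usual < .05 rule of thumb.
print(rmsea(180.0, 120, 301))
```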
Abstract:
A number of state-level pharmaceutical assistance programs have been established in response to the growing recognition of the role of pharmaceuticals in the long-term care of the elderly. However, existing research does not provide a coherent expectation for patterns of use by rural and urban elderly. The data for this analysis are drawn from a larger study of the Pennsylvania Pharmaceutical Assistance Contract for the Elderly (PACE), which provides prescription medicines for elderly people who meet income requirements. The research project was designed to assess the characteristics of PACE program participants and non-participants on a wide range of issues. Chi-square analysis and regression models were used to assess the association between rural or urban residence and access to the PACE program. The results indicate that rural/urban status is not a significant predictor of the use of PACE, whereas other traditional variables (e.g., health self-rating and physician visits) did predict differences in patterns of use.
Abstract:
Re-sampling spatially distributed data organized in regular or irregular grids to a finer or coarser resolution is a common task in data processing; the procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned so as to conserve the overall integral, and if the quantity is positive definite, negative sample values should be avoided. The gridding process redistributes the original data set onto a user-requested grid according to a distribution function, which can be determined from the given data by interpolation. In general, accurate interpolation of heavily fluctuating data subject to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data, so the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermite interpolation curve is used to approximate the integrated data set. A single parameter lets the user control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions.
Such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to reduce these interpolation errors significantly. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and the quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
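The integral-preserving idea described above can be sketched with SciPy's monotone Hermite (PCHIP) interpolant applied to the cumulative integral of the data. This is a simplified stand-in for the paper's parametrized scheme (PCHIP has no free overshoot parameter), but it illustrates the two key properties: the total is conserved, and positive-definite data never produce negative bins.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_in, values, edges_out):
    """Re-bin histogrammed data by interpolating the cumulative
    integral with a monotone Hermite (PCHIP) curve and then
    differencing on the new bin edges. The total is conserved,
    and since PCHIP preserves monotonicity, positive data cannot
    yield negative bins."""
    cum = np.concatenate(([0.0], np.cumsum(values)))
    curve = PchipInterpolator(edges_in, cum)
    return np.diff(curve(edges_out))

# Coarse 2-bin histogram re-sampled onto 4 finer bins.
old_edges = np.array([0.0, 1.0, 2.0])
old_vals = np.array([4.0, 8.0])
new_edges = np.linspace(0.0, 2.0, 5)
new_vals = rebin_conservative(old_edges, old_vals, new_edges)
print(new_vals, new_vals.sum())  # bins are non-negative; sum = 12.0
```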
Abstract:
This work studies the complications associated with sonic log prediction in carbonate rocks and investigates possible approaches for accurately predicting sonic logs in the Traverse Limestone. Well logs from fifty wells were analyzed to define the mineralogy of the Traverse Limestone using conventional 4-mineral and 3-mineral identification approaches. We modified the conventional 3-mineral identification approach (which completely neglects the gamma ray response) to correct for shale effects on the basis of the gamma ray log before applying the 3-mineral identification. This modification helped to gain meaningful insight into the data when DGA (dry grain density) was plotted against UMA (photoelectric volumetric cross-section) on the characteristic ternary diagram of quartz, calcite, and dolomite. The results were then compared with the 4-mineral identification approach, and contour maps of the average mineral fractions were prepared to show the basin-wide mineralogy of the Traverse Limestone. In the second part, the sonic response of the Traverse Limestone was predicted in fifty randomly distributed wells. We used a modified time-average equation that accounts for shale effects on the basis of the gamma ray log, and applied it to predict the sonic behavior from density porosity and average porosity. To account for the secondary porosity of dolomite, we subtracted the dolomitic fraction of clean porosity from the total porosity. The pseudo-sonic logs were then compared with the measured sonic logs on a root mean square (RMS) basis. Adding the dolomite correction to the modified time-average equation improved the sonic predictions from neutron porosity and average porosity. The results demonstrate that sonic logs can be predicted in carbonate rocks with a root mean square error of about 4 μs/ft.
We also attempted to use individual mineral components for sonic log prediction, but ambiguities in the mineral fractions and in the sonic properties of the minerals limited the accuracy of the results.
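The time-average relation underlying the sonic prediction can be sketched as follows. This is the classic Wyllie form with a simple shale-volume term of the kind described above; all coefficients are illustrative textbook defaults, not the study's calibrated values:

```python
def sonic_from_porosity(phi, dt_matrix=47.5, dt_fluid=189.0,
                        v_shale=0.0, dt_shale=100.0):
    """Wyllie time-average prediction of sonic transit time
    (us/ft): volume-weighted sum of matrix, fluid, and an optional
    shale term (shale volume estimated from the gamma ray log).
    Default coefficients are illustrative limestone/water values,
    not calibrated to the Traverse Limestone."""
    return ((1.0 - phi - v_shale) * dt_matrix
            + phi * dt_fluid
            + v_shale * dt_shale)

# Clean limestone at 10% porosity.
print(sonic_from_porosity(0.10))  # 61.65 us/ft
```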
Abstract:
The capability to detect combustion in a diesel engine has the potential of being an important control feature for meeting increasingly stringent emission regulations, developing alternative combustion strategies, and using biofuels. In this dissertation, block-mounted accelerometers were investigated as potential feedback sensors for detecting combustion characteristics in a high-speed, high-pressure common rail (HPCR), 1.9 L diesel engine. Accelerometers were positioned in multiple placements and orientations on the engine, and engine testing was conducted under motored, single-injection, and pilot-main injection conditions. Tests were conducted at varying injection timings, engine loads, and engine speeds to observe the resulting time- and frequency-domain changes of the cylinder pressure and accelerometer signals. The frequency content of the cylinder-pressure-based signals and the accelerometer signals between 0.5 kHz and 6 kHz indicated a strong correlation, with coherence values of nearly 1. The accelerometers were used to produce estimated combustion signals using frequency response functions (FRFs) measured between the frequency-domain characteristics of the cylinder pressure signals and the response of the accelerometers attached to the engine block. When compared to the actual combustion signals, the estimated combustion signals produced from the accelerometer response had root mean square errors (RMSE) between 7% and 25% of the actual signal's peak value. Weighting the FRFs from multiple test conditions along the frequency axis with the coherent output power reduced the median RMSE of the estimated combustion signals and the 95th percentile of RMSE produced from each test condition. The RMSEs of the magnitude-based combustion metrics, including peak cylinder pressure, MPG, peak ROHR, and work estimated from the combustion signals produced by the accelerometer responses, were between 15% and 50% of their actual values.
The MPG measured from the estimated pressure gradient shared a direct relationship with the actual MPG. Location-based combustion metrics, such as the locations of peak values and burn durations, were capable of RMSE measurements as low as 0.9°. Overall, the accelerometer-based combustion sensing system was capable of detecting combustion and providing feedback on the in-cylinder combustion process.
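A minimal sketch of the frequency response function estimation described above, using SciPy's cross- and auto-spectral density estimates (the standard H1 estimator). The signals here are synthetic stand-ins, not engine data:

```python
import numpy as np
from scipy.signal import csd, welch

fs = 20000.0  # sampling rate, Hz (illustrative)
rng = np.random.default_rng(1)

# Synthetic "cylinder pressure" input and an "accelerometer"
# response that is a scaled, slightly noisy copy of it.
pressure = rng.normal(size=8192)
accel = 0.5 * pressure + 0.05 * rng.normal(size=8192)

# H1 estimator: FRF = S_xy / S_xx, with x the input signal.
f, s_xy = csd(pressure, accel, fs=fs, nperseg=1024)
_, s_xx = welch(pressure, fs=fs, nperseg=1024)
frf = s_xy / s_xx

# For this simple scaled system, the FRF magnitude should sit
# near 0.5 across the whole band.
print(np.median(np.abs(frf)))
```

Segment averaging (nperseg with the default 50% overlap) is what suppresses the uncorrelated noise in the cross-spectrum, mirroring the coherence-based weighting discussed above.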
Abstract:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and is compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions, each set consisting of 1000 simulated images. The mean-square-error (MSE) performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the MSE performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations, over the three sets of 1000 simulated images for low, moderate, and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions.
The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent, and 27 percent are available using speckle imaging methods with 25 input frames, and of 38 percent, 34 percent, and 33 percent, respectively, for the MFBD method with 150 input frames. The MFBD estimator is also applied to three sets of field data and the results are presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated; this technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
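The MSE figure of merit used throughout the comparison above is simply the per-pixel mean squared difference between the reconstruction and the pristine object; a hypothetical sketch with synthetic images:

```python
import numpy as np

def image_mse(estimate, truth):
    """Mean squared error between a reconstructed image and the
    pristine object, averaged over all pixels."""
    e = np.asarray(estimate, dtype=float)
    t = np.asarray(truth, dtype=float)
    return np.mean((e - t) ** 2)

# Synthetic 'truth' object and a noisy 'observation' of it.
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
noisy = truth + 0.1 * np.random.default_rng(2).normal(size=truth.shape)

base = image_mse(noisy, truth)                      # ~0.01 for sigma = 0.1
improved = image_mse(0.5 * (noisy + truth), truth)  # halving the error amplitude
print(base, improved)
print(100.0 * (1.0 - improved / base))  # percent MSE reduction, ~75
```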
Abstract:
We used the Green's functions from auto-correlations and cross-correlations of seismic ambient noise to monitor temporal velocity changes in the subsurface at Villarrica volcano in the Southern Andes of Chile. Campaigns were conducted from March to October 2010 and from February to April 2011 with 8 broadband and 6 short-period stations, respectively. We prepared the data by removing the instrument response, normalizing with a root-mean-square method, whitening the spectra, and filtering from 1 to 10 Hz; this frequency band was chosen because of the relatively high background noise level in that range. Hour-long auto- and cross-correlations were computed, and the Green's functions were stacked by day and over the total time. To track temporal velocity changes, we stretched a 24-hour moving window of correlation functions from 90% to 110% of the original and cross-correlated it with the total stack. The auto-correlations at all stations detected what is interpreted as an increase in velocity in 2010, with an average increase of 0.13%. Cross-correlations from station V01, near the summit, to the other stations show comparable changes that are also interpreted as velocity increases. We attribute this change to the closing of cracks in the subsurface, due either to seasonal snow loading or to regional tectonics. In addition to the common increase in velocity across the stations, there are excursions in velocity of the same order lasting several days. Their amplitude decreases as the station's distance from the vent increases, suggesting these excursions may be attributed to changes within the volcanic edifice; in at least two occurrences the amplitudes at stations V06 and V07, the stations farthest from the vent, are smaller. Similar short temporal excursions were seen in the auto-correlations from 2011; however, there was little to no increase in the overall velocity.
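The stretching measurement described above can be sketched as follows (a generic illustration of the technique, not the study's processing code): each candidate stretch factor resamples the moving-window correlation onto a dilated time axis, and the factor that maximizes the correlation with the reference stack estimates the relative velocity change.

```python
import numpy as np

def stretch_measurement(reference, current, t, factors):
    """For each stretch factor, resample `current` onto a dilated
    time axis and correlate with `reference`; return the factor
    with the highest correlation coefficient. best - 1 then gives
    the magnitude of the relative velocity change."""
    best, best_cc = 1.0, -np.inf
    for a in factors:
        stretched = np.interp(t, a * t, current)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best, best_cc = a, cc
    return best, best_cc

# Synthetic decaying coda as the reference correlation function.
t = np.linspace(0.0, 10.0, 2001)
reference = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
# Simulate a 0.5% velocity increase: arrivals come 0.5% earlier.
current = np.interp(t, t / 1.005, reference)

# The study scans 90%-110%; a narrower grid suffices here.
factors = np.linspace(0.99, 1.01, 201)
best, cc = stretch_measurement(reference, current, t, factors)
print(best)  # close to 1.005, recovering the imposed change
```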