946 results for Mean square analysis


Relevance: 80.00%

Abstract:

Based on theoretical considerations, an explanation for the temperature dependence of the thermal expansion and the bulk modulus is proposed. A new equation of state (EoS) is also derived, and a physical explanation for the latent heat of fusion is presented. These theoretical predictions are tested against experiments on highly symmetrical monatomic structures.

The volume is not an independent variable and must be broken down into its fundamental components when its relationships to pressure and temperature are defined. Using a zero-pressure, zero-temperature reference frame, the initial parameters are defined: the volume at zero pressure and temperature [V°], the bulk modulus at zero temperature [K°], and the volume coefficient of thermal expansion at zero pressure [α°].

The newly derived EoS is tested against experiments on perovskite and epsilon iron. The root-mean-square deviations (RMSD) of the residuals of the molar volume, pressure, and temperature are within the range of the experimental uncertainties.

Separating the experiments into 200 K ranges, the new EoS was compared to the most widely used finite-strain, interatomic-potential, and empirical isothermal EoSs: the Birch-Murnaghan, the Vinet, and the Roy-Roy, respectively. Correlation coefficients, RMSDs of the residuals, and the Akaike information criterion were used to evaluate the fits. Based on these fitting parameters, the new p-V-T EoS is superior in every temperature range to the investigated conventional isothermal EoSs.

The new EoS for epsilon iron reproduces the Preliminary Reference Earth Model (PREM) densities at 6100-7400 K, indicating that the presence of light elements might not be necessary to explain the Earth's inner core densities.

It is suggested that the latent heat of fusion supplies the energy required to overcome the viscous drag resistance of the atoms. The calculated energies for melts formed from highly symmetrical packing arrangements correlate very well with experimentally determined latent heat values.

The optical investigation of carbonado diamond is also part of the dissertation. The first complete infrared (FTIR) absorption spectra collected for carbonado diamond confirm the interstellar origin of the most enigmatic diamonds, known as carbonado.
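For context, the standard thermodynamic definitions behind two of the initial parameters named above (these are textbook relations, not the dissertation's new EoS, which the abstract does not reproduce):

% alpha° and K° are the standard response functions evaluated in the
% zero-pressure / zero-temperature reference frame; V° is the volume there.
\alpha^{\circ} = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P}\bigg|_{P=0},
\qquad
K^{\circ} = -V\left(\frac{\partial P}{\partial V}\right)_{T}\bigg|_{T=0}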

Relevance: 80.00%

Abstract:

This dissertation aims to improve the performance of existing assignment-based dynamic origin-destination (O-D) matrix estimation models so that Intelligent Transportation Systems (ITS) strategies can be successfully applied for traffic congestion relief and dynamic traffic assignment (DTA) in transportation network modeling. The methodology framework has two advantages over existing assignment-based dynamic O-D matrix estimation models. First, it incorporates an initial O-D estimation model into the estimation process to provide a high-confidence initial input for the dynamic O-D estimation model, which has the potential to improve the final estimation results and reduce the associated computation time. Second, the proposed framework can automatically convert traffic volume deviation to traffic density deviation in the objective function under congested traffic conditions. Traffic density is a better indicator of traffic demand than traffic volume under congested conditions, so the conversion can improve estimation performance. The proposed method performs better than a typical assignment-based estimation model (Zhou et al., 2003) in several case studies. In the case study for I-95 in Miami-Dade County, Florida, the proposed method produces a good result in seven iterations, with a root mean square percentage error (RMSPE) of 0.010 for traffic volume and an RMSPE of 0.283 for speed. In contrast, Zhou's model requires 50 iterations to obtain an RMSPE of 0.023 for volume and an RMSPE of 0.285 for speed. In the case study for Jacksonville, Florida, the proposed method reaches a convergent solution in 16 iterations with an RMSPE of 0.045 for volume and an RMSPE of 0.110 for speed, while Zhou's model needs 10 iterations to obtain its best solution, with an RMSPE of 0.168 for volume and an RMSPE of 0.179 for speed. The successful application of the proposed framework to real road networks demonstrates its ability to provide results with satisfactory accuracy within a reasonable time, establishing its potential usefulness for dynamic traffic assignment modeling, ITS strategies, and other applications.
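As an illustration of the error metric quoted in this abstract, here is a minimal sketch of one common convention for the RMSPE (the dissertation does not spell out its exact normalization, and the link volumes below are hypothetical):

import numpy as np

def rmspe(observed, estimated):
    """Root mean square percentage error: RMS of the relative errors.

    One common convention; the dissertation may normalize differently.
    """
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    rel_err = (estimated - observed) / observed  # elementwise relative error
    return np.sqrt(np.mean(rel_err ** 2))

# Hypothetical link volumes (vehicles/hour): observed vs. model-estimated.
obs = [1200.0, 950.0, 1810.0, 640.0]
est = [1185.0, 962.0, 1790.0, 655.0]
print(rmspe(obs, est))  # small values indicate a close fit to the observations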

Relevance: 80.00%

Abstract:

This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks use the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero, and consequently achieves many improvements in the training methods. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors must be considered, listed here from highest to lowest priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for recognizing characters of different cases. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from the conventional adaptive segmentation methods. Dictionary-based correction is utilized to fix mistakes arising in the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower case characters and 97% for upper case characters. The test database consists of 20,000 handwritten characters, 10,000 for each case; recognizing 10,000 handwritten characters required 8.5 seconds of processing time.
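A minimal sketch contrasting the standard loss with the mean quartic error the dissertation proposes (function names and toy data are illustrative, not from the dissertation):

import numpy as np

def mean_square_error(y_true, y_pred):
    return np.mean((y_pred - y_true) ** 2)

def mean_quartic_error(y_true, y_pred):
    # Fourth-power loss: unlike the square, its third and fourth
    # derivatives with respect to the error are non-zero, which is the
    # property the dissertation exploits in its training methods.
    return np.mean((y_pred - y_true) ** 4)

err = np.array([0.1, -0.3, 0.05])
print(mean_square_error(np.zeros(3), err))   # ~0.0342
print(mean_quartic_error(np.zeros(3), err))  # ~0.0027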

Relevance: 80.00%

Abstract:

Interferometric synthetic aperture radar (InSAR) techniques can successfully detect phase variations related to water level changes in wetlands and produce spatially detailed, high-resolution maps of those changes. Despite this wealth of detail, the usefulness of wetland InSAR observations is rather limited, because hydrologists and water resources managers need information on absolute water levels rather than relative water level changes. We present an InSAR technique called Small Temporal Baseline Subset (STBAS) for monitoring absolute water level time series using radar interferograms acquired successively over wetlands. The method uses stage (water level) observations to calibrate the relative InSAR observations and tie them to the stage stations' vertical datum. We tested the STBAS technique with two years of Radarsat-1 data acquired during 2006–2008 over Water Conservation Area 1 (WCA1) in the Everglades wetlands, south Florida (USA). The InSAR-derived water level data were calibrated using 13 stage stations located in the study area to generate 28 successive high-spatial-resolution maps (50 m pixel resolution) of absolute water levels. We evaluated the quality of the STBAS technique using the root mean square error (RMSE) of the differences between InSAR observations and stage measurements. The average RMSE is 6.6 cm, which provides an uncertainty estimate for the technique's ability to monitor absolute water levels. About half of this uncertainty is attributed to the accuracy of the InSAR technique in detecting relative water levels; the other half reflects uncertainties introduced by tying the relative levels to the stage stations' datum.

Relevance: 80.00%

Abstract:

Florida Bay is a highly dynamic estuary that exhibits wide natural fluctuations in salinity due to changes in the balance of precipitation, evaporation, and freshwater runoff from the mainland. Rapid and large-scale modification of freshwater flow and construction of transportation conduits throughout the Florida Keys during the late nineteenth and twentieth centuries reshaped water circulation and salinity patterns across the ecosystem. To determine long-term patterns of salinity variation across the Florida Bay estuary, we used a diatom-based salinity transfer function (root mean square error of prediction of 3.27 ppt) to infer salinity from diatom assemblages in four ~130-year-old sediment records. Sites were distributed along a gradient of exposure to anthropogenic shifts in the watershed and salinity. Precipitation was found to be the primary driver of salinity fluctuations over the entire record, but watershed modifications on the mainland and in the Florida Keys during the late 1800s and 1900s were the most likely cause of significant shifts in baseline salinity. The timing of these baseline shifts varies across the Bay: that of the northeastern coring location coincides with the construction of the Florida Overseas Railway (AD 1906–1916), while that of the east-central coring location coincides with the drainage of Lake Okeechobee (AD 1881–1894). Subsequent decreases occurring after the 1960s (east-central region) and early 1980s (southwestern region) correspond to increases in freshwater delivered through water control structures in the 1950s–1970s and again in the 1980s. Concomitant increases in salinity in the northeastern and south-central regions of the Bay in the mid-1960s correspond to an extensive drought period and the occurrence of three major hurricanes, while the drop in the early 1970s could not be related to any natural event. This paper provides information about the major factors influencing salinity conditions in Florida Bay in the past, along with quantitative estimates of the pre- and post-modification salinity levels in different regions of the Bay. This information should be useful for environmental managers in setting restoration goals for the marine ecosystems of South Florida, especially Florida Bay.

Relevance: 80.00%

Abstract:

The purpose of this study was to explore the impact of the Florida state-mandated Basic Skills Exit Tests (BSET) on the effectiveness of remedial instruction programs in adequately serving the academically underprepared student population. The primary research question was whether the introduction of the BSET has resulted in remedial completers who are better prepared for college-level coursework.

This study used an ex post facto research design to examine the impact of the BSET on student readiness for subsequent college-level coursework at Miami-Dade Community College (M-DCC). Two-way analysis of variance was used to compare the performance of remedial and college-ready students before and after the introduction of the BSET requirement. Chi-square analysis was used to explore changes in the proportion of students completing and passing remedial courses. Finally, correlation analysis was used to explore the utility of the BSET in predicting subsequent college-level course performance. Differences based on subject area and race/ethnicity were explored.

The introduction of the BSET did not improve the performance of remedial completers in subsequent college-level courses in any of the subject areas. The BSET had a negative impact on the success rate of students in remedial reading and mathematics courses, and there was a significant decrease in minority students' likelihood of passing remedial reading and mathematics courses after the BSET was introduced. The reliability of the BSET is unacceptably low for all subject areas, based on estimates derived from administrations at M-DCC. Nevertheless, there was a significant positive relationship between BSET score and grade point average in subsequent college-level courses. This relationship varied by subject area and ethnicity, with the BSET reading score having no relationship with subsequent course performance for Black non-Hispanic students.

In sum, the BSET had no discernible positive effect on remedial students' performance in subsequent college-level courses; it has not enhanced the effectiveness of the remedial programs in preparing students for later coursework at M-DCC, and it had a negative impact on the progress and success of students in remedial reading and mathematics.

Relevance: 80.00%

Abstract:

Colleges base their admission decisions on a number of factors to determine which applicants have the potential to succeed. This study utilized data for students who graduated from Florida International University between 2006 and 2012. Two models were developed (one using SAT and the other using ACT as the principal explanatory variable) to predict college success, measured by the student's college grade point average at graduation. Other factors used in these predictions included high school performance, socioeconomic status, major, gender, and ethnicity. The model using ACT had a higher R^2, but the model using SAT had a lower mean square error. African Americans had a significantly lower college grade point average than graduates of other ethnicities, and females had a significantly higher college grade point average than males.
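A minimal sketch of the two-model comparison, assuming synthetic data and a single predictor per model (the study's actual models also include high school performance, socioeconomic status, major, gender, and ethnicity):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardized test scores and graduation GPAs.
sat = rng.normal(1100, 150, n)
act = rng.normal(24, 4, n)
gpa = 2.0 + 0.001 * sat + 0.02 * act + rng.normal(0, 0.3, n)

for name, x in [("SAT", sat), ("ACT", act)]:
    model = LinearRegression().fit(x.reshape(-1, 1), gpa)
    pred = model.predict(x.reshape(-1, 1))
    # The study compared its models on exactly these two criteria.
    print(name, "R^2:", r2_score(gpa, pred), "MSE:", mean_squared_error(gpa, pred))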

Relevance: 80.00%

Abstract:

This study examined the association of theoretically guided and empirically identified psychosocial variables with the co-occurrence of risky sexual behavior and alcohol consumption among university students. The study utilized event analysis to determine whether risky sex occurred during the same event in which alcohol was consumed. Relevant conceptualizations included alcohol disinhibition, self-efficacy, and social network theories. Predictor variables included negative condom attitudes, general risk taking, drinking motives, mistrust, social group membership, and gender. Factor analysis was employed to identify dimensions of drinking motives. The measured risky sex behaviors were (a) sex without a condom, (b) sex with people not known very well, (c) sex with injecting drug users (IDUs), (d) sex with people without knowing whether they had an STD, and (e) sex while using drugs. A purposive sample of 222 male and female students was recruited from a major urban university. Chi-square analysis was used to determine whether participants were more likely to engage in risky sex in different alcohol use contexts: only when drinking, only when not drinking, or regardless of drinking. The chi-square findings did not support the hypothesis that university students who combine alcohol with sex engage in riskier sex. These results added to the literature by extending similar findings to a university student sample. For each of the observed risky sex behaviors, discriminant analysis was used to determine whether the predictor variables could differentiate the drinking contexts, or whether the behavior occurred. Results from the discriminant analyses indicated that sex with people not known very well was the only behavior for which there were significant discriminant functions; gender and enhancement drinking motives were important constructs in the classification model. Limitations of the study and implications for future research, social work practice, and policy are discussed.

Relevance: 80.00%

Abstract:

This thesis describes the development of an adaptive control algorithm for Computerized Numerical Control (CNC) machines, implemented on a multi-axis motion control board based on the TMS320C31 DSP chip. The adaptive process involves two stages: plant modeling and inverse control application. The first stage builds a non-recursive model of the CNC system (the plant) using the Least-Mean-Square (LMS) algorithm. The second stage defines a recursive structure (the controller) that implements an inverse model of the plant, using the coefficients of the model in an algorithm called Forward-Time Calculation (FTC). In this way, when the inverse controller is placed in series with the plant, it pre-compensates for the modification that the original plant introduces into the input signal. The performance of this solution was verified at three different levels: software simulation, implementation on a set of isolated motor-encoder pairs, and implementation on a real CNC machine. The adaptive inverse controller effectively improved the step response of the system at all three levels. In the simulation, an ideal response was obtained. In the motor-encoder tests, the rise time was reduced by as much as 80%, without overshoot, in some cases. Even with the larger mass of the actual CNC machine, a decrease in rise time and elimination of overshoot were obtained in most cases. These results lead to the conclusion that the adaptive inverse controller is a viable approach to position control in CNC machinery.
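A minimal sketch of the first stage, identifying a non-recursive (FIR) plant model with the LMS algorithm; the tap count, step size, and synthetic plant below are assumptions, since the abstract gives no implementation details:

import numpy as np

def lms_identify(x, d, n_taps=8, mu=0.01):
    """Identify a non-recursive (FIR) plant model with the LMS algorithm.

    x: input signal fed to the plant; d: measured plant output.
    Returns the adapted FIR coefficients.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # x[n], x[n-1], ..., newest first
        e = d[n] - w @ u                     # prediction error
        w += 2 * mu * e * u                  # LMS coefficient update
    return w

# Hypothetical plant: a short FIR response plus measurement noise.
rng = np.random.default_rng(1)
true_h = np.array([0.5, 0.3, 0.15, 0.05])
x = rng.normal(size=5000)
d = np.convolve(x, true_h)[: len(x)] + 0.01 * rng.normal(size=len(x))
print(lms_identify(x, d, n_taps=4))  # should approach true_h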


Relevance: 80.00%

Abstract:

Traffic incidents are a major source of traffic congestion on freeways. Freeway traffic diversion using pre-planned alternate routes has been used as a strategy to reduce traffic delays due to major traffic incidents. However, it is not always beneficial to divert traffic when an incident occurs: route diversion may adversely impact traffic on the alternate routes and may not result in an overall benefit. This dissertation research applies Artificial Neural Network (ANN) and Support Vector Regression (SVR) techniques to predict the percent of delay reduction from route diversion, to help determine whether traffic should be diverted under given conditions. The DYNASMART-P mesoscopic traffic simulation model was applied to generate the simulated data used to develop the ANN and SVR models. A sample network that comes with the DYNASMART-P package was used as the base simulation network. Combinations of different levels of incident duration, capacity loss, percent of drivers diverted, VMS (variable message sign) messaging duration, and network congestion were simulated to represent different incident scenarios. The resulting percent of delay reduction, average speed, and queue length from each scenario were extracted from the simulation output. The ANN and SVR models were then calibrated for percent of delay reduction as a function of all of the simulated input and output variables. The results show that both calibrated models, when applied to the location used to generate the calibration data, were able to predict delay reduction with relatively high accuracy in terms of mean square error (MSE) and regression correlation. The performance of the ANN model was superior to that of the SVR model. Likewise, when the models were applied to a new location, only the ANN model produced comparatively good delay reduction predictions under high network congestion levels.
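A minimal sketch of calibrating and comparing the two model types on MSE (the scenario features and synthetic target below are hypothetical stand-ins for the DYNASMART-P variables listed above):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
# Hypothetical scenario inputs: incident duration, capacity loss,
# percent diverted, VMS duration, congestion level (all scaled 0-1).
X = rng.uniform(0, 1, (400, 5))
# Hypothetical target: percent of delay reduction from diversion.
y = 40 * X[:, 2] * (1 - X[:, 1]) + 5 * X[:, 4] + rng.normal(0, 1, 400)

for model in (MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
              SVR(kernel="rbf", C=10.0)):
    model.fit(X[:300], y[:300])
    mse = mean_squared_error(y[300:], model.predict(X[300:]))
    print(type(model).__name__, "test MSE:", round(mse, 2))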

Relevance: 80.00%

Abstract:

We present an improved database of planktonic foraminiferal census counts from the Southern Hemisphere Oceans (SHO) from 15°S to 64°S, combining three existing databases. Using this SHO database, we investigated dissolution biases that might affect faunal census counts. We suggest a depth/Δ[CO3^2-] threshold of ~3800 m and Δ[CO3^2-] ≈ -10 to -5 µmol/kg for the Pacific and Indian Oceans, and ~4000 m and Δ[CO3^2-] ≈ 0 to 10 µmol/kg for the Atlantic Ocean, beyond which core-top assemblages can be affected by dissolution and are less reliable for paleo sea surface temperature (SST) reconstructions. We removed all core-tops beyond these thresholds from the SHO database. The resulting database has 598 core-tops and is able to reconstruct past SST variations from 2°C to 25.5°C, with a root mean square error of 1.00°C for annual temperatures. To inspect how dissolution affects the quality of SST reconstructions, we subjected the database to two "leave-one-out" tests, with and without the deep core-tops. We used this database to reconstruct summer SST (SSST) over the last 20 ka on the Southeast Pacific core MD07-3100 using the Modern Analog Technique (MAT), and compared the result to the SSSTs reconstructed with the three databases from which the SHO database was compiled. The reconstruction using the SHO database is the most reliable, as its dissimilarity values are the lowest. The key point here is the importance of a bias-free, geographically rich database. We leave this dataset open to future additions; new core-tops must be carefully selected, their chronological frameworks established, and evidence of dissolution assessed.
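A minimal sketch of a Modern Analog Technique estimate, assuming squared chord distance as the dissimilarity measure and a plain average over the k closest analogs; both are common choices, but the paper's exact settings are not given in the abstract:

import numpy as np

def mat_sst(core_top_counts, core_top_sst, fossil_counts, k=5):
    """Estimate SST for a fossil sample from its k nearest modern analogs."""
    # Convert census counts to relative abundances.
    modern = core_top_counts / core_top_counts.sum(axis=1, keepdims=True)
    fossil = fossil_counts / fossil_counts.sum()
    # Squared chord distance between the fossil sample and each core-top.
    d = ((np.sqrt(modern) - np.sqrt(fossil)) ** 2).sum(axis=1)
    nearest = np.argsort(d)[:k]
    return core_top_sst[nearest].mean(), d[nearest]  # estimate + dissimilarities

# Hypothetical data: 598 core-tops, 30 foraminiferal taxa.
rng = np.random.default_rng(3)
counts = rng.integers(0, 50, (598, 30)) + 1
ssts = rng.uniform(2.0, 25.5, 598)
est, dissim = mat_sst(counts, ssts, counts[0])
print(est, dissim)  # low dissimilarities signal trustworthy analogs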

Relevance: 80.00%

Abstract:

Background: Inspiratory muscle training (IMT) has been considered an option for reversing or preventing decreases in respiratory muscle strength; however, little is known about the adaptations of these muscles arising from training against a load. Objectives: To investigate the effect of IMT on diaphragmatic muscle strength and on the neural and structural adaptation of the diaphragm in sedentary young people; to compare the effects of low-intensity and moderate-intensity IMT on the thickness, mobility, and electrical activity of the diaphragm and on inspiratory muscle strength; and to establish a protocol for a systematic review evaluating the effects of respiratory muscle training in children and adults with neuromuscular diseases. Materials and Methods: A randomized, double-blind, parallel-group, controlled trial with a sample of 28 healthy, sedentary young people of both sexes, divided into two groups: 14 in the low-load training group (G10%) and 14 in the moderate-load training group (G55%). The volunteers followed a 9-week home IMT protocol with the POWERbreathe® device. G55% trained at 55% of maximal inspiratory pressure (MIP), while G10% used a load of 10% of MIP. Training was conducted in sessions of 30 repetitions, twice a day, six days per week. Every two weeks, MIP was re-evaluated and the load adjusted. Volunteers underwent ultrasound, surface electromyography, spirometry, and manometry before and after IMT. Data were analyzed with SPSS 20.0. Student's t-tests for paired samples were performed to compare diaphragmatic thickness, MIP, and MEP before and after the IMT protocol, and Wilcoxon tests to compare RMS (root mean square) and median frequency (MedF) values before and after the protocol. Student's t-tests for independent samples were then performed to compare diaphragm mobility and thickness, MIP, and MEP between the two groups, and Mann-Whitney tests to compare RMS and MedF values between the two groups. In parallel with the experimental study, we developed a protocol, with support from the Cochrane Collaboration, on IMT in people with neuromuscular diseases. Results: Inspiratory muscle strength increased in both groups (P < 0.05), as did expiratory strength in G10% (P = 0.009); RMS and relaxed-muscle thickness increased in G55% (P = 0.005; P = 0.026); and there was no change in MedF (P > 0.05). The comparison between the two groups showed a difference in RMS (P = 0.04) and no difference in diaphragm thickness, diaphragm mobility, or respiratory muscle strength. Conclusions: Increased neural activity and diaphragmatic structure, with a consequent increase in respiratory muscle strength, were identified after IMT with a moderate load. IMT with a load of 10% of MIP cannot be considered a placebo dose, as it increases inspiratory muscle strength; IMT at moderate intensity is able to enhance the recruitment of diaphragm muscle fibers and promote their hypertrophy. The protocol for the systematic review was published in The Cochrane Library.

Relevance: 80.00%

Abstract:

This thesis begins by studying the thickness of evaporative spin-coated colloidal crystals and demonstrates how the thickness varies as a function of suspension concentration and spin rate. In particular, the films are thicker at higher suspension concentrations and lower spin rates. This study also provides evidence for the reproducibility of spin coating in terms of the thickness of the resulting colloidal films. These colloidal films, as well as those obtained from various other methods such as convective assembly and dip coating, usually possess a crystalline structure. Because a comprehensive method for characterizing order in colloidal structures was lacking, a procedure is developed for such characterization in terms of local and longer-range translational and orientational order. Translational measures turn out to be adequate for characterizing small deviations from perfect order, while orientational measures are more informative for polycrystalline and highly disordered crystals. Finally, to understand the relationship between dynamics and structure, the dynamics of colloids in a quasi-2D suspension is studied as a function of packing fraction, using the mean square displacement (MSD) and the self part of the van Hove function. A slowdown of the dynamics is observed as the packing fraction increases, accompanied by the emergence of 6-fold symmetry within the system. The dynamics turns out to be non-Gaussian at early times and Gaussian at later times for packing fractions below 0.6; above this packing fraction, the dynamics is non-Gaussian at all times. The diffusion coefficient, calculated from the MSD and the van Hove function, decreases as the packing fraction increases.
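A minimal sketch of the time-averaged mean square displacement for a 2D trajectory (the array layout and the toy Brownian walk are illustrative assumptions):

import numpy as np

def mean_square_displacement(traj):
    """Time-averaged MSD for a 2D trajectory of shape (n_frames, 2)."""
    n = len(traj)
    msd = np.empty(n - 1)
    for lag in range(1, n):
        disp = traj[lag:] - traj[:-lag]           # displacements at this lag
        msd[lag - 1] = np.mean((disp ** 2).sum(axis=1))
    return msd

# Toy quasi-2D Brownian trajectory; for diffusive motion MSD ~ 4*D*t in 2D.
rng = np.random.default_rng(4)
traj = np.cumsum(rng.normal(0, 0.1, (2000, 2)), axis=0)
msd = mean_square_displacement(traj)
D = msd[0] / 4.0  # short-lag estimate of the diffusion coefficient (dt = 1)
print(D)          # ~0.005 for per-axis step std 0.1 (2*D*dt = variance per axis)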

Relevance: 80.00%

Abstract:

In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. First, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. This method employs the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and a tsunami-induced wind perturbation model. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement; a good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the simulation results that are free of tsunami from those with a tsunami present, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed.

Second, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominant sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and the tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients. Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived with knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is treated as an error map introduced during the overall retrieval stage and is used to keep such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure judges, through a fitting process, whether the SSHAs are truly tsunami-induced, which makes it possible to decrease the false alarm rate. After this step, tsunami parameter estimation proceeds based upon the fitted results of the tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity and is believed to be more suitable in real-world scenarios. The tsunami-dominated sea surface DDM simulation, tsunami detection precept, and parameter estimation described above have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
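For reference, the clean-surface Cox and Munk (1954) relation that underlies the mean square slope model named above (the thesis's tsunami-induced wind perturbation, which modulates this, is not reproduced in the abstract):

% Total mean square slope of a clean sea surface (Cox & Munk, 1954):
\sigma^{2} \approx 0.003 + 5.12\times10^{-3}\,U_{12.5} \qquad (\pm\,0.004)

where \sigma^{2} is the total mean square slope and U_{12.5} is the wind speed (m/s) measured 12.5 m above the sea surface.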