963 results for Link variables method
Abstract:
For wireless power transfer (WPT) systems, communication between the primary side and the pickup side is a challenge because of the large air gap and magnetic interference. A novel method that integrates bidirectional data communication into a high-power WPT system is proposed in this paper. Power and data transfer share the same inductive link between coreless coils. A power/data frequency division multiplexing technique is applied, so power and data are transmitted on different frequency carriers and controlled independently. A circuit model of the multiband system is provided to analyze the transmission gain of the communication channel as well as the power delivery performance. The crosstalk interference between the two carriers is discussed. In addition, the signal-to-noise ratios of the channels are estimated, which provides a guideline for the design of the modulation/demodulation circuits. Finally, a 500-W WPT prototype has been built to demonstrate the effectiveness of the proposed WPT system.
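The frequency separation of the power and data carriers on a shared inductive link can be illustrated with a minimal circuit sketch. The series-series compensated two-coil model and all component values below are illustrative assumptions, not parameters from the paper; the two frequencies merely stand in for the power and data bands.

```python
import numpy as np

def link_gain(freq_hz, L1=120e-6, L2=120e-6, k=0.2,
              C1=2.1e-7, C2=2.1e-7, R1=0.2, R2=0.2, RL=10.0):
    """|Vload/Vsource| of a series-series compensated two-coil link.

    All component values are illustrative placeholders, not taken from the paper.
    """
    w = 2 * np.pi * freq_hz
    M = k * np.sqrt(L1 * L2)                              # mutual inductance
    Z1 = R1 + 1j * w * L1 + 1 / (1j * w * C1)             # primary loop impedance
    Z2 = R2 + RL + 1j * w * L2 + 1 / (1j * w * C2)        # secondary loop impedance
    # Coupled-loop solution: I2 = jwM * Vs / (Z1*Z2 + (wM)^2), Vload = I2 * RL
    return np.abs(1j * w * M * RL / (Z1 * Z2 + (w * M) ** 2))

if __name__ == "__main__":
    for f in (31.6e3, 200e3):   # stand-ins for the power and data carriers
        print(f"{f/1e3:6.1f} kHz carrier: |Vout/Vin| = {link_gain(f):.3f}")
```

With these placeholder values the link is resonant near the lower carrier and strongly attenuates the higher one, which is the kind of frequency-selective behavior that lets the two channels be controlled independently.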
Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of the independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation. In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods consider the correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space; in other words, model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations farther away. The study area was Broward County, Florida, which lies on the Atlantic coast between Palm Beach and Miami-Dade counties. A total of 67 variables were considered as potential AADT predictors, and six variables (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models. To investigate the predictive power of the various AADT predictors over space, statistics including local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in the relationships among parameters were investigated, measured, and mapped to assess the usefulness of the GWR methods. The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and the predictors, which cannot be modeled in ordinary linear regression.
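A local GWR fit is essentially weighted least squares with a distance-decay kernel centered on the prediction point. The sketch below shows that idea; the coordinates, bandwidth, and predictors are hypothetical stand-ins, not data or coefficients from the Broward County study.

```python
import numpy as np

def gwr_fit_at(point, coords, X, y, bandwidth):
    """Local GWR coefficients at `point` via Gaussian-kernel weighted least squares."""
    d = np.linalg.norm(coords - point, axis=1)            # distances to observations
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))          # Gaussian spatial kernel
    Xd = np.column_stack([np.ones(len(X)), X])            # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)   # (X'WX)^-1 X'Wy
    return beta

# Hypothetical example: AADT regressed on lanes and speed at 50 count stations.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))
lanes = rng.integers(2, 7, size=50)
speed = rng.uniform(30, 65, size=50)
X = np.column_stack([lanes, speed])
aadt = 4000 * lanes + 150 * speed + rng.normal(0, 2000, 50)

beta_local = gwr_fit_at(np.array([5.0, 5.0]), coords, X, aadt, bandwidth=3.0)
print("local intercept and coefficients:", beta_local)
```

Repeating the fit at every location yields the location-specific parameter estimates and local r-square values that the study examined and mapped.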
Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his or her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method. To provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems, and the data collected were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method produced a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected in the human subject tests. Further analysis suggests that, even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g., pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
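In broad terms, pre-compensation amounts to inverse-filtering the displayed image with the eye's point spread function (PSF) so the aberration undoes the filter. The sketch below uses an illustrative Gaussian blur as the PSF and a regularized Wiener-style inverse; the dissertation's actual method is built on measured wavefront aberrations and physiological constraints, which are not reproduced here.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Illustrative stand-in PSF; a real system would derive it from wavefront data."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def precompensate(image, psf, reg=1e-2):
    """Wiener-style inverse filter: boost frequencies the aberration will attenuate."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    H = np.conj(otf) / (np.abs(otf) ** 2 + reg)
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    return np.clip(pre, 0.0, 1.0)        # display dynamic range is limited

# Toy demo: pre-compensate a random test pattern for a Gaussian "aberration".
img = np.random.default_rng(1).random((128, 128))
psf = gaussian_psf(img.shape, sigma=2.0)
displayed = precompensate(img, psf)
perceived = np.real(np.fft.ifft2(np.fft.fft2(displayed) *
                                 np.fft.fft2(np.fft.ifftshift(psf))))
print("RMS error with pre-compensation:", np.sqrt(np.mean((perceived - img) ** 2)))
```

The clipping step hints at one of the shortcomings the dissertation addresses: a display cannot reproduce the full dynamic range the ideal inverse filter demands.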
Abstract:
Because past research has shown faculty to be the driving force affecting students' academic library use, librarians have tried for decades to engage classroom faculty in library activities. Nevertheless, a low rate of library use by faculty on behalf of their students persists. This study investigated the organizational culture dimensions affecting faculty demand for the library at a community college. The study employed a sequential quantitative-qualitative research design. A random sample of full-time faculty at a large urban community college responded to a 46-item survey. The survey data showed strong espoused support (84%) for the use of library-based materials but a much lower incidence of putting this construct into practice (46%). Interviews were conducted with 11 full-time faculty from two academic groups, English-Humanities and Engineering-Math-Science. These groups were selected because the survey data revealed statistically significant differences between them on several key variables: the professors' perceptions of the importance of library research in their discipline, the amount of time spent on the course textbook during a term, the frequency of conversations about the library in the academic department, and the professors' ratings of the librarians' skill in instruction related to the academic discipline. All interviewees described the student culture as the predominant organizational culture at Major College. Although most interview subjects held to high information literacy standards in their courses, others were less convinced these could be realistically practiced, citing perceptions of students' poor academic skills, students' lack of time to complete assignments because of commuter and family responsibilities, and the need to focus on textbook content. Recommended future research would investigate methods to bridge the gap between the high espoused value placed on information literacy and the implementation of information-literate coursework.
Abstract:
Numerical optimization is a technique in which a computer is used to explore design parameter combinations to find extremes in performance factors. In multi-objective optimization, several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation and automatically switches among its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were optimized simultaneously: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time, Modified Newton Impact Theory (MNIT) was used to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag. The studies show the difference in the optimized penetrator shapes when viscosity is absent and when it is present in the optimization. In modern optimization problems, objective function evaluations may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function: once enough data about the objective function's behavior have been collected, the response surface can represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions involving from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
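The Pareto frontier concept can be illustrated with a simple non-dominated filter. The candidate designs below are random stand-ins for (drag, volume) pairs; drag is minimized and volume maximized, so volume is negated to put both objectives in minimization form. This is only the dominance test that any Pareto-based optimizer relies on, not the MOHO algorithm itself.

```python
import numpy as np

def pareto_frontier(objectives):
    """Return indices of non-dominated points (all objectives to be minimized)."""
    keep = []
    n = len(objectives)
    for i in range(n):
        dominated = False
        for j in range(n):
            if j != i and np.all(objectives[j] <= objectives[i]) \
                      and np.any(objectives[j] < objectives[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidate penetrator designs: (drag, volume).
rng = np.random.default_rng(2)
drag = rng.uniform(0.1, 1.0, 200)
volume = rng.uniform(0.5, 2.0, 200)
objs = np.column_stack([drag, -volume])     # maximize volume -> minimize -volume
front = pareto_frontier(objs)
print(f"{len(front)} of 200 candidate designs are Pareto-optimal")
```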
Abstract:
This study examined the predictive merits of selected cognitive and noncognitive variables on national Registry exam pass rates using 2008 graduates (n = 175) from community college radiography programs in Florida. The independent variables included two GPAs, final grades in five radiography courses, self-efficacy, and social support. The dependent variable was the first-attempt result on the national Registry exam. The design was a retrospective predictive study that relied on academic data collected from participants using the self-report method and on perceptions of students' success on the national Registry exam collected through a questionnaire developed and piloted in the study. Using Pearson product-moment correlation analysis, all independent variables except self-efficacy and social support correlated with success on the national Registry exam (p < .01). The strongest predictor of national Registry exam success was the end-of-program GPA, r = .550, p < .001. The GPAs and the scores for self-efficacy and social support were entered into a logistic regression analysis to produce a prediction model. The end-of-program GPA (p = .015) emerged as a significant variable. This model predicted 44% of the students who failed the national Registry exam and 97.3% of those who passed, explaining 45.8% of the variance. A second model included the final grades for the radiography courses, self-efficacy, and social support. Three courses significantly predicted national Registry exam success: Radiographic Exposures, p < .001; Radiologic Physics, p = .014; and Radiation Safety & Protection, p = .044, explaining 56.8% of the variance. This model predicted 64% of the students who failed the national Registry exam and 96% of those who passed. The findings support the use of in-program data as accurate predictors of success on the national Registry exam.
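A prediction model of the kind described (GPA, self-efficacy, and social support entered into a logistic regression of first-attempt pass/fail) can be sketched with scikit-learn. The data below are synthetic; the coefficients and classification rates reported in the abstract come from the study's own sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 175                                          # cohort size from the abstract
gpa = rng.uniform(2.0, 4.0, n)                   # end-of-program GPA
self_eff = rng.uniform(1.0, 5.0, n)              # self-efficacy scale score
support = rng.uniform(1.0, 5.0, n)               # social support scale score
# Synthetic outcome loosely tied to GPA only, echoing the reported finding.
p_pass = 1 / (1 + np.exp(-(3.0 * (gpa - 2.8))))
passed = rng.random(n) < p_pass

X = np.column_stack([gpa, self_eff, support])
model = LogisticRegression().fit(X, passed)
print("coefficients (GPA, self-efficacy, support):", model.coef_[0])
print("predicted pass probability for GPA 3.5, mid-scale scores:",
      model.predict_proba([[3.5, 3.0, 3.0]])[0, 1])
```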
Abstract:
This dissertation aimed to improve travel time estimation for transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated based on data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impacts of queues at different upstream locations of an intersection and to attribute delays to the subject link and the upstream link. Third, the model shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT), which is close to the MAPE of uniform delay in the HCM 2000 method (11%). The HCM is the accepted analytical model in industry practice, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29%, compared to 31% for the HCM 2000 method. These advantages make the proposed model feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation. An assignment model with the developed travel time estimation method was implemented in a South Florida planning model and improved the assignment results.
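The accuracy comparison above is stated in terms of mean absolute percentage error (MAPE). A minimal sketch of that metric, with hypothetical observed and estimated link travel times rather than the MDOT or Miami-Dade data, is shown below.

```python
import numpy as np

def mape(observed, estimated):
    """Mean absolute percentage error, in percent."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return 100.0 * np.mean(np.abs((estimated - observed) / observed))

# Hypothetical link travel times in seconds.
observed = [42.0, 55.0, 61.0, 75.0, 90.0]
model_estimate = [40.0, 60.0, 58.0, 80.0, 95.0]
print(f"MAPE = {mape(observed, model_estimate):.1f}%")
```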
Abstract:
In this dissertation, I present an integrated model of organizational performance. Most prior research has relied extensively on testing individual linkages, often with cross-sectional data. In this dissertation, longitudinal unit-level data from 559 restaurants, collected over a one-year period, were used to test the proposed model. The model was hypothesized to begin with employee satisfaction as a key antecedent that would ultimately lead to improved financial performance. Several variables, including turnover, efficiency, and guest satisfaction, are proposed as mediators of the satisfaction-performance relationship. The current findings replicate and extend past research based on individual-level data. The overall model adequately explained the data but was significantly improved by an additional link from employee satisfaction to efficiency, which was not originally hypothesized. Management turnover was a strong predictor of hourly-level team turnover, and both were significant predictors of efficiency. Full findings for each hypothesis are presented, and practical organizational implications are given. Limitations and recommendations for future research are provided.
Abstract:
The ability of the United States Air Force (USAF) to sustain a high level of operational ability and readiness depends on the proficiency and expertise of its pilots. Recruitment, education, training, and retention of its pilot force are crucial factors in the USAF's attainment of its operational mission: defense of this nation and its allies. Failure of a student pilot during a training program not only represents a loss of costly training expenditures to the American public, but often involves the loss of human life, aircraft, and property. This research focused on the Air Force Reserve Officer Training Corps' (AFROTC) selection method for student pilots for the light aircraft training (LATR) program. The LATR program is an intense 16-day flight training program that precedes the Air Force's undergraduate pilot training (UPT) program. The study subjects were 265 AFROTC cadets in the LATR program. A variety of independent variables from each subject's higher education curricular background, as well as results of preselection tests, participation in varsity athletics, prior flying experience, and gender, were evaluated against subsequent performance in LATR. Performance was measured by a quantitative performance score developed by this researcher based on 28 graded training factors, as well as by overall pass or fail of the LATR program. Results showed that participation in university varsity athletics was significantly and positively related to performance in the LATR program, followed by prior flying experience and, to a slight degree, portions of the Air Force Officer Qualifying Test. Independent variables such as grade point average, scholastic aptitude test scores, academic major, gender, and the AFROTC selection and ranking system were not significantly related to success in the LATR program.
Abstract:
An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically-estimated spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), which is produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), using observed station data and elevationally-adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available or for simulating hydrology under past or future climates.
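The elevation adjustment described here can be sketched as a per-month linear correction: an RCM value is shifted by the local topographic lapse rate times the elevation difference between the target cell and the RCM grid cell. The lapse rate and elevations below are placeholders, not values derived from PRISM, NARR, or NARCCAP.

```python
import numpy as np

def lapse_rate_correct(rcm_value, rcm_elev, target_elev, lapse_rate):
    """Elevationally adjust an RCM climate value with a local topographic lapse rate.

    lapse_rate is the empirical change in the variable per metre of elevation gain
    (e.g. degC/m for temperature), estimated separately for each month and location.
    """
    return rcm_value + lapse_rate * (target_elev - rcm_elev)

# Placeholder example: January temperature at a high-relief target cell.
rcm_temp_c = 4.0            # RCM grid-cell mean temperature (degC)
rcm_elev_m = 1200.0         # mean elevation of the RCM grid cell (m)
target_elev_m = 2350.0      # elevation of the 800-m target cell (m)
jan_lapse = -0.0062         # hypothetical local lapse rate (degC per m)

corrected = lapse_rate_correct(rcm_temp_c, rcm_elev_m, target_elev_m, jan_lapse)
print(f"corrected January temperature: {corrected:.1f} degC")
```

The corrected fields are then what drive the hydrologic model (SWAT in this study) in place of the raw RCM output.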
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.
Abstract:
This data set contains LPJ-LMfire dynamic global vegetation model output covering Europe and the Mediterranean for the Last Glacial Maximum (LGM; 21 ka) and for a preindustrial control simulation (20th-century detrended climate). The netCDF data files are time averages of the final 30 years of the model simulation. Each netCDF file contains four or five variables: fractional cover of 9 plant functional types (PFTs; cover), total fractional coverage of trees (treecover), population density of hunter-gatherers (foragerPD; only for the "people" simulations), fraction of the gridcell burned on a 30-year average (burnedf), and vegetation net primary productivity (NPP). The model spatial resolution is 0.5 degrees. For the LGM simulations, LPJ-LMfire was driven by the PMIP3 suite of eight GCMs for which LGM climate simulations were available. Also provided in this archive are the result of an LPJ-LMfire run forced by the average climate of all GCMs (the "GCM-mean" files) and the average of the individual LPJ-LMfire runs over the eight LGM scenarios (the "LPJ-mean" files). The model simulations are provided both including the influence of human presence on the landscape (the "people" files) and in a "world without humans" scenario (the "natural" files). Finally, this archive contains the preindustrial reference simulation with and without human influence ("PI_reference_people" and "PI_reference_nat", respectively). There are therefore 22 netCDF files in this archive: 8 each of LGM simulations with and without people (16 total), the "GCM-mean" simulations (2 files), the "LPJ-mean" aggregates (2 files), and the two preindustrial "control" simulations ("PI"), with and without humans (2 files). In addition to the LPJ-LMfire model output (netCDF files), this archive also contains a table of arboreal pollen percent calculated from pollen samples dated to the LGM at sites throughout the region (lgmAP.txt), and a table containing the locations of archaeological sites dated to the LGM (LGM_archaeological_site_locations.txt).
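A sketch of how the described variables might be loaded with xarray is shown below. The file name is hypothetical (the archive's exact naming is not given in this description), but the variable names (cover, treecover, foragerPD, burnedf, NPP) follow the list above.

```python
import xarray as xr

# Hypothetical file name; substitute an actual netCDF file from the archive.
ds = xr.open_dataset("LPJ-LMfire_LGM_people_GCM-mean.nc")

treecover = ds["treecover"]        # total fractional tree cover (30-year mean)
burnedf = ds["burnedf"]            # mean fraction of the gridcell burned
npp = ds["NPP"]                    # vegetation net primary productivity

# Simple (area-unweighted) spatial means as a quick summary of the 0.5-degree grid.
print("mean tree cover:", float(treecover.mean()))
print("mean burned fraction:", float(burnedf.mean()))
print("mean NPP:", float(npp.mean()))
```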
Abstract:
This study subdivides the Weddell Sea, Antarctica, into seafloor regions using multivariate statistical methods. These regions serve as categories for comparing, contrasting, and quantifying biogeochemical processes and biodiversity between ocean regions, both geographically and for regions undergoing change within the scope of global change. The division obtained is characterized by the dominating components and interpreted in terms of the ruling environmental conditions. The analysis uses 28 environmental variables for the sea surface, 25 variables for the seabed, and 9 variables for the analysis between surface and bottom variables. The data were collected during the years 1983-2013; some data were interpolated. The statistical errors of several interpolation methods (e.g., IDW, Indicator, Ordinary, and Co-Kriging) with varying settings were compared to identify the most reasonable method. The multivariate mathematical procedures used are regionalized classification via k-means cluster analysis, canonical-correlation analysis, and multidimensional scaling. Canonical-correlation analysis identifies the influencing factors in the different parts of the study area. Several methods for identifying the optimum number of clusters were tested. For the seabed, 8 and 12 clusters were identified as reasonable numbers for clustering the Weddell Sea; for the sea surface, 8 and 13; and for the top/bottom analysis, 8 and 3, respectively. Additionally, results for 20 clusters are presented for the three alternatives, offering the first small-scale environmental regionalization of the Weddell Sea. In particular, the 12-cluster results identify marine-influenced regions that can be clearly separated from those determined by the geological catchment area and those dominated by river discharge.
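The regionalized classification step can be sketched with standardized variables and k-means, with a silhouette score standing in for one of the possible criteria for choosing the number of clusters. The environmental matrix below is random stand-in data, not the 28 Weddell Sea surface variables.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Stand-in data: 500 grid points described by 28 environmental variables.
rng = np.random.default_rng(4)
env = rng.normal(size=(500, 28))
env_std = StandardScaler().fit_transform(env)   # put variables on a common scale

# Compare candidate cluster numbers with the silhouette criterion.
for k in (3, 8, 12, 13, 20):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(env_std)
    print(f"k = {k:2d}: silhouette = {silhouette_score(env_std, labels):.3f}")
```

With real data, the cluster labels would then be mapped back onto the grid to produce the regionalization.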
Abstract:
This paper provides a method for constructing a new historical global nitrogen fertilizer application map (0.5° × 0.5° resolution) for the period 1961-2010 based on country-specific information from Food and Agriculture Organization statistics (FAOSTAT) and various global datasets. This new map incorporates the fraction of NH4+ (and NO3-) in N fertilizer inputs by utilizing fertilizer species information in FAOSTAT, in which species can be categorized as NH4+- and/or NO3--forming N fertilizers. During data processing, we applied a statistical data imputation method for the missing data (19% of national N fertilizer consumption) in FAOSTAT. The multiple imputation method enabled us to fill gaps in the time-series data with plausible values using covariate information (year, population, GDP, and crop area). After the imputation, we downscaled the national consumption data to a gridded cropland map. We also applied the multiple imputation method to the available chemical fertilizer species consumption, allowing estimation of the NH4+/NO3- ratio in national fertilizer consumption. In this study, the synthetic N fertilizer inputs in 2000 showed general consistency with the existing N fertilizer map (Potter et al., 2010, doi:10.1175/2009EI288.1) in terms of the ranges of N fertilizer inputs. Globally, the estimated N fertilizer inputs based on the sum of the filled data increased from 15 Tg-N to 110 Tg-N during 1961-2010. On the other hand, the global NO3- input began to decline after the late 1980s, and the fraction of NO3- in global N fertilizer decreased consistently from 35% to 13% over the 50-year period. NH4+-based fertilizers are dominant in most countries; however, the NH4+/NO3- ratio in N fertilizer inputs shows clear differences temporally and geographically. This new map can be utilized as input data for global model studies and can bring new insights to the assessment of historical terrestrial N cycling changes.
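The downscaling step, allocating a country's annual N fertilizer consumption to grid cells in proportion to their cropland area, can be sketched as below. The cropland areas and the national total are invented numbers, not FAOSTAT values.

```python
import numpy as np

def downscale_national_total(national_total_tg, cropland_area_km2):
    """Allocate a national N fertilizer total to grid cells proportional to cropland area."""
    weights = cropland_area_km2 / cropland_area_km2.sum()
    return national_total_tg * weights

# Invented example: 0.8 Tg-N spread over five grid cells of one country.
cropland = np.array([120.0, 300.0, 80.0, 0.0, 500.0])   # km2 of cropland per cell
cell_inputs = downscale_national_total(0.8, cropland)
print("per-cell N inputs (Tg-N):", np.round(cell_inputs, 4))
print("check sum:", cell_inputs.sum())
```

Cells with no cropland receive no fertilizer, and the allocation conserves the national total by construction.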