11 results for DIGITAL DATA
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
The new knowledge environments of the digital age are often described as places where we are all closely read, with our buying habits, location, and identities available to advertisers, online merchants, the government, and others through our use of the Internet. This is represented as a loss of privacy in which these entities learn about our activities and desires, using means that were unavailable in the pre-digital era. This article argues that the reciprocal nature of digital networks means 1) that the privacy issues that we face online are not radically different from those of the pre-Internet era, and 2) that we need to reconceive of close reading as an activity of which both humans and computer algorithms are capable.
Abstract:
The occupant impact velocity (OIV) and acceleration severity index (ASI) are competing measures of crash severity used to assess occupant injury risk in full-scale crash tests involving roadside safety hardware, e.g. guardrail. Delta-V, or the maximum change in vehicle velocity, is the traditional metric of crash severity for real world crashes. This study compares the ability of the OIV, ASI, and delta-V to discriminate between serious and non-serious occupant injury in real world frontal collisions. Vehicle kinematics data from event data recorders (EDRs) were matched with detailed occupant injury information for 180 real world crashes. Cumulative probability of injury risk curves were generated using binary logistic regression for belted and unbelted data subsets. By comparing the available fit statistics and performing a separate ROC curve analysis, the more computationally intensive OIV and ASI were found to offer no significant predictive advantage over the simpler delta-V.
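The comparison described in this abstract rests on two standard tools: fitting cumulative injury-risk curves with binary logistic regression, and comparing discriminative power via ROC analysis. The sketch below illustrates that generic workflow on synthetic data; the coefficients, the 35 km/h inflection point, and the sample values are invented for illustration and are not taken from the study.

```python
import math
import random

def fit_logistic(x, y, lr=0.01, epochs=5000):
    """Fit p(injury) = 1 / (1 + exp(-(b0 + b1*x))) by batch gradient
    ascent on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def injury_risk(dv, b0, b1):
    """Cumulative probability of serious injury at a given delta-V."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * dv)))

def roc_auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    injured case receives a higher score than a non-injured one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Synthetic crashes: serious injury grows more likely as delta-V rises.
random.seed(0)
delta_v = [random.uniform(5.0, 60.0) for _ in range(200)]   # km/h
serious = [1 if random.random() < 1.0 / (1.0 + math.exp(-0.08 * (dv - 35.0)))
           else 0 for dv in delta_v]

b0, b1 = fit_logistic(delta_v, serious)
risk_curve = [injury_risk(dv, b0, b1) for dv in (20.0, 35.0, 50.0)]
auc = roc_auc([injury_risk(dv, b0, b1) for dv in delta_v], serious)
```

The same fit-then-compare pattern would be repeated with OIV or ASI in place of delta-V as the predictor, and the fit statistics and AUC values compared across the three metrics.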
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization have been addressed in the second part of this work, while the required data for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine ΔP (the difference between exhaust and intake manifold pressures) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition have been made. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
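One common way to handle the transport delays mentioned above is to estimate the lag by cross-correlation and then shift the measured signal back into alignment with its reference. The sketch below is a generic illustration on a synthetic step signal, not the authors' actual procedure; the signal shape and the 7-sample delay are invented.

```python
def estimate_delay(ref, meas, max_lag):
    """Estimate the transport delay (in samples) that best aligns a
    measured signal with a reference, by scanning cross-correlation lags."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        n = len(ref) - lag
        c = sum(ref[i] * meas[i + lag] for i in range(n))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag

# A reference command signal and a measured response delayed by 7 samples,
# e.g. an emissions reading lagging the in-cylinder event.
ref = [0.0] * 20 + [1.0] * 30 + [0.0] * 20
meas = [0.0] * 7 + ref[:-7]

delay = estimate_delay(ref, meas, max_lag=15)
# Shift the measurement back, padding the tail with its last value.
aligned = meas[delay:] + [meas[-1]] * delay
```

First-order sensor lags would additionally need a dynamic correction (e.g. inverting the sensor's time-constant response), which a pure time shift does not capture.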
Abstract:
The G3, CBS-QB3, and CBS-APNO methods have been used to calculate ΔH and ΔG values for seventeen gas-phase deprotonation reactions where the experimental values are reported to be accurate to within one kcal/mol. For these reactions, the mean absolute deviation of these three methods from experiment is 0.84 to 1.26 kcal/mol, and the root-mean-square deviation for ΔG and ΔH is 1.43 and 1.49 kcal/mol for the CBS-QB3 method, 1.06 and 1.14 kcal/mol for the CBS-APNO method, and 1.16 and 1.28 kcal/mol for the G3 method. The high accuracy of these methods makes them reliable for calculating gas-phase deprotonation reactions, and allows them to serve as a valuable check on the accuracy of experimental data reported in the National Institute of Standards and Technology database.
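The error statistics quoted here are the standard mean absolute deviation (MAD) and root-mean-square deviation (RMSD) between computed and experimental values. A brief sketch of both, using invented ΔG values in kcal/mol (not data from the study):

```python
import math

def mean_absolute_deviation(calc, expt):
    """Average unsigned error between computed and experimental values."""
    return sum(abs(c - e) for c, e in zip(calc, expt)) / len(calc)

def rmsd(calc, expt):
    """Root-mean-square deviation; weights large outliers more than MAD."""
    return math.sqrt(sum((c - e) ** 2 for c, e in zip(calc, expt)) / len(calc))

# Hypothetical gas-phase deprotonation free energies (kcal/mol).
calc = [350.2, 342.8, 336.5, 329.9]   # illustrative computed values
expt = [351.0, 341.9, 337.6, 330.4]   # illustrative reference values

mad = mean_absolute_deviation(calc, expt)
rms = rmsd(calc, expt)
```

RMSD is always at least as large as MAD, which is why the abstract reports both: their gap indicates how much of the error comes from a few large outliers.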
Abstract:
The SVWN, BVWN, BP86, BLYP, BPW91, B3P86, B3LYP, B3PW91, B1LYP, mPW1PW, and PBE1PBE density functionals, as implemented in Gaussian 98 and Gaussian 03, were used to calculate ΔG° and ΔH° values for 17 deprotonation reactions where the experimental values are accurately known. The PBE1PBE and B3P86 functionals are shown to compute results with accuracy comparable to that of more computationally intensive compound model chemistries. A rationale for the relative performance of the various functionals is explored.
Abstract:
Jahnke and Asher explore workflows and methodologies at a variety of academic data curation sites, and Keralis delves into the academic milieu of library and information schools that offer instruction in data curation. Their conclusions point to the urgent need for a reliable and increasingly sophisticated professional cohort to support data-intensive research in our colleges, universities, and research centers.
Abstract:
Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, the data are no longer representative of the currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine if current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two-thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared to cases with no second event present. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes.
Conclusions: Consistent with previous research, collisions following a barrier impact are not an infrequent event and substantially increase driver injury risk. The results suggest that using exit-angle based crash test criteria alone to assess secondary collision risk is not sufficient to predict second collision occurrence for real-world barrier crashes.
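The "factor of 7" reported above is the kind of risk multiplier that falls out of a logistic regression model as an odds ratio. As a hedged illustration of how such a figure relates to a 2x2 contingency table (the counts below are invented, not the study's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
       a = serious injury,    second event present
       b = no serious injury, second event present
       c = serious injury,    no second event
       d = no serious injury, no second event"""
    return (a / b) / (c / d)

# Invented counts chosen so the odds of serious injury are 7 times
# higher when a secondary collision occurs.
ratio = odds_ratio(35, 100, 5, 100)
```

In a fitted logistic model the same quantity appears as exp(beta) for the second-event indicator, adjusted for the other covariates in the model.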
Abstract:
The accuracy of medicine use information was compared for a telephone interview and mail questionnaire, using an in-home medicine check as the standard of assessment. The validity of medicine use information varied by data source, level of specificity of data, and respondent characteristics. The mail questionnaire was the more valid source of overall medicine use information. Implications for both service providers and researchers are provided.
Abstract:
Researchers examining the effects of programs, in this case a state-level pharmaceutical assistance program for the elderly, sometimes must rely on multiple methods of data collection. Two-stage data collection (e.g., a telephone interview followed by a mail questionnaire) was used to obtain a full range of information. Older age groups were found to participate less frequently in the telephone interview, while certain demographic factors characterized mail questionnaire nonparticipants, all of which supports past research. Results also show that those in the poorest health are less likely to participate in the mail survey. Combining the two methods did not result in high attrition, suggesting that innovation can be successfully employed. Knowledge of the bias associated with each method will aid in targeting special groups.
Abstract:
There is a consensus in China that industrialization, urbanization, globalization and information technology will enhance China's urban competitiveness. We have developed a methodology for the analysis of urban competitiveness that we have applied to China's 25 principal cities during three periods from 1990 through 2009. Our model uses data for 12 variables, to which we apply appropriate statistical techniques. We are able to examine the competitiveness of inland cities and those on the coast, how this has changed during the two decades of the study, the competitiveness of Mega Cities and of administrative centres, and the importance of each variable in explaining urban competitiveness and its development over time. This analysis will be of benefit to Chinese planners as they seek to enhance the competitiveness of China and its major cities in the future.