851 results for data types and operators
Abstract:
In data assimilation, one prepares gridded data as the best possible estimate of the true initial state of a system by merging various measurements, irregularly distributed in space and time, with prior knowledge of the state given by a numerical model. Because it can improve forecasting or modeling and increase physical understanding of the systems under study, data assimilation now plays a very important role in studies of atmospheric and oceanic problems. Here, three examples are presented to illustrate the use of new types of observations and the resulting ability to improve forecasting or modeling.
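The merging step described above can be illustrated, for many assimilation schemes, by a linear (Kalman-type) analysis update; the notation below is a generic sketch of that idea, not the specific algorithm used in the three examples:

    x_a = x_b + K (y - H x_b),    K = B H^T (H B H^T + R)^{-1}

where x_b is the model background (prior) state, y is the vector of irregularly distributed observations, H is the observation operator mapping the state to observation space, B and R are the background- and observation-error covariance matrices, and x_a is the resulting analysis, i.e. the gridded best estimate of the initial state.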
Abstract:
Turner-Fairbank Highway Research Center, McLean, Va.
Abstract:
Federal Highway Administration, Washington, D.C.
Abstract:
Transportation Department, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
The reinforcing effects of diverse tactile stimuli were examined in this study. The study had two purposes. First, this study expanded on the Pelaez-Nogueras, Field, Gewirtz, Cigales, Gonzalez, Sanchez and Clasky (1997) finding that stroking increases infants' gaze duration, and smiling and vocalization frequencies, more than tickling/poking. Instead of presenting poking and tickling as a single stimulus combination, this study separated poking and tickling in order to measure the effects of each component separately. Further, the effects of poking, tickling/tapping, and stroking intensity (i.e., tactile pressure) were compared by including both mild and intense conditions. Second, this study compared the reinforcing efficacy of mother-delivered tactile stimulation to that of infant-originated tactile exploration. Twelve infants from 2 to 5 months of age participated in this study. The experiment was conducted using a repeated-measures A-B-A-C-A-D reversal design. The A phases signified baselines and reversals. The B, C, and D phases consisted of alternating treatments (either mild stroking vs. mild poking vs. mild tickling/tapping, intense stroking vs. intense poking vs. intense tickling/tapping, or mother-delivered tactile stimulation vs. infant-originated tactile exploration). Three experimental hypotheses were assessed: (1) infant leg-kick rate would be greater when it produced stroking or tickling/tapping (presumptive positive reinforcers) than when it produced poking (a possible punisher), regardless of tactile pressure; (2) infant leg-kick rate would be greater when it produced a more intense level of stroking or tickling/tapping, and lower when it produced intense poking compared to mild poking; (3) infant leg-kick rate would be greater for mother-delivered tactile stimulation than for infant-originated tactile exploration. Visual inspection and inferential statistical methods were used to analyze the results. The data supported the first two hypotheses. Mixed support emerged for the third hypothesis. This study made several important contributions to the field of psychology. First, it was the first study to quantify the pressure of tactile stimulation, via a pressure meter developed by the researcher. Additionally, the results of this study yielded valuable information about the effects of different modalities of touch.
Abstract:
Groundwater systems of different densities are often mathematically modeled to understand and predict environmental behavior such as seawater intrusion or submarine groundwater discharge. Additional data collection may be justified if it will cost-effectively aid in reducing the uncertainty of a model's prediction. The collection of salinity as well as temperature data could aid in reducing predictive uncertainty in a variable-density model. However, before numerical models can be created, rigorous testing of the modeling code needs to be completed. This research documents the benchmark testing of a new modeling code, SEAWAT Version 4. The benchmark problems include various combinations of density-dependent flow resulting from variations in concentration and temperature. The verified code, SEAWAT, was then applied to two different hydrological analyses to explore the capacity of a variable-density model to guide data collection. The first analysis tested a linear method to guide data collection by quantifying the contribution of different data types and locations toward reducing predictive uncertainty in a nonlinear variable-density flow and transport model. The relative contributions of temperature and concentration measurements, at different locations within a simulated carbonate platform, for predicting movement of the saltwater interface were assessed. Results from the method showed that concentration data had greater worth than temperature data in reducing predictive uncertainty in this case. Results also indicated that a linear method could be used to quantify data worth in a nonlinear model. The second hydrological analysis utilized a model to identify the transient response of the salinity, temperature, age, and amount of submarine groundwater discharge to changes in tidal ocean stage, seasonal temperature variations, and different types of geology. The model was compared to multiple kinds of data to (1) calibrate and verify the model, and (2) explore the potential for the model to be used to guide the collection of data using techniques such as electromagnetic resistivity, thermal imagery, and seepage meters. Results indicated that the model can be used to give insight into submarine groundwater discharge and to guide data collection.
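Linear data-worth analyses of the kind mentioned in the first part of the abstract are typically based on first-order (linearized) uncertainty propagation; the expression below is a generic sketch of that approach under assumed notation, not necessarily the exact formulation used in the study. With prediction sensitivities y, observation sensitivities X, prior parameter covariance C_p, and measurement-noise covariance C_\epsilon, the predictive variance conditioned on the observations is

    \sigma_s^2 = y^T [ C_p - C_p X^T (X C_p X^T + C_\epsilon)^{-1} X C_p ] y

and the worth of a candidate measurement (e.g., a concentration or temperature observation at a given location) is the reduction in \sigma_s^2 obtained when the corresponding sensitivity row is added to X.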
Abstract:
Background: Biologists often need to assess whether unfamiliar datasets warrant the time investment required for more detailed exploration. Basing such assessments on brief descriptions provided by data publishers is unwieldy for large datasets that contain insights dependent on specific scientific questions. Alternatively, using complex software systems for a preliminary analysis may be deemed too time-consuming in itself, especially for unfamiliar data types and formats. This may lead to wasted analysis time and discarding of potentially useful data. Results: We present an exploration of design opportunities that the Google Maps interface offers to biomedical data visualization. In particular, we focus on synergies between visualization techniques and Google Maps that facilitate the development of biological visualizations that have both low overhead and sufficient expressivity to support the exploration of data at multiple scales. The methods we explore rely on displaying pre-rendered visualizations of biological data in browsers, with sparse yet powerful interactions, by using the Google Maps API. We structure our discussion around five visualizations: a gene co-regulation visualization, a heatmap viewer, a genome browser, a protein interaction network, and a planar visualization of white matter in the brain. Feedback from collaborative work with domain experts suggests that our Google Maps visualizations offer multiple, scale-dependent perspectives and can be particularly helpful for unfamiliar datasets due to their accessibility. We also find that users, particularly those less experienced with computer use, are attracted by the familiarity of the Google Maps API. Our five implementations introduce design elements that can benefit visualization developers. Conclusions: We describe a low-overhead approach that lets biologists access readily analyzed views of unfamiliar scientific datasets. We rely on pre-computed visualizations prepared by data experts, accompanied by sparse and intuitive interactions, and distributed via the familiar Google Maps framework. Our contributions are an evaluation demonstrating the validity and opportunities of this approach, a set of design guidelines benefiting those wanting to create such visualizations, and five concrete example visualizations.
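The pre-rendered-tile approach described above can be reproduced with a custom tile layer in the Google Maps JavaScript API. The TypeScript sketch below is a minimal illustration that assumes the rendered images were exported to a hypothetical tiles/{zoom}/{x}/{y}.png layout; it is not the authors' implementation.

// Minimal sketch: display a pre-rendered biological visualization (e.g. a heatmap)
// as a custom tiled layer via the Google Maps JavaScript API (types from @types/google.maps).
// The tiles/{zoom}/{x}/{y}.png layout is an assumed, illustrative export format.
function initHeatmapViewer(container: HTMLElement): void {
  const map = new google.maps.Map(container, {
    center: { lat: 0, lng: 0 },
    zoom: 2,
    streetViewControl: false,
  });

  const heatmapLayer = new google.maps.ImageMapType({
    name: "heatmap",
    tileSize: new google.maps.Size(256, 256),
    minZoom: 0,
    maxZoom: 7,
    // Map each tile coordinate and zoom level to the matching pre-rendered image.
    getTileUrl: (coord: google.maps.Point, zoom: number): string =>
      `tiles/${zoom}/${coord.x}/${coord.y}.png`,
  });

  // Register the custom layer and show it in place of the geographic base map,
  // keeping the familiar pan/zoom interactions.
  map.mapTypes.set("heatmap", heatmapLayer);
  map.setMapTypeId("heatmap");
}

Because the images are rendered ahead of time by data experts, the browser only fetches small raster tiles at each zoom level, which is what keeps such an interface responsive for large datasets.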
Abstract:
Three questions on the study of sweat lodges in the northwestern Iberian Peninsula are posed. First, the new sauna of Monte Ornedo (Cantabria), the review of the sauna of Armea (Ourense), and the Cantabrian pedra formosa type are discussed. Second, the known types of sweat lodges are reconsidered, underlining the differences between the Cantabrian and the Douro-Minho groups, as these differences contribute to a better assessment of the saunas located outside those territories, such as those of Monte Ornedo or Ulaca. Third, a richer record demands a more specific terminology, greater use of archaeometric analysis, and the application of landscape-archaeology and art-history methodologies. In this way the range of interpretation of the sweat lodges is widened; as an example, an essay is proposed that builds on some already known proposals and suggests that the saunas are material metaphors of wombs whose rationale derives from ideologies and ritual practices of Indo-European tradition.
Abstract:
Background: The prevalence of diabetes mellitus (DM) is on the rise in sub-Saharan Africa (SSA) and will more than double by 2025. Cardiovascular disease (CVD) accounts for up to two-thirds of all deaths in the diabetic population, and three-quarters of all CVD deaths in DM occur in SSA. Non-invasive identification of cardiac abnormalities, such as left ventricular hypertrophy (LVH), diastolic and systolic dysfunction, is not part of diabetes complications surveillance programs in Uganda, and there is limited data on this problem. This study sought to determine the prevalence, types and factors associated with echocardiographic abnormalities among newly diagnosed diabetic patients at Mulago National Referral Hospital in Uganda. Methods: In this cross-sectional study conducted between June 2014 and December 2014, we recruited 202 newly diagnosed adult diabetic patients. Information on patients' socio-demographics, bio-physical profile, biochemical testing and echocardiographic findings was obtained for all the participants using a pre-tested questionnaire. An abnormal echocardiogram in this study was defined as the presence of LVH, diastolic and/or systolic dysfunction, or wall motion abnormality. Bivariate and multivariate logistic regression analyses were used to investigate the association of several parameters with echocardiographic abnormalities. Results: Of the 202 patients recruited, 102 (50.5%) were male and the mean age was 46±15 years. The majority of patients had type 2 DM (156, 77.2%), while 41 (20.3%) had type 1 DM; the mean HbA1c was 13.9±5.3%. Mean duration of diabetes was 2 months. The prevalence of an abnormal echocardiogram was 67.8% (95% CI 60%-74%). Diastolic dysfunction, systolic dysfunction, LVH and wall motion abnormalities were present in 55.0%, 21.8%, 19.3% and 4.0% of all the participants, respectively. In bivariate logistic regression analysis, the factors associated with an abnormal echocardiogram were age (OR 1.09 [95% CI 1.06-1.12], P<0.0001), type 2 DM (OR 5.8 [95% CI 2.77-12.07], P<0.0001), hypertension (OR 2.64 [95% CI 1.44-4.85], P=0.002), obesity (OR 3.51 [95% CI 1.25-9.84], P=0.017) and increased waist circumference (OR 1.02 [95% CI 1.00-1.04], P=0.024). On multiple logistic regression analysis, age was the only factor associated with an abnormal echocardiogram (OR 1.09 [95% CI 1.05-1.15], P<0.0001). Conclusion: Echocardiographic abnormalities were common among newly diagnosed adults with DM, and traditional CVD risk factors were associated with an abnormal echocardiogram in this patient population. Given the high prevalence of echocardiographic abnormalities among newly diagnosed diabetics, we recommend screening for cardiac disease, especially in patients who present with traditional CVD risk factors. This will facilitate early diagnosis and management, and hence better patient outcomes.
Abstract:
Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied: 1. Format checks, which ensure that the file formats match the Argo standards precisely. 2. Data consistency checks, which are additional checks performed on a file after it passes the format checks. These checks do not duplicate any of the quality control checks performed elsewhere; they can be thought of as "sanity checks" that ensure the data are consistent with each other. The data consistency checks enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the "data standard" checks are the "mandatory parameters" defined for meta-data files and the technical parameter names in technical data files. Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC. Reference Tables and Data Standards: Many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User's Manual. (The FileChecker implements "text versions" of these tables.)
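To illustrate the flavour of the "mandatory parameters" data-standard check described above, the TypeScript sketch below compares a meta-data file's parameter list against a reference list and separates rejecting errors from warnings. The parameter names, types and function here are illustrative placeholders, not the actual FileChecker code or the published Argo reference tables.

// Illustrative sketch of a "mandatory parameters" consistency check.
// The reference list is a placeholder; the real list is defined in the Argo User's Manual.
const MANDATORY_META_PARAMETERS: string[] = [
  "PLATFORM_NUMBER", // placeholder entries only
  "DATA_CENTRE",
  "LAUNCH_DATE",
];

interface CheckResult {
  errors: string[];   // any entry here causes the GDAC to reject the file
  warnings: string[]; // the file is still distributed, with a warning
}

function checkMetaFile(parametersInFile: Set<string>): CheckResult {
  const result: CheckResult = { errors: [], warnings: [] };
  for (const name of MANDATORY_META_PARAMETERS) {
    if (!parametersInFile.has(name)) {
      result.errors.push(`Mandatory parameter missing: ${name}`);
    }
  }
  return result;
}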