927 results for National Research Council Canada
Abstract:
OBJECTIVES: To determine whether the use of medications with possible and definite anticholinergic activity increases the risk of cognitive impairment and mortality in older people and whether risk is cumulative. DESIGN: A 2-year longitudinal study of participants enrolled in the Medical Research Council Cognitive Function and Ageing Study between 1991 and 1993. SETTING: Community-dwelling and institutionalized participants. PARTICIPANTS: Thirteen thousand four participants aged 65 and older. MEASUREMENTS: Baseline use of possible or definite anticholinergics was determined according to the Anticholinergic Cognitive Burden Scale, and cognition was assessed using the Mini-Mental State Examination (MMSE). The main outcome measure was decline in the MMSE score at 2 years. RESULTS: At baseline, 47% of the population used a medication with possible anticholinergic properties, and 4% used a drug with definite anticholinergic properties. After adjusting for age, sex, educational level, social class, number of nonanticholinergic medications, number of comorbid health conditions, and cognitive performance at baseline, use of medication with definite anticholinergic effects was associated with a 0.33-point greater decline in MMSE score (95% confidence interval (CI)=0.03–0.64, P=.03) than not taking anticholinergics, whereas use of possible anticholinergics at baseline was not associated with further decline (0.02, 95% CI=−0.14 to 0.11, P=.79). Two-year mortality was greater for those taking definite (OR=1.68; 95% CI=1.30–2.16; P<.001) and possible (OR=1.56; 95% CI=1.36–1.79; P<.001) anticholinergics. CONCLUSION: The use of medications with anticholinergic activity increases the cumulative risk of cognitive impairment and mortality.
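The adjusted analysis described in this abstract is, in essence, a covariate-controlled regression of 2-year MMSE decline on anticholinergic exposure. A minimal sketch of that setup in Python with statsmodels follows; the file and column names are hypothetical, and this illustrates the model form only, not the study's actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cfas_cohort.csv")  # hypothetical file, one row per participant

# mmse_decline: baseline MMSE minus 2-year MMSE (positive values = decline)
# definite_ach / possible_ach: 0/1 exposure indicators from the ACB scale
model = smf.ols(
    "mmse_decline ~ definite_ach + possible_ach + age + sex + education"
    " + social_class + n_other_meds + n_comorbidities + mmse_baseline",
    data=df,
).fit()
print(model.summary())  # the coefficient on definite_ach parallels the 0.33-point estimate
```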
Abstract:
Basic literacy skills are fundamental building blocks of education, yet for a very large number of adults, tasks such as understanding and using everyday items remain a challenge. While research, industry, and policy-making are looking at improving access to textual information for low-literacy adults, the literacy-based demands of today's society are continually increasing. Although many community-based organizations offer resources and support to adults with limited literacy skills, current programs have difficulty reaching and retaining those who would benefit most from them. To address these challenges, the National Research Council of Canada is proposing a technological solution to support literacy programs and to assist low-literacy adults in today's information-centric society: ALEX© – Adult Literacy support application for EXperiential learning. ALEX© has been created together with low-literacy adults, following guidelines for the inclusive design of mobile assistive tools. It is a mobile language assistant designed to be used both in the classroom and in daily life, to help low-literacy adults become increasingly literate and independent.
Abstract:
Requirements for space-based monitoring of permafrost features were first defined in the IGOS Cryosphere Theme Report at the start of the IPY in 2007 (IGOS, 2007). The WMO Polar Space Task Group (PSTG, http://www.wmo.int/pages/prog/sat/pstg_en.php) identified the need to review and update these requirements in 2013. Relevant surveys focusing on satellite data are already available from the ESA DUE Permafrost user requirements survey (2009), the United States National Research Council (2014), and the ESA-CliC-IPA-GTN-P workshop in February 2014. These reports were reviewed, specific needs were discussed within the community, and a white paper was submitted to the WMO PSTG. Acquisition requirements for monitoring, especially of terrain changes (including rock glaciers and coastal erosion) and lakes (extent, ice properties, etc.), were specified with respect to current satellite missions. About 50 locations ('cold spots'), where in situ permafrost monitoring (Arctic and Antarctic) has been taking place for many years or where field stations are currently established, were identified. These sites have been proposed to the WMO Polar Space Task Group as focus areas for future monitoring by high-resolution satellite data. The specifications of these sites, including metadata on site instrumentation, have been published as a supplement to the white paper (Bartsch et al., 2014, doi:10.1594/PANGAEA.847003). The representativeness of the 'cold spots' around the Arctic was then assessed using a landscape-units product developed as part of the FP7 project PAGE21. The ESA DUE Permafrost service was used to produce a pan-Arctic database (25 km, 2000-2014) comprising mean annual surface temperature, annual and summer amplitude of surface temperature, and mean summer (July-August) surface temperature. Surface-status (frozen/unfrozen) products were also derived from the service, including the length of the unfrozen period, the first unfrozen day, and the first frozen day. In addition, SAR (ENVISAT ASAR GM) statistics and topographic parameters were considered. The circumpolar datasets were assessed for redundancy in information content, and 12 distinct units could be derived. The landscape units reveal similarities between the Alaskan North Slope and the region from the Yamal Peninsula to the Yenisei estuary. Northern Canada is characterized by the same landscape units as western Siberia, and northeastern Canada shows similarities to the Laptev coast region. This paper presents the results of this assessment, formulates recommendations for extending the in situ monitoring networks, and categorizes the sites by satellite data requirements (specifically the Sentinels) with respect to landscape type and related processes.
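The derivation of landscape units from the circumpolar datasets amounts to clustering grid cells on their climate, SAR, and topographic attributes. The abstract does not name the algorithm used, so the sketch below uses k-means purely as a stand-in, with hypothetical inputs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# features: (n_cells, n_variables) array, one row per 25-km grid cell, with
# columns such as mean annual surface temperature, unfrozen-period length,
# SAR backscatter statistics, and topographic parameters.
features = np.load("panarctic_features.npy")  # hypothetical input

X = StandardScaler().fit_transform(features)  # put variables on a common scale
units = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(X)
# 'units' assigns each grid cell to one of 12 landscape units
```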
Abstract:
In situ observations of the size and shape of particles in Arctic cirrus are less common than those in mid-latitude and tropical cirrus, leaving considerable uncertainty about the contributions of small ice crystals (maximum dimension D < 50 μm) to the mass and radiative properties that affect radiative forcing. In situ measurements of small ice crystals in Arctic cirrus were made during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in April 2008, during transits of the National Research Council of Canada Convair-580 between Fairbanks and Barrow, Alaska, and during the Mixed-Phase Arctic Cloud Experiment (MPACE) in October 2004, with the University of North Dakota (UND) Citation over Barrow, Alaska. Concentrations of small ice crystals with D < 50 μm from a Cloud and Aerosol Spectrometer (CAS), a Cloud Droplet Probe (CDP), a Forward Scattering Spectrometer Probe (FSSP), and a two-dimensional stereo probe (2DS) were compared as functions of the concentrations of crystals with D > 100 μm measured by a Cloud Imaging Probe (CIP) and the 2DS, in order to assess whether the shattering of large ice crystals on protruding probe components artificially amplified measured concentrations of small ice crystals. The dependence of the probe comparison on other variables (CIP N>100, i.e., the number concentration of crystals with D > 100 μm; temperature; relative humidity with respect to ice (RHice); dominant habit from the Cloud Particle Imager (CPI); and aircraft roll, pitch, true air speed, and angle of attack) was examined to understand potential causes of discrepancies between probe concentrations. Data collected by these probes were also compared against data collected by a CAS, CDP, and CIP during the Tropical Warm Pool-International Cloud Experiment (TWP-ICE) and by a CAS and 2DS during the Tropical Composition, Cloud and Climate Coupling (TC4) missions. During ISDAC, the CAS and FSSP both overestimated concentrations of small ice crystals relative to the CDP and 2DS by one to two orders of magnitude. Further, the overestimation increased with the concentrations from the CIP (N>100 > 0.1 L-1). There was an unexplained discrepancy in concentrations of small crystals between the CDP and 2DS during ISDAC. In addition, the average ratios N3-50,CAS/N3-50,CDP, N3-50,FSSP096/N3-50,CDP, N3-50,CAS/N3-50,FSSP096, N10-50,CDP/N10-50,2DS, and N10-50,FSSP096/N10-50,2DS depended strongly on RHice. Continued studies are needed to understand the discrepancies between these probes.
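The ratio analysis described above (average probe-to-probe concentration ratios as a function of RHice) can be sketched as a simple bin-and-average computation. The sketch below is illustrative only; the file and column names are hypothetical placeholders for the flight data.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("isdac_probe_data.csv")  # hypothetical merged probe file

# Ratio of small-crystal concentrations from two probes, averaged by RHice bin
df["ratio_cas_cdp"] = df["n3_50_cas"] / df["n3_50_cdp"]
bins = np.arange(60, 141, 10)  # RHice bins (%), 60-140 in 10% steps
df["rhice_bin"] = pd.cut(df["rhice"], bins)
print(df.groupby("rhice_bin", observed=True)["ratio_cas_cdp"].mean())
```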
Abstract:
Physiological stress responses of the warm-water fish matrinxã (Brycon amazonicus) subjected to a sudden drop in temperature.
Abstract:
This paper presents a methodology for estimating average travel time on signalized urban networks by integrating cumulative plots with probe vehicle data. The integration aims to reduce the relative deviations in the cumulative plots caused by midlink sources and sinks. For undersaturated traffic conditions, the concept of a virtual probe is introduced, so that accurate travel times can be obtained even when a real probe is unavailable. For oversaturated traffic conditions, only one probe per travel time estimation interval of 360 s, or 3% of the vehicles traversing the link acting as probes, has the potential to provide accurate travel times.
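For readers unfamiliar with the cumulative-plots idea: the travel time of the n-th vehicle is the horizontal gap between the upstream and downstream cumulative count curves, and a probe observation can re-anchor a curve that has drifted because of mid-link sources and sinks. The sketch below illustrates this on synthetic data; it is a loose illustration of the concept, not the paper's algorithm.

```python
import numpy as np

# Synthetic cumulative count curves: upstream arrivals and downstream
# departures for a link with a 45 s travel time.
t = np.arange(0.0, 600.0, 1.0)                   # time (s)
upstream = 0.3 * t                               # cumulative count at upstream detector
downstream = 0.3 * np.clip(t - 45.0, 0.0, None)  # same flow delayed by 45 s

# Suppose mid-link sinks made the downstream count drift low by 5 vehicles,
# and one probe entering at t=300 s is observed exiting at t=345 s.
downstream_biased = downstream - 5.0
n_probe = np.interp(300.0, t, upstream)               # probe's cumulative rank at entry
offset = n_probe - np.interp(345.0, t, downstream_biased)
downstream_fixed = downstream_biased + offset         # probe re-anchors the curve

def travel_time(n, t, up, down):
    """Horizontal gap between the curves at cumulative count n."""
    return np.interp(n, down, t) - np.interp(n, up, t)

print(travel_time(60.0, t, upstream, downstream_fixed))  # ~45 s
```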
Abstract:
Safety interventions (e.g., median barriers, photo enforcement) and road features (e.g., median type and width) can influence crash severity, crash frequency, or both. Both dimensions—crash frequency and crash severity—are needed to obtain a full accounting of road safety. Extensive literature and common sense both dictate that crashes are not created equal, with fatalities costing society more than 1,000 times the cost of property-damage crashes on average. Despite this glaring disparity, the profession has not unanimously embraced or successfully defended a nonarbitrary severity-weighting approach for analyzing safety data and conducting safety analyses. It is argued here that the two dimensions (frequency and severity) can be captured by intelligently and reliably weighting crash frequencies, converting all crashes to property-damage-only crash equivalents (PDOEs) using comprehensive societal unit crash costs. This approach is analogous to calculating axle-load equivalents in the prediction of pavement damage: for instance, a 40,000-lb truck causes 4,025 times more stress than does a 4,000-lb car, so simply counting axles is not sufficient. Calculating PDOEs using unit crash costs is the most defensible and nonarbitrary weighting scheme, allows for the simple incorporation of severity and frequency, and leads to crash models that are sensitive to factors that affect crash severity. Moreover, using PDOEs diminishes the errors introduced by underreporting of less severe crashes—an added benefit of the PDOE analysis approach. The method is illustrated with rural road segment data from South Korea (which in practice would develop PDOEs with Korean crash cost data).
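The PDOE conversion is simple arithmetic: each crash is weighted by the ratio of its severity's comprehensive unit cost to the unit cost of a property-damage-only crash. A toy example follows; the costs and counts are placeholders, not values from the paper.

```python
# Hypothetical comprehensive unit crash costs (dollars), NOT the paper's values
unit_cost = {"fatal": 4_000_000, "injury": 80_000, "pdo": 4_000}
counts = {"fatal": 1, "injury": 12, "pdo": 40}  # crash history at one site

# Each crash is weighted by its cost relative to a property-damage-only crash
pdoe = sum(counts[s] * unit_cost[s] / unit_cost["pdo"] for s in counts)
print(pdoe)  # 1*1000 + 12*20 + 40*1 = 1280 PDO-equivalents
```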
Abstract:
Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
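As an illustration of the flavor of such criteria, the sketch below computes one simple consistency measure: the fraction of top-n sites flagged by an HSID method in one period that are flagged again in the next. This is a generic example, not necessarily one of the paper's four tests.

```python
def consistency(scores_p1, scores_p2, n=20):
    """Share of top-n sites in period 1 that remain top-n in period 2.

    scores_*: dict mapping site id -> HSID ranking score for a period.
    """
    top1 = set(sorted(scores_p1, key=scores_p1.get, reverse=True)[:n])
    top2 = set(sorted(scores_p2, key=scores_p2.get, reverse=True)[:n])
    return len(top1 & top2) / n

# Toy example with five sites and n=2:
p1 = {"A": 9.1, "B": 7.4, "C": 6.8, "D": 2.2, "E": 1.0}
p2 = {"A": 8.7, "B": 3.1, "C": 7.9, "D": 2.5, "E": 0.8}
print(consistency(p1, p2, n=2))  # 0.5: only site A stays in the top 2
```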
Abstract:
At least two important transportation planning activities rely on planning-level crash prediction models. One is motivated by the Transportation Equity Act for the 21st Century, which requires departments of transportation and metropolitan planning organizations to consider safety explicitly in the transportation planning process. The second could arise from a need for state agencies to establish incentive programs to reduce injuries and save lives. Both applications require a forecast of safety for a future period. Planning-level crash prediction models for the Tucson, Arizona, metropolitan region are presented to demonstrate the feasibility of such models. Data were separated into fatal, injury, and property-damage crashes. To accommodate overdispersion in the data, negative binomial regression models were applied. To accommodate the simultaneity of fatality and injury crash outcomes, simultaneous estimation of the models was conducted. All models produce crash forecasts at the traffic analysis zone level. Statistically significant (p-values < 0.05) and theoretically meaningful variables for the fatal crash model included population density, persons 17 years old or younger as a percentage of the total population, and intersection density. Significant variables for the injury and property-damage crash models were population density, number of employees, intersection density, percentage of miles of principal arterials, percentage of miles of minor arterials, and percentage of miles of urban collectors. Among several conclusions, it is suggested that planning-level safety models are feasible and may play a role in future planning activities. However, caution must be exercised with such models.
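A minimal sketch of a planning-level negative binomial crash model at the traffic analysis zone level is shown below, using Python's statsmodels. Variable names are hypothetical, and the sketch omits the simultaneous estimation of the fatal and injury models described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

taz = pd.read_csv("taz_data.csv")  # hypothetical one-row-per-TAZ file

# Negative binomial regression accommodates overdispersed crash counts
model = smf.negativebinomial(
    "injury_crashes ~ pop_density + n_employees + intersection_density"
    " + pct_principal_arterial + pct_minor_arterial + pct_urban_collector",
    data=taz,
).fit()
print(model.summary())
```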
Abstract:
A number of studies have focused on estimating the effects of accessibility on housing values by using the hedonic price model. In the majority of studies, estimation results have revealed that housing values increase as accessibility improves, although the magnitude of estimates has varied across studies. Adequately estimating the relationship between transportation accessibility and housing values is challenging for at least two reasons. First, the monocentric city assumption applied in location theory is no longer valid for many large or growing cities. Second, rather than being randomly distributed in space, housing values are clustered in space—often exhibiting spatial dependence. Recognizing these challenges, a study was undertaken to develop a spatial lag hedonic price model for the Seoul, South Korea, metropolitan region that includes a measure of local accessibility as well as systemwide accessibility, in addition to other model covariates. Although the accessibility measures can be improved, the modeling results suggest that spatial interactions of apartment sales prices occur across and within traffic analysis zones and that sales prices for apartment communities are devalued as accessibility deteriorates. Consistent with findings in other cities, this study revealed that distance to the central business district is still a significant determinant of sales price.
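The spatial lag hedonic specification referred to above is conventionally written as follows; the notation here is the standard textbook form, not necessarily the paper's exact specification.

```latex
% Spatial lag hedonic price model: prices depend on neighboring prices
% (through the spatial weights matrix W) as well as on accessibility and
% other covariates.
\[
P = \rho W P + X\beta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^{2} I)
\]
% where P is the vector of apartment sales prices, W a row-standardized
% spatial weights matrix, \rho the spatial autoregressive parameter, and X
% the matrix of accessibility measures and other covariates.
```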
Abstract:
Expert panels have been used extensively in the development of the "Highway Safety Manual" to extract research information from highway safety experts. While the panels have been used to recommend agendas for new and continuing research, their primary role has been to develop accident modification factors—quantitative relationships between highway safety and various highway safety treatments. Because the expert panels derive quantitative information in a “qualitative” environment and because their findings can have significant impacts on highway safety investment decisions, the expert panel process should be described and critiqued. This paper is the first known written description and critique of the expert panel process and is intended to serve professionals wishing to conduct such panels.
Abstract:
Understanding the expected safety performance of rural signalized intersections is critical for (a) identifying high-risk sites where the observed safety performance is substantially worse than the expected safety performance, (b) understanding influential factors associated with crashes, and (c) predicting the future performance of sites and helping plan safety-enhancing activities. These three critical activities are routinely conducted for safety management and planning purposes in jurisdictions throughout the United States and around the world. This paper aims to develop baseline expected safety performance functions of rural signalized intersections in South Korea, which to date have not yet been established or reported in the literature. Data are examined from numerous locations within South Korea for both three-legged and four-legged configurations. The safety effects of a host of operational and geometric variables on the safety performance of these sites are also examined. In addition, supplementary tables and graphs are developed for comparing the baseline safety performance of sites with various geometric and operational features. These graphs identify how various factors are associated with safety. The expected safety prediction tables offer advantages over regression prediction equations by allowing the safety manager to isolate specific features of the intersections and examine their impact on expected safety. The examination of the expected safety performance tables through illustrated examples highlights the need to correct for regression-to-the-mean effects, emphasizes the negative impacts of multicollinearity, shows why multivariate models do not translate well to accident modification factors, and illuminates the need to examine road safety carefully and methodically. Caveats are provided on the use of the safety performance prediction graphs developed in this paper.
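The regression-to-the-mean correction mentioned above is commonly handled with the empirical Bayes (EB) method, which blends a site's observed crash count with the prediction of a safety performance function (SPF). The sketch below is the textbook EB estimator with made-up inputs, not code or values from the paper.

```python
def eb_expected(observed, spf_prediction, overdispersion):
    """Empirical Bayes estimate of a site's long-run expected crash count.

    The weight w shrinks toward the SPF when overdispersion is small and
    toward the site's own record when it is large (Hauer's formulation).
    """
    w = 1.0 / (1.0 + overdispersion * spf_prediction)
    return w * spf_prediction + (1.0 - w) * observed

# A site with 9 observed crashes where the SPF predicts 4 (k = 0.5):
print(eb_expected(observed=9, spf_prediction=4.0, overdispersion=0.5))  # ~7.33
```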
Abstract:
A study was conducted to develop macrolevel crash prediction models that can be used to understand and identify effective countermeasures for improving signalized highway intersections and multilane stop-controlled highway intersections in rural areas. Poisson and negative binomial regression models were fit to intersection crash data from Georgia, California, and Michigan. To assess the suitability of the models, several goodness-of-fit measures were computed. The statistical models were then used to shed light on the relationships between crash occurrence and the traffic and geometric features of the rural signalized intersections. The results revealed that traffic flow variables significantly affected the overall safety performance of the intersections regardless of intersection type, and that the geometric features of intersections varied across intersection type and also influenced crash type.
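A minimal sketch of the Poisson-versus-negative-binomial comparison described above, using Python's statsmodels with hypothetical variable names, follows; comparing AIC values and inspecting the NB dispersion parameter is one simple goodness-of-fit check among the several such measures a study like this would compute.

```python
import pandas as pd
import statsmodels.formula.api as smf

ix = pd.read_csv("intersections.csv")  # hypothetical crash/geometry file
formula = "crashes ~ major_aadt + minor_aadt + n_lanes + median_present"

poisson_fit = smf.poisson(formula, data=ix).fit()
nb_fit = smf.negativebinomial(formula, data=ix).fit()

# Lower AIC indicates better fit; a significant NB dispersion parameter
# (alpha) signals overdispersion that the Poisson model cannot capture.
print(poisson_fit.aic, nb_fit.aic)
print(nb_fit.params.get("alpha"))
```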