916 results for the National Test
Abstract:
In France, farmers commission about 250,000 soil-testing analyses per year to assist them in managing soil fertility. The number and diversity of origin of the samples make these analyses an interesting and original source of information on cultivated topsoil variability. Moreover, these analyses cover several parameters strongly influenced by human activity (macronutrient contents, pH, etc.), for which existing cartographic information is not very relevant. Compiling the results of these analyses into a database makes it possible to re-use the data within both a national and a temporal framework. A database compilation covering the period 1990-2009 was recently completed. So far, commercial soil-testing laboratories approved by the Ministry of Agriculture have provided analytical results from more than 2,000,000 samples. After the initial quality-control stage, analytical results from more than 1,900,000 samples were available in the database. The anonymity of the landholders seeking soil analyses is fully preserved, as the only identifying information stored is the location of the administrative city nearest to the sample site. This dataset presents a set of statistical parameters of the spatial distributions of several agronomic soil properties. These statistical parameters are calculated for 4 nested spatial entities (administrative areas: e.g. regions, departments, counties and agricultural areas) and for 4 time periods (1990-1994, 1995-1999, 2000-2004, 2005-2009). Two kinds of agronomic soil properties are available: the first corresponds to quantitative variables, such as organic carbon content, and the second to qualitative variables, such as texture class.
For each spatial unit and time period, we calculated the following statistics sets: the first set, calculated for the quantitative variables, comprises the number of samples, the mean, the standard deviation and the 2-, 4- and 10-quantiles (median, quartiles and deciles); the second set, calculated for the qualitative variables, comprises the number of samples, the dominant class and its number of samples, and the second dominant class and its number of samples.
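The two statistics sets described above can be sketched in a few lines of Python; the function names and the toy values below are illustrative only, not drawn from the actual database.

```python
from statistics import mean, stdev, quantiles
from collections import Counter

def quantitative_stats(values):
    """Set 1: number of samples, mean, standard deviation,
    and the 2-, 4- and 10-quantiles (median, quartiles, deciles)."""
    return {
        "n": len(values),
        "mean": mean(values),
        "sd": stdev(values),
        "median": quantiles(values, n=2),
        "quartiles": quantiles(values, n=4),
        "deciles": quantiles(values, n=10),
    }

def qualitative_stats(classes):
    """Set 2: number of samples, dominant class and its count,
    second dominant class and its count."""
    top = Counter(classes).most_common(2)
    (c1, n1), (c2, n2) = top if len(top) == 2 else (top[0], (None, 0))
    return {"n": len(classes), "dominant": c1, "n_dominant": n1,
            "second": c2, "n_second": n2}

# Toy example: organic carbon contents (g/kg) and texture classes
print(quantitative_stats([10.2, 12.5, 9.8, 14.1, 11.3]))
print(qualitative_stats(["loam", "clay", "loam", "sand", "loam"]))
```

Applied per (spatial unit, time period) pair, the same two functions would reproduce the full table of statistics described above.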
Abstract:
This study sought to examine the impact of the Cannabis Expiation Notice (CEN) scheme on the prevalence of lifetime and weekly cannabis use in South Australia. Data from five National Drug Strategy Household Surveys between 1985 and 1995 were examined to test for differences in trends in self-reported: (1) lifetime cannabis use; and (2) current weekly cannabis use, after controlling for age and gender, between South Australia and the other states and territories. Between 1985 and 1995, rates of lifetime cannabis use increased in SA from 26% to 36%. There were also significant increases in Victoria (from 26% to 32%), Tasmania (from 21% to 33%) and New South Wales (from 26% to 33%). The increase in South Australia was significantly greater than the average increase throughout the rest of Australia, but the other Australian states differed in their rates of change: Victoria and Tasmania had rates of increase similar to South Australia's; New South Wales, Queensland and Western Australia showed lower rates of increase; and the Northern Territory and the Australian Capital Territory had high rates that did not change during the period. There was no statistically significant difference between SA and the rest of Australia in the rate of increase in weekly cannabis use. While there was a greater increase in self-reported lifetime cannabis use in South Australia between 1985 and 1995 than in the average of the other Australian jurisdictions, it is unlikely that this increase is due to the CEN system, because similar increases occurred in Tasmania and Victoria (where there was no change in the legal status of cannabis use), and there was no increase in the rate of weekly cannabis use in South Australia over the same period.
Abstract:
To establish a noncontagious control for the Ray thioglycollate test for the detection of Perkinsus in mollusks we evaluated nonviable stages of P. olseni for enlargement of hypnospores and blue/black iodine stain. Trophozoites made nonviable with formalin, irradiation or colchicine failed to swell in thioglycollate. They remained small and did not differentially stain in iodine. Trophozoites that had already developed into hypnospores in thioglycollate were rendered inactive by freezing, ethanol or formalin immersion. They retained their iodinophilic properties and thus could provide a partial control for the Ray Test.
Abstract:
OBJECTIVE: To describe the distribution of edentulism and estimate the prevalence of functional dentition and shortened dental arch among the elderly population. METHODS: A population-based epidemiological study was carried out with a sample of 5,349 respondents aged 65 to 74 years obtained from the 2002 and 2003 Brazilian Ministry of Health/Division of Oral Health survey database. The following variables were studied: gender; macroregion of residence; missing teeth; percentage meeting the World Health Organization goal for oral health in the age group 65 to 74 years (50% having at least 20 natural teeth); presence of shortened dental arch; and number of posterior occluding pairs of teeth. The chi-square test assessed the association between categorical variables. The Kruskal-Wallis and Mann-Whitney tests were used to assess differences in the mean number of posterior occluding pairs of teeth by macroregion and gender. RESULTS: The elderly population had an average of 5.49 teeth (SD: 7.93), with a median of 0. The proportion of completely edentulous respondents was 54.7%. Complete edentulism was 18.2% in the upper arch and 1.9% in the lower arch. The World Health Organization goal was achieved by 10% of all respondents studied. However, only 2.7% had acceptable masticatory function and aesthetics (having at least a shortened dental arch), with a mean number of posterior occluding pairs of 6.94 (SD = 2.97). There were significant differences between men and women in the percentage of respondents meeting the World Health Organization goal and in the presence of shortened dental arch. There were also differences in shortened dental arch between macroregions. CONCLUSIONS: The Brazilian epidemiological oral health survey showed a high rate of edentulism and a low rate of shortened dental arch in the elderly population studied, suggesting significant functional and aesthetic impairment in all Brazilian macroregions, especially among women.
Abstract:
Dissertation presented to obtain the degree of Doctor in Educational Sciences, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa
Abstract:
It is commonly believed that majority voting encourages parties to cluster around the centre of the political space, whereas proportional representation (PR) systems foster greater ideological divergence. The theoretical arguments for these expectations go back to the work of Downs (1957) and Duverger (1954). More recent studies, however, have produced quite contradictory empirical findings. In this paper I test whether similar arguments hold true for the positioning of candidates campaigning under different electoral systems. The elections for the two chambers of the Swiss Parliament and the data from the Swiss Electoral Studies (SELECTS) and the Swiss Voting Advice Application (VAA) smartvote offer an excellent, almost laboratory-like, opportunity to do so empirically. The analyses clearly show that the theoretical claim that majority voting necessarily fosters more moderate positions finds no support: the candidates for the Council of States, elected in a majority system, are no more moderate than their fellow party candidates for the National Council, who are elected in a PR system.
Abstract:
This research investigated whether negative brand associations attached to Russian hockey players affect their draft rankings in the National Hockey League (NHL) Entry Draft. A quantitative analysis based on various regression model specifications was used to test whether Russian players were drafted comparably to their counterparts in the NHL Entry Draft. The data consisted of the NHL draft picks between 1993 and 2013, together with their performance statistics and physical characteristics. The results suggested that Russian players were drafted roughly equally to their counterparts from other countries. Moreover, Russian players who played in the CHL before the draft were actually drafted higher than Canadians who played in the same league. Hence, the negative brand associations attached to Russians were unlikely to affect their draft rankings. This study redefined the so-called “Russian Factor” from a notion that allegedly damages Russian players’ rankings to one that enhances them.
Abstract:
Background and Purpose: Oropharyngeal dysphagia is a common manifestation of acute stroke. Aspiration resulting from difficulties in swallowing is a symptom that should be considered because of the frequent occurrence of aspiration pneumonia, which can hinder the patient's recovery, cause clinical complications and even lead to the patient's death. Early clinical evaluation of swallowing disorders can help define approaches and avoid oral feeding that may be detrimental to the patient. This study aimed to create an algorithm, based on the National Institutes of Health Stroke Scale (NIHSS), to identify patients at risk of developing dysphagia following acute ischemic stroke, in order to decide on the safest way of feeding and minimize the complications of stroke. Methods: Clinical assessment of swallowing was performed in 50 patients admitted to the emergency unit of the University Hospital, Faculty of Medicine of Ribeirao Preto, Sao Paulo, Brazil, with a diagnosis of ischemic stroke, within 48 h of symptom onset. Patients, 25 females and 25 males with a mean age of 64.90 years (range 26-91 years), were evaluated consecutively. An anamnesis was taken before each patient's participation in the study in order to exclude a prior history of deglutition difficulties. For the functional assessment of swallowing, three food consistencies were used: pasty, liquid and solid. After clinical evaluation, we determined whether dysphagia was present. For statistical analysis we used the Fisher exact test to verify the association between the variables. To assess whether the NIHSS score characterizes a risk factor for dysphagia, a receiver operating characteristic (ROC) curve was constructed to obtain sensitivity and specificity. Results: Dysphagia was present in 32% of the patients. The clinical evaluation is a reliable method for detecting swallowing difficulties.
However, the predictors of risk for the swallowing function must be balanced, and the level of consciousness and the presence of preexisting comorbidities should be considered. Gender, age and cerebral hemisphere involved were not significantly associated with the presence of dysphagia. NIHSS, Glasgow Coma Scale, and speech and language changes had a statistically significant predictive value for the presence of dysphagia. Conclusions: The NIHSS is highly sensitive (88%) and specific (85%) in detecting dysphagia; a score of 12 may be considered as the cutoff value. The creation of an algorithm to detect dysphagia in acute ischemic stroke appears to be useful in selecting the optimal feeding route while awaiting a specialized evaluation. Copyright (C) 2012 S. Karger AG, Basel
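The cutoff logic described above (an NIHSS score of 12 or more flagging risk of dysphagia) reduces to a simple sensitivity/specificity computation, sketched below; the patient data are invented for the example, and the 88%/85% figures reported in the abstract come from the study's own ROC analysis, not from this toy sample.

```python
def sens_spec(scores, has_dysphagia, cutoff):
    """Classify score >= cutoff as 'at risk' and compare with the observed
    outcome, returning (sensitivity, specificity)."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, has_dysphagia))
    fn = sum(s < cutoff and d for s, d in zip(scores, has_dysphagia))
    tn = sum(s < cutoff and not d for s, d in zip(scores, has_dysphagia))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, has_dysphagia))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic illustration only: 8 patients, NIHSS scores and dysphagia status
scores = [3, 5, 14, 16, 8, 13, 2, 18]
dysphagia = [False, False, True, True, False, True, False, True]
sens, spec = sens_spec(scores, dysphagia, cutoff=12)
print(sens, spec)  # 1.0 1.0 on this toy data
```

An ROC analysis like the study's would simply repeat this computation over all candidate cutoffs and pick the one with the best sensitivity/specificity trade-off.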
Abstract:
Purpose. The measurement of quality of life has become an important topic in healthcare and in the allocation of limited healthcare resources. Improving the quality of life (QOL) of cancer patients is paramount. Cataract removal and lens implantation appear to improve the well-being of cancer patients, though a formal measurement has never been published in the US literature. In the current study, the National Eye Institute Visual Functioning Questionnaire (NEI-VFQ-25), a validated vision quality-of-life metric, was used to study the change in vision-related quality of life in cancer patients who underwent cataract extraction with intraocular lens implantation. Methods. Under an IRB-approved protocol, cancer patients who underwent cataract surgery with intraocular lens implantation (by a single surgeon) from December 2008 to March 2011, and who had completed a pre- and postoperative NEI-VFQ-25, were retrospectively reviewed. Postoperative data were collected at the routine 4-6 week post-op visit. Patients' demographics, cancer history, pre- and postoperative ocular examinations, visual acuities, and the twelve components of the NEI-VFQ-25 were included in the evaluation. The responses were evaluated using the Student t test, Spearman correlation and Wilcoxon signed rank test. Results. 63 cases of cataract surgery (from 54 patients) at the MD Anderson Cancer Center were included in the study. Cancer patients had a significant improvement in visual acuity (P < 0.0001) postoperatively, along with a significant increase in vision-related quality of life (P < 0.0001). Patients also had a statistically significant improvement in ten of the twelve subcategories addressed in the NEI-VFQ-25. Conclusions. In our study, cataract extraction with intraocular lens implantation had a significant impact on the vision-related quality of life of cancer patients. Although this study has a small sample size, it serves as a positive pilot study to evaluate and quantify the impact of a surgical intervention on QOL in cancer patients and may help in designing a larger study to measure vision-related QOL per dollar of healthcare cost in cancer patients.
Abstract:
Background: Flu vaccine composition is reformulated on a yearly basis. As such, the vaccine effectiveness (VE) from previous seasons cannot be assumed to hold in subsequent years, and it is necessary to monitor the VE for each season. This study (MonitorEVA, monitoring vaccine effectiveness) assesses the feasibility of using the national influenza surveillance system (NISS) for monitoring influenza VE. Material and methods: Data were collected within the NISS during the 2004 to 2014 seasons. We used a case-control design in which laboratory-confirmed incident influenza-like illness (ILI) patients (cases) were compared to controls (influenza-negative ILI). Eligible individuals were those of any age who consulted a general practitioner or emergency room with ILI symptoms and had a swab collected within seven days of symptom onset. VE was estimated as 1 minus the odds ratio of being vaccinated in cases versus controls, adjusted for age and month of onset by logistic regression. Sensitivity analyses were conducted to test the possible effect of assumptions on vaccination status, ILI definition and timing of swabs (<3 days after onset). Results: During the 2004-2014 period, a total of 5,302 ILI patients were recorded, but 798 were excluded for not complying with the inclusion criteria. After data restriction, the sample size in both groups exceeded 148 individuals per season, the minimum needed to detect a VE of at least 50% with a 5% significance level and 80% power. Crude VE point estimates were under 45% in the 2004/05, 2005/06, 2011/12 and 2013/14 seasons; between 50% and 70% in the 2006/07, 2008/09 and 2010/11 seasons; and above 70% in the 2007/08 and 2012/13 seasons. From season 2006/07 to 2013/14, all crude VE estimates were statistically significant. After adjustment for age group and month of onset, the VE point estimates decreased, and only the 2008/09, 2012/13 and 2013/14 seasons remained significant.
Discussion and Conclusions: MonitorEVA was able to provide VE estimates for all seasons, including the pandemic, indicating whether the VE was higher than 70% or lower than 50%. Compared with other observational studies, MonitorEVA estimates were comparable but less precise, and the VE estimates were in accordance with the antigenic match between the circulating virus and vaccine strains. Given the sensitivity results, we propose a MonitorEVA based on: a) vaccination status defined independently of the number of days between vaccination and symptom onset; b) use of all ILI data independent of the definition; c) stratification of VE according to the time between onset and swab (<3 and ≥3 days).
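The case-control VE formula used in this study (VE = 1 minus the odds ratio of vaccination in cases versus controls) can be sketched for the crude, unadjusted case; the 2x2 counts below are hypothetical, and the study's adjusted estimates additionally condition on age group and month of onset via logistic regression.

```python
def crude_ve(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    """Crude vaccine effectiveness from a 2x2 table:
    VE = 1 - odds ratio of vaccination in cases vs controls."""
    odds_ratio = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)
    return 1 - odds_ratio

# Hypothetical season: 20/180 cases vaccinated, 50/150 controls vaccinated
ve = crude_ve(20, 180, 50, 150)
print(f"crude VE = {ve:.0%}")  # crude VE = 67%
```

A crude VE near 0% (odds ratio near 1) would indicate a poor antigenic match between the circulating virus and that season's vaccine strains.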
Abstract:
The National Council Licensure Examination for Registered Nurses (NCLEX-RN) is the examination that all graduates of nursing education programs must pass to attain the title of registered nurse. Currently the NCLEX-RN passing rate is at an all-time low (81%) for first-time test takers (NCSBN, 2004), amidst a nationwide shortage of registered nurses (Glabman, 2001). Because of the critical need to supply greater numbers of professional nurses, and the potential accreditation ramifications that low NCLEX-RN passing rates can have on schools of nursing and graduates, this research study tests the effectiveness of a predictor model. The model is based on the theoretical framework of McClusky's (1959) theory of margin (ToM), with the hope that students found to be at risk of NCLEX-RN failure can be identified and remediated before taking the actual licensure examination. To date, no theory-based predictor model has been identified that predicts success on the NCLEX-RN. The model was tested using prerequisite course grades, nursing course grades and scores on standardized examinations for the 2003 associate degree nursing graduates at an urban community college (N = 235). Success was determined through the reporting of a pass on the NCLEX-RN examination by the Florida Board of Nursing. Point-biserial correlations tested model assumptions regarding variable relationships, while logistic regression was used to test the model's predictive power. Correlations among variables were significant, and the model accounted for 66% of the variance in graduates' success on the NCLEX-RN, with 98% prediction accuracy. Although certain prerequisite course grades and nursing course grades were significant to NCLEX-RN success, the overall model was most predictive at the conclusion of the academic program of study. The inclusion of the RN Assessment Examination, taken during the final semester of course work, was the most significant predictor of NCLEX-RN success. Success on the NCLEX-RN allows graduates to work as registered nurses, reflects positively on a school's academic performance record, and supports the appropriateness of the educational program's goals and objectives. The study's findings support other potential uses of McClusky's theory of margin as a predictor of program outcomes in other venues of adult education.
Abstract:
The prevalence of Arterial Hypertension (AHT) has increased worldwide, and preventive measures are insufficient, since only one third of the population is being treated. AHT is the primary cause of morbidity and mortality in the world. This article presents the first study on hypertension levels among the personnel of a distance-education university, based on the analysis of all medical consultations at the Costa Rican State University for Distance Education (Universidad Estatal a Distancia, UNED) as of December 15, 2007 (1,526 medical files). The population studied ranges from 20 to 70 years of age and comprises residents of the Greater Metropolitan Area (Costa Rica) with varied socioeconomic and academic levels. The Statgraphics Centurion XV software and the chi-square test were used to analyze variables such as treatment administered, sex, age, and type of work. Only 45 patients knew that they suffered from hypertension prior to their consultation with the university medical service, and 136 were treated with Enalapril and Hydrochlorothiazide. The number of hypertensive patients is higher among those who have worked at the institution for more than 20 years, especially those holding higher positions. No marked differences were found between men and women. It is concluded that the existence of a university medical service has permitted faculty and staff to satisfactorily control their blood pressure.
Abstract:
The aim of this study was to evaluate the ability of the BANA Test to detect different levels of Porphyromonas gingivalis, Treponema denticola and Tannerella forsythia, or their combinations, in subgingival samples at the initial diagnosis and after periodontal therapy. Periodontal sites with probing depths between 5 and 7 mm and clinical attachment levels between 5 and 10 mm, from 53 subjects with chronic periodontitis, were sampled at four time points: initial diagnosis (T0), and immediately (T1), 45 days (T2) and 60 days (T3) after scaling and root planing. The BANA Test and checkerboard DNA-DNA hybridization identified red-complex species in the subgingival biofilm. In all experimental periods, the highest frequencies of score 2 (checkerboard DNA-DNA hybridization) for P. gingivalis, T. denticola and T. forsythia were observed when strong enzymatic activity (BANA) was present (p < 0.01). The best agreement was observed at the initial diagnosis. The BANA Test sensitivity was 95.54% (T0), 65.18% (T1), 65.22% (T2) and 50.26% (T3). The specificity values were 12.24% (T0), 57.38% (T1), 46.27% (T2) and 53.48% (T3). The BANA Test is thus more effective for the detection of red-complex pathogens when bacterial levels are high, i.e. at the initial diagnosis of chronic periodontitis.