814 results for Explanatory Variables Effect
Abstract:
This paper presents a study of the effects of alcohol consumption on household income in Ireland using the Slán National Health and Lifestyle Survey 2007 dataset, accounting for endogeneity and selection bias. Respondents are assigned to one of four categories based on the weekly drinking levels recommended by the Irish Health Promotion Unit: those who never drank, non-drinkers, moderate drinkers and heavy drinkers. A multinomial logit OLS two-step estimator is used to explain individuals' choice of drinking status and to correct for the selection bias that would arise because selection into a particular drinking category is endogenous. Endogeneity that may arise through the simultaneity of drinking status and income, whether through reverse causation between the two variables (income affecting alcohol consumption or alcohol consumption affecting income) or through unobserved heterogeneity, is also addressed. This paper finds that the household income of drinkers is higher than that of non-drinkers and of those who never drank. There is very little difference between the household income of moderate and heavy drinkers, with heavy drinkers earning slightly more. Weekly household income is €454.20 for those who never drank and €506.26 for non-drinkers, compared with €683.36 for moderate drinkers and €694.18 for heavy drinkers.
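As a rough sketch of how a two-step estimate of this kind can be implemented, the fragment below fits a multinomial logit for drinking status, builds a Lee (1983)-style selectivity-correction term, and includes it in category-specific OLS income regressions. The file name, column names, covariates and the exact correction form are assumptions for illustration, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical extract of the survey: drinking category (0 = never drank, 1 = non-drinker,
# 2 = moderate, 3 = heavy), log household income and covariates; all names are assumptions.
df = pd.read_csv("slan_2007.csv")
Z = sm.add_constant(df[["age", "sex", "education", "smoker"]])

# Step 1: multinomial logit for the choice of drinking status
mnl = sm.MNLogit(df["drink_cat"], Z).fit(disp=False)
p_hat = np.asarray(mnl.predict(Z))                      # predicted probability of each category
p_own = p_hat[np.arange(len(df)), df["drink_cat"].astype(int)]

# Lee (1983)-style selectivity-correction term for the chosen category
df["lambda_sel"] = norm.pdf(norm.ppf(p_own)) / p_own

# Step 2: OLS income equation within each drinking category, including the correction term;
# "smoker" is kept out of the income equation as an illustrative exclusion restriction
for cat, sub in df.groupby("drink_cat"):
    X = sm.add_constant(sub[["age", "sex", "education", "lambda_sel"]])
    print(cat, sm.OLS(sub["log_income"], X).fit().params["lambda_sel"])
```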
Abstract:
There appears to be a limited but growing body of research on the sequential analysis/treatment of multiple types of evidence. The development of an integrated forensic approach is necessary to maximise evidence recovery and to ensure that a particular treatment is not detrimental to other types of evidence. This study aims to assess the effect of latent and blood mark enhancement techniques (e.g. fluorescence, ninhydrin, acid violet 17, black iron-oxide powder suspension) on the subsequent detection of saliva. Saliva detection was performed by means of a presumptive test (Phadebas®) in addition to analysis by a rapid stain identification (RSID) kit test and confirmatory DNA testing. Additional variables included a saliva depletion series and a number of different substrates with varying porosities as well as different ageing periods. Examination and photography under white light and fluorescence were carried out prior to and after chemical enhancement. All enhancement techniques (except Bluestar® Forensic Magnum luminol) employed in this study resulted in an improved visualisation of the saliva stains, although the inherent fluorescence of saliva was sometimes blocked after chemical treatment. The use of protein stains was, in general, detrimental to the detection of saliva. Positive results were less pronounced after the use of black iron-oxide powder suspension, cyanoacrylate fuming followed by BY40 and ninhydrin when compared to the respective positive controls. The application of Bluestar® Forensic Magnum luminol and black magnetic powder proved to be the least detrimental, with no significant difference between the test results and the positive controls. The use of non-destructive fluorescence examination provided good visualisation; however, only the first few marks in the depletion were observed. Of the samples selected for DNA analysis, only depletion 1 samples contained sufficient DNA quantity for further processing using standard methodology. The 28-day delay between sample deposition and collection resulted in a 5-fold reduction in the amount of useable DNA. When sufficient DNA quantities were recovered, enhancement techniques did not have a detrimental effect on the ability to generate DNA profiles. This study aims to contribute to a strategy for maximising evidence recovery and efficiency for the detection of latent marks and saliva. The results demonstrate that most of the enhancement techniques employed in this study were not detrimental to the subsequent detection of saliva by means of presumptive, confirmatory and DNA tests.
Abstract:
The purpose of this study was to examine relationships between multiple characteristics of maternal employment, parenting practices, and adolescents’ transition outcomes to young adulthood. The research addressed four main research questions. First, are the characteristics of maternal work (i.e., hours worked, multiple jobs held, work schedules, earnings, and occupation) related to adolescents’ enrollment in post-secondary education, employment, or involvement in neither of these types of activities as young adults? Second, are the work characteristics related to parental involvement and monitoring, and are the parenting practices related to adolescents’ transition outcomes? Third, do parental involvement and monitoring mediate any relationships between the characteristics of maternal employment and adolescents’ transition outcomes? Finally, do any associations between characteristics of maternal employment and parenting practices and adolescents’ transition outcomes vary by poverty status, race/ethnicity, or gender? To address these research questions, secondary data analysis was conducted, using data from the National Longitudinal Survey of Youth (NLSY) from 1998 through 2004. The study sample consisted of 849 youths who were 15 through 17 years of age in either 1998 or 2000, and were 19 through 21 years of age when their transition outcomes in young adulthood were measured four years later. Multinomial logistic and ordinary least squares regression models were estimated to answer the research questions. Study findings indicated that of the maternal work characteristics, mothers’ multiple jobs held, occupation, and work schedule were significantly related to the youths’ transition outcomes. When mothers held multiple jobs for 1 to 25 weeks per year, and when mothers held jobs involving lower levels of occupational complexity, their youths were more likely to experience employment rather than post-secondary education. Adolescents whose mothers worked a standard work schedule were less likely to experience other types of transitions than post-secondary education. With regard to the effects of maternal employment on parenting practices, none of the maternal work variables were related to parental involvement, and only one variable, mothers working less than 40 hours per week, was negatively related to parental monitoring. In addition, when parents were more involved with their youths’ education, the youths were less likely to transition into employment and other types of transitions rather than post-secondary education. The parenting practices did not mediate the relation between the significant work variables (holding multiple jobs, work schedule, and occupation) and youths’ transition outcomes. Finally, none of the interactions between maternal work characteristics and poverty status, race/ethnicity, and gender met the criteria for determining significance; but in a series of sub-group analyses, some differences according to poverty status and gender were found. Despite the lack of mediation and moderation, the findings of this study have important implications for social policy and social work intervention. Based on the findings, suggestions are made in these areas to improve working mothers’ lives and their adolescents’ development and successful transition to adulthood. Finally, directions for future research are discussed.
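The mediation and moderation questions above lend themselves to a regression sketch like the one below, using the statsmodels formula interface; the data file, variable names and coding are assumptions invented for illustration, not the NLSY variables actually used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis frame: 'outcome' coded 0 = post-secondary education,
# 1 = employment, 2 = neither; work, parenting and demographic variables are assumed names.
df = pd.read_csv("nlsy_transitions.csv")

# Step 1: maternal work characteristics -> parenting practice (OLS), one mediator at a time
med = smf.ols("monitoring ~ hours_lt40 + multiple_jobs + std_schedule + occ_complexity",
              data=df).fit()

# Step 2: work characteristics plus mediator -> transition outcome (multinomial logit)
out = smf.mnlogit("outcome ~ multiple_jobs + std_schedule + occ_complexity + monitoring",
                  data=df).fit(disp=False)

# Moderation check: interaction of a work characteristic with poverty status
mod = smf.mnlogit("outcome ~ multiple_jobs * poverty + std_schedule + occ_complexity",
                  data=df).fit(disp=False)

print(med.summary())
print(out.summary())
```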
Abstract:
A great deal of scholarly research has addressed the issue of dialect mapping in the United States. These studies, usually based on phonetic or lexical items, aim to present an overall picture of the dialect landscape. But what is often missing in these types of projects is an attention to the borders of a dialect region and to what kinds of identity alignments can be found in such areas. This lack of attention to regional and dialect border identities is surprising, given the salience of such borders for many Americans. This salience is also ignored among dialectologists, as nonlinguists’ perceptions and attitudes have been generally assumed to be secondary to the analysis of “real” data, such as the phonetic and lexical variables used in traditional dialectology. Louisville, Kentucky is considered as a case study for examining how dialect and regional borders in the United States impact speakers’ linguistic acts of identity, especially the production and perception of such identities. According to Labov, Ash, and Boberg (2006), Louisville is one of the northernmost cities to be classified as part of the South. Its location on the Ohio River, on the political and geographic border between Kentucky and Indiana, places Louisville on the isogloss between Southern and Midland dialects. Through an examination of language attitude surveys, mental maps, focus group interviews, and production data, I show that identity alignments in borderlands are neither simple nor straightforward. Identity at the border is fluid, complex, and dynamic; speakers constantly negotiate and contest their identities. The analysis shows the ways in which Louisvillians shift between Southern and non-Southern identities, in the active and agentive expression of their amplified awareness of belonging brought about by their position on the border.
Abstract:
Exhaled breath (EB) and exhaled breath condensate (EBC) contain numerous volatile gases and a wide-array of non-volatile compounds, several of which have been investigated as markers of lower airway inflammation in human and veterinary medicine and have been used to diagnose and monitor diseases associated with pulmonary inflammation. The identification of reliable biomarkers within EB and EBC is an active research focus with the common goal of establishing non-invasive and repeatable assessment of respiratory health and disease in mammals. The application of EB and EBC analysis holds considerable appeal in the investigation of respiratory disease in Thoroughbred racehorses, as inflammatory airway disease (IAD) is a common cause for poor performance in this population of animals. This study documented that EB and EBC samples can be safely collected from Thoroughbred racehorses in their own environment, without adverse effect or interference with the horse’s training regimen. The use of off-line collection and analysis of exhaled gases via chemiluminescence is suitable for the measurement of exhaled carbon monoxide, but is not appropriate for analyzing exhaled nitric oxide in horses. Significant changes in the concentration of exhaled CO and the pH of EBC occurred in response to strenuous exercise and when exercising in different environmental temperatures. Exhaled CO was associated with tracheal mucus score (and the number of neutrophils in the mucus) and EBC pH was significantly different in horses with evidence of neutrophilic IAD compared to horses without IAD. Numerous physiological and environmental variables were identified as confounding factors in the assessment of both exhaled CO and EBC pH, with respiratory rate prior to EB collection, and during EBC collection, consistently identified as an explanatory variable influencing the concentration of exhaled biomarkers. Further studies in EB and EBC analysis in horses need to focus on objectively accounting for key respiratory dynamics during sample collection.
Abstract:
The underwater environment is an extreme environment that requires a process of human adaptation with specific psychophysiological demands to ensure survival and productive activity. From the standpoint of existing models of intelligence, personality and performance, in this explanatory study we have analyzed the contribution of individual differences in explaining the adaptation of military personnel in a stressful environment. Structural equation analysis was employed to verify a model representing the direct effects of psychological variables on individual adaptation to an adverse environment, and we have been able to confirm, during basic military diving courses, the structural relationships among these variables and their ability to predict a third of the variance of a criterion that has been studied very little to date. In this way, we have confirmed in a sample of professionals (N = 575) the direct relationship of emotional adjustment, conscientiousness and general mental ability with underwater adaptation, as well as the inverse relationship of emotional reactivity. These constructs are the psychological basis for working under water, contributing to an improved adaptation to this environment and promoting risk prevention and safety in diving activities.
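As a rough illustration of the structural-equation approach described above, a path model with the four predictors and the adaptation criterion could be specified with the semopy package as follows. The variable names and the data file are assumptions; in the actual study the constructs would typically be latent variables measured by questionnaire items, which semopy can express with "=~" measurement equations.

```python
import pandas as pd
from semopy import Model

# Hypothetical observed scores for each construct; a fuller model would define latent
# variables with measurement equations, e.g. "emotional_adjustment =~ item1 + item2 + item3".
desc = """
adaptation ~ emotional_adjustment + conscientiousness + general_mental_ability + emotional_reactivity
"""

df = pd.read_csv("divers.csv")   # assumed file with one row per participant (N = 575 in the study)
model = Model(desc)
model.fit(df)
print(model.inspect())           # path estimates; the study reports about a third of criterion variance explained
```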
Abstract:
Roads represent a new source of mortality due to the risk of animal-vehicle collision, threatening long-term population viability. The risk of road mortality for each species depends on the characteristics of the roads and on the bioecological and life-history characteristics of the species. In this study we assess the importance of climatic parameters (temperature and precipitation) together with traffic and life-history traits, and examine the role of drought in barn owl population viability, which is also affected by road mortality, under three scenarios: high mobility, high population density, and the combination of the two (mixed). For the first objective we correlated the several parameters (climate, traffic and life-history traits) with road-kills and used the most correlated variables to build a predictive generalized linear mixed model (GLMM), as sketched below. Using a population model we then evaluated barn owl population viability in all three scenarios. The model revealed that precipitation, traffic and dispersal have a negative relationship with road-kills, although the relationships were not significant. The scenarios showed different results: the high mobility scenario showed greater population depletion, more fluctuations over time and greater risk of extinction; the high population density scenario showed a more stable population with lower risk of extinction; and the mixed scenario showed results similar to the first scenario. Climate seems to play an indirect role in barn owl road-kills: it may influence prey availability, which in turn influences barn owl reproductive success and activity. The high mobility scenario also showed a greater negative impact on population viability, which may affect resilience to other stochastic events. Future research should take into account climate and how it may influence species' life cycles and activity periods for a more complete approach to road-kills, and should inform mitigation decisions, which might include improving prey habitat quality.
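As an illustration of the kind of model referred to above, the sketch below fits a Poisson regression of road-kill counts on climate, traffic and dispersal variables with statsmodels; a full GLMM would add random effects (for example per road segment) via a mixed-model routine. The file and column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical road-kill counts per survey period and road segment; column names are assumptions.
df = pd.read_csv("barn_owl_roadkill.csv")

# Fixed-effects Poisson regression relating road-kill counts to climate, traffic and dispersal.
# The study's GLMM would additionally include random effects (e.g. per road segment),
# which a mixed-model routine could add on top of this specification.
glm = smf.glm("roadkills ~ precipitation + temperature + traffic + dispersal",
              data=df, family=sm.families.Poisson()).fit()
print(glm.summary())
```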
Abstract:
This thesis describes a collection of studies into the electrical response of a III-V MOS stack comprising metal/GaGdO/GaAs layers as a function of fabrication process variables and the findings of those studies. As a result of this work, areas of improvement in the gate process module of a III-V heterostructure MOSFET were identified. Compared to traditional bulk silicon MOSFET design, one featuring a III-V channel heterostructure with a high-dielectric-constant oxide as the gate insulator provides numerous benefits, for example: the insulator can be made thicker for the same capacitance, the operating voltage can be made lower for the same current output, and improved output characteristics can be achieved without reducing the channel length further. It is known that transistors composed of III-V materials are particularly susceptible to damage induced by radiation and plasma processing. These devices utilise sub-10 nm gate dielectric films, which are prone to contamination, degradation and damage. Therefore, throughout the course of this work, process damage and contamination issues, as well as various techniques to mitigate or prevent them, have been investigated through comparative studies of III-V MOS capacitors and transistors comprising various forms of metal gates, various thicknesses of GaGdO dielectric, and a number of GaAs-based semiconductor layer structures. Transistors fabricated before this work commenced showed problems with threshold voltage control. Specifically, MOSFETs designed for normally-off (VTH > 0) operation exhibited below-zero threshold voltages. With the results obtained during this work, it was possible to gain an understanding of why the transistor threshold voltage shifts as the gate length decreases and of what pulls the threshold voltage downwards, preventing normally-off device operation. Two main culprits for the negative VTH shift were found. The first was radiation damage induced by the gate metal deposition process, which can be prevented by slowing down the deposition rate. The second was the layer of gold added on top of platinum in the gate metal stack, which reduces the effective work function of the whole gate due to its electronegativity properties. Since the device was designed for a platinum-only gate, this could explain the below-zero VTH. This could be prevented either by using a platinum-only gate, or by matching the layer structure design and the actual gate metal used for future devices. Post-metallisation thermal anneal was shown to mitigate both these effects. However, if post-metallisation annealing is used, care should be taken to ensure it is performed before the ohmic contacts are formed, as the thermal treatment was shown to degrade the source/drain contacts. In addition, the programme of studies this thesis describes also found that if the gate contact is deposited before the source/drain contacts, it causes a shift in threshold voltage towards negative values as the gate length decreases, because the ohmic contact anneal process affects the properties of the underlying material differently depending on whether it is covered with the gate metal or not. In terms of surface contamination, this work found that it causes device-to-device parameter variation, and that a plasma clean is therefore essential. This work also demonstrated that the parasitic capacitances in the system, namely the contact-periphery-dependent gate-ohmic capacitance, play a significant role in the total gate capacitance.
This is true to such an extent that reducing the distance between the gate and the source/drain ohmic contacts in the device would help shift the threshold voltages closer towards the designed values. The findings made available by the collection of experiments performed for this work have two major applications. Firstly, these findings provide useful data for the study of the possible phenomena taking place inside the metal/GaGdO/GaAs layers and interfaces as a result of the chemical processes applied to them. In addition, these findings allow recommendations as to how best to approach fabrication of devices utilising these layers.
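For context on why a lower effective gate work function pulls the threshold voltage negative, the standard textbook relation for an n-channel MOS device (a simplified picture, not the device-specific model used in the thesis) can be written as:

```latex
V_{TH} \;=\; V_{FB} + 2\phi_F + \frac{\sqrt{2 q \varepsilon_s N_A \,(2\phi_F)}}{C_{ox}},
\qquad
V_{FB} \;=\; \Phi_M - \Phi_S - \frac{Q_{ox}}{C_{ox}}
```

In this simple picture, replacing part of the gate with a lower-work-function metal reduces \(\Phi_M\), and positive oxide charge introduced by radiation damage increases \(Q_{ox}\); both lower \(V_{FB}\) and hence \(V_{TH}\), consistent with the negative threshold shifts described above.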
Abstract: Quantitative Methods (QM) is a compulsory course in the Social Science program in CEGEP. Many QM instructors assign a number of homework exercises to give students the opportunity to practice the statistical methods, which enhances their learning. However, traditional written exercises have two significant disadvantages. The first is that the feedback process is often very slow. The second disadvantage is that written exercises can generate a large amount of correcting for the instructor. WeBWorK is an open-source system that allows instructors to write exercises which students answer online. Although originally designed to write exercises for math and science students, WeBWorK programming allows for the creation of a variety of questions which can be used in the Quantitative Methods course. Because many statistical exercises generate objective and quantitative answers, the system is able to instantly assess students’ responses and tell them whether they are right or wrong. This immediate feedback has been shown to be theoretically conducive to positive learning outcomes. In addition, the system can be set up to allow students to re-try the problem if they got it wrong. This has benefits both in terms of student motivation and reinforcing learning. Through the use of a quasi-experiment, this research project measured and analysed the effects of using WeBWorK exercises in the Quantitative Methods course at Vanier College. Three specific research questions were addressed. First, we looked at whether students who did the WeBWorK exercises got better grades than students who did written exercises. Second, we looked at whether students who completed more of the WeBWorK exercises got better grades than students who completed fewer of the WeBWorK exercises. Finally, we used a self-report survey to find out what students’ perceptions and opinions were of the WeBWorK and the written exercises. For the first research question, a crossover design was used in order to compare whether the group that did WeBWorK problems during one unit would score significantly higher on that unit test than the other group that did the written problems. We found no significant difference in grades between students who did the WeBWorK exercises and students who did the written exercises. The second research question looked at whether students who completed more of the WeBWorK exercises would get significantly higher grades than students who completed fewer of the WeBWorK exercises. The straight-line relationship between number of WeBWorK exercises completed and grades was positive in both groups. However, the correlation coefficients for these two variables showed no real pattern. Our third research question was investigated by using a survey to elicit students’ perceptions and opinions regarding the WeBWorK and written exercises. Students reported no difference in the amount of effort put into completing each type of exercise. Students were also asked to rate each type of exercise along six dimensions and a composite score was calculated. Overall, students gave a significantly higher score to the written exercises, and reported that they found the written exercises were better for understanding the basic statistical concepts and for learning the basic statistical methods. However, when presented with the choice of having only written or only WeBWorK exercises, slightly more students preferred or strongly preferred having only WeBWorK exercises. 
The results of this research suggest that the advantages of using WeBWorK to teach Quantitative Methods are variable. The WeBWorK system offers immediate feedback, which often seems to motivate students to try again if they do not have the correct answer. However, this does not necessarily translate into better performance on the written tests and on the final exam. What has been learned is that the WeBWorK system can be used by interested instructors to enhance student learning in the Quantitative Methods course. Further research may examine more specifically how this system can be used more effectively.
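A minimal sketch of the second research question's analysis (correlating the number of WeBWorK exercises completed with grades, within each crossover group); the data file and column names are assumptions for illustration.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-student records: crossover group, number of WeBWorK exercises
# completed, and unit-test grade; the file and column names are assumptions.
df = pd.read_csv("webwork_grades.csv")

for group, sub in df.groupby("group"):
    r, p = pearsonr(sub["exercises_completed"], sub["grade"])
    print(f"group {group}: r = {r:.2f}, p = {p:.3f}")
```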
Abstract:
This research is part of the field of organizational studies, focusing on organizational purchase behavior and, specifically, on interorganizational trust in purchasing. This topic is current and relevant because it addresses the development of good buyer-supplier relations, which increase the exchange of information, increase the length of the relationship, reduce hierarchical controls and improve performance. Furthermore, although there is a vast literature on trust, the scientific work that deals specifically with interorganizational trust still requires further research to synthesize and validate the variables that generate this phenomenon. In this sense, this investigation seeks to explain the antecedents of interorganizational trust through the relationships between the variables operational performance, organizational characteristics, shared values and interpersonal relationships in purchases by manufacturing industries, in order to develop a more robust and consensual literature that encompasses the current sociological and economic views and considers the effect of interpersonal relationships on this phenomenon. This proposal constitutes a new vision of the antecedents of interorganizational trust, identified as significant in the quantitative models of Morgan and Hunt (1994), Doney and Cannon (1997), Zhao and Cavusgil (2006) and Nyaga, Whipple and Lynch (2011), as well as in the qualitative analysis of Tacconi et al. (2011). With regard to methodological aspects, the study takes the form of a descriptive, causal survey with theoretical and empirical features. As for its nature, the investigation, explanatory in character, followed a quantitative approach, using exploratory factor analysis and structural equation modeling (SEM) with the IBM SPSS Amos 18.0 software, the maximum likelihood method, and bootstrapping techniques. The unit of analysis was the buyer-supplier relationship, in which the object under investigation was the supplier organization from the point of view of the purchasing company. 237 valid questionnaires were collected from key informants, using simple random sampling, in manufacturing industries (SIC 10-33) located in the city and region of Natal. The first results of the descriptive analysis demonstrate the phenomenon of interorganizational trust, in which purchasing firms believe in and feel secure about the supplier. This was shown at high levels of intensity, predominantly among the vendors that supply the company with materials used directly in the production process. The exploratory and confirmatory factor analyses, performed on each variable separately, generated a more consistent set of observed and latent variables, giving rise to a model that needed to be re-specified. This re-specified model consists of paths that were positive, with a good fit, satisfactory composite reliability and extracted variance, and convergent and discriminant validity, in which the factor loadings are significant and have strong explanatory power. Given the findings that support the re-specified model, suggesting a high probability that this model is well suited to the study population, the results support the explanation that interorganizational trust in purchasing depends directly on interpersonal relationships, shared values and operational performance, and indirectly on personal relationships, social networks, organizational characteristics, and the physical and relational aspects of performance.
It is concluded that this trust can be explained by a set of interactions between these three determinants, with the focus on interpersonal relationships, which had the largest path coefficient among the factors under study.
Abstract:
Background and Purpose—High blood pressure (BP) is common in acute ischemic stroke and is independently associated with a poor functional outcome. However, the management of BP acutely remains unclear because no large trials have been completed. Methods—The factorial PRoFESS secondary stroke prevention trial assessed BP-lowering and antiplatelet strategies in 20 332 patients; 1360 were enrolled within 72 hours of ischemic stroke and received telmisartan (angiotensin receptor antagonist, 80 mg/d; n=647) vs placebo (n=713). For this nonprespecified subgroup analysis, the primary outcome was functional outcome at 30 days; secondary outcomes included death, recurrence, and hemodynamic measures at up to 90 days. Analyses were adjusted for baseline prognostic variables and antiplatelet assignment. Results—Patients were representative of the whole trial (age 67 years, male 65%, baseline BP 147/84 mm Hg, small artery disease 60%, NIHSS 3) and baseline variables were similar between treatment groups. The mean time from stroke to recruitment was 58 hours. Combined death or dependency (modified Rankin scale: OR, 1.03; 95% CI, 0.84–1.26; P=0.81), death (OR, 1.05; 95% CI, 0.27–4.04), and stroke recurrence (OR, 1.40; 95% CI, 0.68–2.89; P=0.36) did not differ between the treatment groups. In comparison with placebo, telmisartan lowered BP (141/82 vs 135/78 mm Hg; a difference of 6 to 7 mm Hg systolic and 2 to 4 mm Hg diastolic; P<0.001), pulse pressure (by 3 to 4 mm Hg; P=0.002), and rate-pressure product (by 466 mm Hg·bpm; P=0.0004). Conclusion—Treatment with telmisartan in 1360 patients with acute mild ischemic stroke and mildly elevated BP appeared to be safe with no excess in adverse events, was not associated with a significant effect on functional dependency, death, or recurrence, and modestly lowered BP.
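A minimal sketch of how an adjusted odds ratio of this kind can be obtained, using a binary logistic regression of 30-day death or dependency on treatment plus baseline covariates; the data file, column names and covariate set are assumptions for illustration, not the trial's actual analysis plan.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical subgroup dataset: one row per patient, binary 30-day death-or-dependency
# outcome, treatment indicator and baseline covariates (all names are assumptions).
df = pd.read_csv("profess_acute_subgroup.csv")

logit = smf.logit("death_or_dependency ~ telmisartan + age + baseline_nihss + "
                  "baseline_sbp + antiplatelet_arm", data=df).fit(disp=False)

or_est = np.exp(logit.params["telmisartan"])
ci_low, ci_high = np.exp(logit.conf_int().loc["telmisartan"])
print(f"adjusted OR {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```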
Abstract:
Freeze drying technology can give good quality attributes to vegetables and fruits in terms of color, nutrition, volume, rehydration kinetics and stability during storage, among others, when compared with solely air-dried ones. However, published scientific work has shown that treatments applied before and after air dehydration are effective in improving food attributes and quality. Therefore, the hypothesis of the present thesis was based on an extensive review of scientific work showing that it is possible to apply a pre-treatment and a post-treatment to food products, combined with conventional air drying, aiming to come close to, or even exceed, the quality that a freeze-dried product can give. Such attributes are enzymatic inactivation, stability during storage, drying and rehydration kinetics, color, nutrition, volume and texture/structure. With regard to pre-treatments, the ones studied in the present work were: water blanching, steam blanching, ultrasound, freezing, high pressure and osmotic dehydration. Pulsed electric field was also studied, but its effects on food attributes are not described in detail. Basically, water and steam blanching showed to be adequate to inactivate enzymes in order to prevent enzymatic browning and preserve product quality during long storage periods. With regard to the ultrasound pre-treatment, the published results indicate that ultrasound is effective in reducing subsequent drying times and improving rehydration kinetics and color retention. On the other hand, studies showed that ultrasound allows sugar losses and, in some cases, can lead to cell disruption. For the freezing pre-treatment, an overall conclusion was difficult to draw for some food attributes, since each fruit or vegetable is unique and freezing involves many variables. However, for the cases studied, freezing showed to be a pre-treatment able to enhance rehydration kinetics and color attributes. The high pressure pre-treatment was shown to inactivate enzymes, improving the storage stability of food, and to perform positively in terms of rehydration. For other attributes, when high pressure technology was applied, the literature showed divergent results according to the crops used. Finally, osmotic dehydration has been widely used in food processing to incorporate a desired salt or sugar present in aqueous solution into the cellular structure of the food matrix (improving the nutrition attribute). Moreover, osmotic dehydration leads to shorter drying times, and the impregnation of solutes during osmosis strengthens the cellular structure of the food. In the case of post-treatments, puffing and a newer technology called instant controlled pressure drop (DIC) were reported in the literature as treatments able to improve diverse food attributes. Basically, the two technologies are similar in that the product is submitted to a high-pressure step, and the process can make use of different heating media such as CO2, steam, air and N2. However, there is a significant difference in the final stage of each process, which can compromise the quality of the final product. On the other hand, puffing and DIC are used to expand cellular tissues, improving the volume of food samples and helping in rehydration kinetics as a subsequent procedure, among others.
The effectiveness of such pre- and/or post-treatments depends on the state of the vegetables and fruits used, which in turn depends on their cellular structure, variety, origin, state (fresh, ripe, raw), harvesting conditions, etc. In conclusion, as seen in the open literature, the application of pre-treatments and post-treatments coupled with conventional air dehydration aims to give dehydrated food products a quality similar to that of freeze-dried ones. Throughout the present Master's thesis, the experimental data were removed for confidentiality reasons of the company Unilever R&D Vlaardingen.
Abstract:
The debate about the relationship between social capital and the welfare state has produced contradictory results for a long time. The crowding-out hypothesis states that the growth of the welfare state erodes social capital, as the action of the state leaves no room for non-regulated, spontaneous cooperation. In sharp contrast, the crowding-in hypothesis states that there is a virtuous circle between the size of the welfare state and the stock of social capital in a particular country, since generous welfare states (especially those relying on universalistic programs) will produce a particular sense of fairness and solidarity toward fellow citizens. Yet the empirical evidence testing the explanatory power of these theories is mostly inconclusive. To further our knowledge of this puzzle, in this paper I focus specifically on the relationship between social trust and preferences for redistribution at the individual level in a sample of European countries belonging to different welfare state regimes.
Abstract:
Students with specific learning disabilities (SLD) typically learn less history content than their peers without disabilities and show fewer learning gains. Even when they are provided with the same instructional strategies, many students with SLD struggle to grasp complex historical concepts and content area vocabulary. Many strategies involving technology have been used in the past to enhance learning for students with SLD in history classrooms. However, very few studies have explored the effectiveness of emerging mobile technology in K-12 history classrooms. This study investigated the effects of mobile devices (iPads) as an active student response (ASR) system on the acquisition of U.S. history content by middle school students with SLD. An alternating treatments single subject design was used to compare the effects of two interventions. There were two conditions and a series of pretest probes in this study. The conditions were: (a) direct instruction and studying from handwritten notes using the interactive notebook strategy and (b) direct instruction and studying using the Quizlet App on the iPad. There were three dependent variables in this study: (a) percent correct on tests, (b) rate of correct responses per minute, and (c) rate of errors per minute. A comparative analysis suggested that both interventions (studying from interactive notes and studying using Quizlet on the iPad) had varying degrees of effectiveness in increasing the learning gains of students with SLD. In most cases, both interventions were equally effective. During both interventions, all of the participants increased their percentage correct and their rate of correct responses. Most of the participants decreased their rate of errors. The results of this study suggest that teachers of students with SLD should consider a post-lesson review in the form of mobile devices as an ASR system, or studying from handwritten notes, paired with existing evidence-based practices to facilitate students' knowledge of U.S. history. Future research should focus on the use of other interactive applications on various mobile operating platforms and on other social studies subjects, and should explore various testing formats such as oral question-answer and multiple choice.
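A minimal sketch of how the three dependent variables above could be computed from per-probe session data and compared across the two conditions; the file and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical probe log: one row per session with the condition ('notes' or 'quizlet'),
# counts of correct responses and errors, and probe duration in minutes.
probes = pd.read_csv("history_probes.csv")

probes["pct_correct"] = 100 * probes["n_correct"] / (probes["n_correct"] + probes["n_errors"])
probes["correct_per_min"] = probes["n_correct"] / probes["minutes"]
probes["errors_per_min"] = probes["n_errors"] / probes["minutes"]

# Condition-level summary, as one would inspect alongside the usual visual analysis
print(probes.groupby("condition")[["pct_correct", "correct_per_min", "errors_per_min"]].mean())
```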
Abstract:
Purpose: to determine whether pupil dilation affects biometric measurements and intraocular lens (IOL) power calculation made using the new swept-source optical coherence tomography-based optical biometer (IOLMaster 700©; Carl Zeiss Meditec, Jena, Germany). Procedures: eighty-one eyes of 81 patients evaluated for cataract surgery were prospectively examined using the IOLMaster 700© before and after pupil dilation with tropicamide 1%. The measurements made were: axial length (AL), central corneal thickness (CCT), aqueous chamber depth (ACD), lens thickness (LT), mean keratometry (MK), white-to-white distance (WTW) and pupil diameter (PD). Holladay II and SRK/T formulas were used to calculate IOL power. Agreement between measurement modes (with and without dilation) was assessed through intraclass correlation coefficients (ICC) and Bland-Altman plots. Results: mean patient age was 75.17 ± 7.54 years (range: 57–92). Of the variables determined, CCT, ACD, LT and WTW varied significantly according to pupil dilation. Excellent intraobserver correlation was observed between measurements made before and after pupil dilation. Mean IOL power calculated using the Holladay 2 and SRK/T formulas was unmodified by pupil dilation. Conclusions: the use of pupil dilation produces statistically significant yet not clinically significant differences in some IOLMaster 700© measurements. However, it does not affect mean IOL power calculation.
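A minimal sketch of the agreement analysis described above (intraclass correlation plus a Bland-Altman plot), shown here for axial length only; the data file and column names are assumptions for illustration.

```python
import pandas as pd
import pingouin as pg
import matplotlib.pyplot as plt

# Hypothetical paired measurements of axial length before/after dilation (one row per eye).
df = pd.read_csv("iolmaster700_al.csv")      # columns 'eye', 'al_pre', 'al_post' assumed

# Intraclass correlation: long format with eye as target and dilation state as "rater"
long = df.melt(id_vars="eye", value_vars=["al_pre", "al_post"],
               var_name="state", value_name="al")
print(pg.intraclass_corr(data=long, targets="eye", raters="state", ratings="al"))

# Bland-Altman: difference vs mean with 95% limits of agreement
mean_ = (df["al_pre"] + df["al_post"]) / 2
diff_ = df["al_pre"] - df["al_post"]
loa = 1.96 * diff_.std(ddof=1)
plt.scatter(mean_, diff_, s=10)
for y in (diff_.mean(), diff_.mean() - loa, diff_.mean() + loa):
    plt.axhline(y, linestyle="--")
plt.xlabel("Mean AL (mm)")
plt.ylabel("Difference in AL (mm)")
plt.show()
```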