861 results for History, Ancient--Study and teaching (Higher)--Massachusetts--Cambridge--18th century
Abstract:
The purpose of this Master's thesis is, on the one hand, to find out how CLIL (Content and Language Integrated Learning) teachers and English teachers perceive English and its use in teaching, and on the other hand, what they consider important in the subject teacher education in English being planned and piloted in the STEP Project at the University of Helsinki Department of Teacher Education. A further research question is what kind of language requirements teachers think CLIL teachers should meet. The research results are viewed in light of previous research and literature on CLIL education. Six teachers participated in this study. Two of them are English teachers in the comprehensive school, two are class teachers in bilingual elementary education, and two are subject teachers in bilingual education, one teaching in a lower secondary school and the other in an upper secondary school. One English teacher and one bilingual class teacher graduated from a pilot class teacher program in English that started at the University of Helsinki in the mid-1990s. The bilingual subject teachers are not trained in English but have learned the language elsewhere, which is a particular focus of interest in this study because a great number of CLIL teachers in Finland are expected to have no actual studies in English philology. This is a qualitative case study based on interviews, which were recorded and transcribed for ease of analysis. The English teachers do not always use English in their lessons and would not feel confident teaching another subject completely in English. All of the CLIL teachers trust their English skills in teaching, but the bilingual class teachers also use Finnish during lessons, either because some teaching material is in Finnish, because they feel that rules and instructions are understood better in the mother tongue, or because students' English skills are not strong enough.
One of the bilingual subject teachers is the only respondent who consciously uses only English in teaching and in discussions with students. Although teachers' good English skills are generally considered important, only the teachers who graduated from the class teacher education in English consider it important for CLIL teachers to have studies in English philology. Regarding the subject teacher education program in English, the respondents hope that its teachers will have strong enough English skills and that it will deliver what it promises. Having student teachers of different subjects study together is considered beneficial. The results of the study show that acquiring teaching material in English continues to be the teachers' own responsibility and a heavy burden for them, and there has, in fact, been little progress in the matter since the beginning of CLIL education. The bilingual subject teachers think, however, that using one's own material can give new inspiration to teaching and enable the use of various pedagogical methods. Although it is questionable whether the language competence requirements set for CLIL teachers by the Finnish Ministry of Education are adhered to, the study makes apparent that studies in English philology do not necessarily guarantee strong enough language skills for CLIL teaching; the teacher's own personality and self-confidence are also significant. Keywords: CLIL, bilingual education, English, subject teacher training, subject teacher education in English, STEP
Abstract:
Elkhorn Slough was first exposed to direct tidal forcing from the waters of Monterey Bay with the construction of Moss Landing Harbor in 1946. Elkhorn Slough is located midway between Santa Cruz and Monterey, close to the head of the Monterey Submarine Canyon, and follows a 10 km circuitous path inland from its entrance at Moss Landing Harbor. Today, Elkhorn Slough is a habitat and sanctuary for a wide variety of marine mammals, fish, and seabirds. The Slough also serves as a sink and pathway for various nutrients and pollutants. These attributes are directly or indirectly affected by its circulation and physical properties. Currents, tides, and physical properties of Elkhorn Slough have been observed on an irregular basis since 1970. Based on these observations, the physical characteristics of Elkhorn Slough are examined and summarized. Elkhorn Slough is an ebb-dominated estuary and, as a result, the rise and fall of the tides is asymmetric. Lower low water always follows higher high water, and this tidal asymmetry produces ebb currents that are stronger than flood currents. The presence of extensive mud flats and Salicornia marsh contributes to tidal distortion. Tidal distortion also produces several shallow-water constituents, including the M3, M4, and M6 overtides and the 2MK3 and MK3 compound tides. Tidal elevations and currents are approximately in quadrature; thus, the tides in Elkhorn Slough have some of the characteristics of a standing wave system. The temperature and salinity of lower Elkhorn Slough waters reflect, to a large extent, the influence of Monterey Bay waters, whereas the temperature and salinity of the waters of the upper Slough (>5 km from the mouth) are more sensitive to local processes. During the summer, temperature and salinity are higher in the upper Slough due to local heating and evaporation. Maximum tidal currents in Elkhorn Slough have increased from approximately 75 to 120 cm/s over the past 30 years.
This increase in current speed is primarily due to the change in tidal prism, which increased from approximately 2.5 to 6.2 x 10^6 m^3 between 1956 and 1993. The increase in tidal prism is the result of both rapid man-made changes to the Slough and the continuing process of tidal erosion. Because of the increase in tidal prism, the currents in Elkhorn Slough exhibit positive feedback, a process with uncertain consequences.
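The ebb dominance described above arises when a shallow-water overtide such as M4 distorts the principal M2 current. The sketch below superposes an M2 current with a hypothetical M4 overtide; the amplitudes and relative phase are illustrative choices, not values fitted to Elkhorn Slough observations.

```python
import math

# Minimal sketch of tidal asymmetry from an M4 overtide superposed on the
# principal M2 current. All parameters are hypothetical illustrations.
A2 = 1.0               # M2 current amplitude (relative units)
A4 = 0.2               # M4 overtide amplitude (hypothetical)
PHASE = -math.pi / 2   # M2-M4 relative phase chosen to produce ebb dominance

def current(t):
    """Along-channel current; positive = ebb (seaward), negative = flood."""
    return A2 * math.sin(t) + A4 * math.sin(2 * t + PHASE)

# Sample one full M2 cycle and compare peak ebb and peak flood speeds.
samples = [current(2 * math.pi * i / 10000) for i in range(10000)]
max_ebb = max(samples)
max_flood = -min(samples)
print(f"peak ebb = {max_ebb:.2f}, peak flood = {max_flood:.2f}")
```

With this relative phase the ebb peak exceeds the flood peak (1.2 vs. 0.8 in these units); other phase choices yield flood dominance instead, which is why the M2-M4 phase relationship is the standard diagnostic of tidal distortion.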
Abstract:
The northern quahog, Mercenaria mercenaria, ranges along the Atlantic Coast of North America from the Canadian Maritimes to Florida, while the southern quahog, M. campechiensis, ranges mostly from Florida to southern Mexico. The northern quahog was fished by native North Americans during prehistoric periods. They used the meats as food and the shells as scrapers and utensils. The European colonists copied the Indians' treading method, and they also used short rakes for harvesting quahogs. The Indians of southern New England and Long Island, N.Y., made wampum from quahog shells, used it for ornaments, and sold it to the colonists, who, in turn, traded it to other Indians for furs. During the late 1600s, 1700s, and 1800s, wampum was made in small factories for eventual trading with Indians farther west for furs. The quahoging industry has provided people in many coastal communities with a means of earning a livelihood and has given consumers a tasty, wholesome food, whether eaten raw, steamed, cooked in chowders, or as stuffed quahogs. More than a dozen methods and types of gear have been used in the last two centuries for harvesting quahogs. They include treading and using various types of rakes and dredges, both of which have undergone continuous improvements in design. Modern dredges are equipped with hydraulic jets, and one type has an escalator to bring the quahogs continuously to the boats. In the early 1900s, most provinces and states established regulations to conserve and maximize yields of their quahog stocks. These include a minimum size, now almost universally a 38-mm shell width, and can include gear limitations and daily quotas. The United States produces far more quahogs than either Canada or Mexico. The leading producer in Canada is Prince Edward Island. In the United States, New York, New Jersey, and Rhode Island lead in quahog production in the north, while Virginia and North Carolina lead in the south.
Connecticut and Florida were large producers in the 1990s. The State of Tabasco leads in Mexican production. In the northeastern United States, bays with large openings, and thus large exchanges of bay waters with ocean waters, have much larger quahog stocks and fisheries than bays with small openings and water exchanges. Quahog stocks in certified beds have been enhanced by transplanting stocks to them from uncertified waters and by planting seed grown in hatcheries, which grew in number from Massachusetts to Florida in the 1980s and 1990s.
Abstract:
Yellowfin sole, Pleuronectes asper, is the second most abundant flatfish in the North Pacific Ocean and is most highly concentrated in the eastern Bering Sea. It has been a target species in the eastern Bering Sea since the mid-1950s, initially for foreign distant-water fisheries but more recently for U.S. fisheries. Annual commercial catches since 1959 have ranged from 42,000 to 554,000 metric tons (t). Yellowfin sole is a relatively small flatfish, averaging about 26 cm in length and 200 g in weight in commercial catches. It is distributed from nearshore waters to depths of about 100 m in the eastern Bering Sea in summer, but moves to deeper water in winter to escape sea ice. Yellowfin sole is a benthopelagic feeder. It is a long-lived species (>20 years) with a correspondingly low natural mortality rate, estimated at 0.12. After being overexploited during the early years of the fishery and suffering a substantial decline in stock abundance, the resource has recovered and is currently in excellent condition. The biomass during the 1980s may have been as high as, if not higher than, that at the beginning of the fishery. Based on results of demersal trawl surveys and two age-structured models, the current exploitable biomass has been estimated to range between 1.9 and 2.6 million t. Appropriate harvest strategies were investigated under a range of possible recruitment levels. The recommended harvest level was calculated by multiplying the yield derived from the F0.1 harvest level (161 g at F = 0.14) by an average recruitment value, resulting in a commercial harvest of 276,900 t, or about 14% of the estimated exploitable biomass.
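The closing arithmetic can be checked directly: yield per recruit times recruitment gives the harvest tonnage, and dividing by biomass gives the harvest fraction. In this sketch the recruitment value is a hypothetical figure back-calculated from the reported 276,900 t, and the biomass is an assumed point within the reported 1.9-2.6 million t range.

```python
# Back-of-envelope check of the recommended harvest calculation.
# yield_per_recruit_g is from the text; recruitment is back-calculated
# (hypothetical) and biomass_t is an assumption within the reported range.
yield_per_recruit_g = 161.0   # g per recruit at F = 0.14 (from the text)
recruitment = 1.72e9          # recruits (hypothetical, back-calculated)
biomass_t = 2.0e6             # assumed exploitable biomass, metric tons

harvest_t = yield_per_recruit_g * recruitment / 1e6   # grams -> metric tons
fraction = harvest_t / biomass_t
print(f"harvest = {harvest_t:,.0f} t ({fraction:.0%} of biomass)")
```

This reproduces a harvest near 276,900 t at roughly 14% of the exploitable biomass, consistent with the figures quoted above.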
Abstract:
The Rutherford backscattering/channeling technique was used to analyze damage recovery at different annealing temperatures and to determine the lattice location of Er in the Er-implanted GaN samples. Better damage recovery was observed with increasing annealing temperature below 1000 °C, but complete recovery of the implantation damage could not be achieved. For a sample annealed at 900 °C for 30 min, Er and Ga angular scans across the <0001> axis were measured, indicating that about 76% of the Er ions occupy substitutional sites. Moreover, the photoluminescence (PL) properties of Er-implanted GaN thin films were also studied. The experimental results indicate that samples annealed at higher temperatures below 1000 °C had a stronger 1539 nm PL intensity. The thermal quenching of the PL intensity for samples annealed at 900 °C, measured at temperatures from 15 K to 300 K, is 30%.
Abstract:
Grattan, J.P., Al-Saad, Z., Gilbertson, D.D., Karaki, L.O., and Pyatt, F.B. 2005. Analyses of patterns of copper and lead mineralisation in human skeletons excavated from an ancient mining and smelting centre in the Jordanian desert. Mineralogical Magazine 69(5): 653-666.
Abstract:
http://www.archive.org/details/theislandempire00robiuoft
Abstract:
The use of games technology in education is not a new phenomenon. Even back in the days of 286 processors, PCs were used in some schools along with (what now looks like) primitive simulation software to teach a range of different skills and techniques – from basic programming using Logo (the turtle-style car with a pen at the back that could be used to draw on the floor – always a good way of attracting the attention of school kids!) up to quite sophisticated replications of physical problems, such as working out the trajectory of a missile to blow up an enemy's tank. So why are games not more widely used in education (especially in FE and HE)? Can they help to support learners even at this advanced stage in their education? In this article we aim to provide an overview of the use of game technologies in education (almost a small literature review for interested parties) and then go into more depth on one particular example we aim to introduce from the coming academic year (September 2006) to help with teaching and assessment of one area of our Multimedia curriculum. Of course, we will not yet be able to provide the reader with full data on how successful this is, but we will be running a blog (http://themoviesineducation.blogspot.com/) to keep interested parties up to date with the progress of the project and, we hope, to help others set up similar solutions themselves. We will also consider only a small element of the implementation here and cover how such assessment processes could be used in a broader context. The use of a game to aid learning and improve achievement is suggested because traditional methods of engagement are currently failing on some levels: various parts of the production process we normally cover in our Multimedia degree are becoming difficult to monitor and continually assess.
Abstract:
In 1957, 12 years after the end of World War II, the Ministry of Education issued Circular 323 to promote the development of an element of ‘liberal studies’ in courses offered by technical and further education (FE) colleges in England. This was perceived to be in some ways a peculiar or uncharacteristic development. However, it lasted over 20 years, during which time most students on courses in FE colleges participated in what were termed General or Liberal Studies classes that complemented and/or contrasted with the technical content of their vocational programmes. By the end of the 1970s, these classes had changed in character, moving away from the concept of a ‘liberal education’ towards a prescribed diet of ‘communication studies’. The steady decline in apprenticeship numbers from the late 1960s onwards accelerated in the late 1970s, bringing into colleges a new type of student (the state-funded ‘trainee’) whose curriculum would be prescribed by the Manpower Services Commission. This paper examines the Ministry’s thinking and charts the rise and fall of a curriculum phenomenon that became immortalised in the ‘Wilt’ novels of Tom Sharpe. The paper argues that the Ministry of Education’s concerns half a century ago are still relevant now, particularly as fresh calls are being made to raise the leaving age for compulsory education to 18, and in light of attempts in England to develop new vocational diplomas for full-time students in schools and colleges.
Abstract:
Vestimentiferan tube worms living at deep-sea hydrothermal vents and cold seeps have been considered a clade with a long and continuing evolutionary history in these ecosystems. Whereas the fossil record appears to support this view, molecular age estimates do not. The two main features used to identify vestimentiferan tubes in the fossil record are longitudinal ridges on the tube's surface and a tube wall constructed of multiple layers. It is shown here that chaetopterid tubes from modern vents and seeps—as well as a number of fossil tubes from shallow-water environments—also show these two features. This calls for a more cautious interpretation of tubular fossils from ancient vent and seep deposits. We suggest that current estimates of a relatively young evolutionary age based on molecular clock methods may be more reliable than inferences of ancient “vestimentiferans” based on putative fossils of these worms; that not all of these putative fossils actually belong to this group; and that tubes from fossil seeps should be investigated for chitinous remains to substantiate claims of their potential siboglinid affinities.
Abstract:
This paper explores the school experiences of seven 11- to 14-year-old disabled children, and focuses on their agency as they negotiated a complex, changing, and often challenging social world at school where “difference” was experienced in negative ways. The paper draws on ethnographic data from a wider three-year study that explores the influence of school experiences on both disabled and non-disabled children’s identity as they make the transition from primary to secondary school in regular New Zealand schools (although the focus of the present paper is only on the experiences of disabled children). The wider study considers how Maori (indigenous people of Aotearoa/New Zealand) and Pakeha (New Zealanders of NZ European descent) disabled children and their non-disabled matched peers (matched for age, gender and classroom) understand their personal identity, and how factors relating to transition (from primary to secondary school); culture; impairment (in the case of disabled children); social relationships; and school experience impact on children’s identities. Data on Maori children’s school experiences are currently being collected and are not yet available for inclusion in this paper. On the basis of our observations in schools we illustrate how disabled children felt and were made to feel different through an array of structural barriers, such as separate provision for disabled students, and peer and teacher attitudes to diversity. However, we agree with Davis, Watson, Shakespeare and Corker’s (2003) interpretation that disabled children’s rights and participation at school are also under attack from a “deeper cultural division” (p. 205) in schools based on discourses of difference and normality. While disabled students in our study were trying to actively construct and shape their social and educational worlds, our data also show that teachers and peers have the capacity to either support or supplant these attempts to be part of the group of “all children”.
We suggest that finding solutions that support disabled children’s full inclusion and participation at school requires a multi-faceted and systemic approach focused on a pedagogy for diverse learners, and on a consistent and explicitly inclusive policy framework centred on children’s rights.
Abstract:
Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmbH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi. It detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility based on widespread evidence of deficiencies in study design and reporting, with a high risk of bias.
Objective: Determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, against standard microbiological culture.
Design: Prospective multicentre Phase III clinical diagnostic accuracy study using the standards for the reporting of diagnostic accuracy studies criteria.
Setting: Critical care departments within NHS hospitals in the north-west of England.
Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation.
Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard.
Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy.
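The reported post-test probabilities follow directly from Bayes' rule applied to the abstract's own point estimates (sensitivity 50%, specificity 85.8%, prevalence 9.2%). A minimal sketch reproducing them:

```python
# Post-test probability of bloodstream infection given a SeptiFast result,
# computed from the point estimates reported in the Results section.
# This is a sketch of the arithmetic only; the published confidence
# intervals come from the full patient-level dataset.
sens, spec, prev = 0.50, 0.858, 0.092

# P(infection | positive test) and P(infection | negative test) via Bayes' rule
p_pos = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
p_neg = (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))
print(f"post-test probability: positive = {p_pos:.1%}, negative = {p_neg:.1%}")
```

This recovers the 26.3% and 5.6% figures quoted above, illustrating how the low prevalence, rather than the test characteristics alone, caps the positive post-test probability.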
Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improving the diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction and the capacity to detect a much broader range of pathogens and drug resistance genes, and the application of new statistical approaches able to assess test performance more reliably in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
Abstract:
The kinetics of hydrodeoxygenation of waste cooking oil (WCO) is investigated with unsupported CoMoS catalysts. A kinetic model is established and a comprehensive analysis of each reaction pathway is carried out. The results show that hydrodecarbonylation/decarboxylation (HDC) routes are the predominant pathways in the elimination of oxygen, with a rate constant three times as high as that of hydrodeoxygenation (HDO). However, the HDC activity of the CoMoS catalyst deactivates due to gradual loss of sulfur from the catalyst, whereas the HDO process is insensitive to the sulfur deficiency. The kinetic modeling shows that direct hydrodecarbonylation of fatty acids dominates the HDC routes, and that in the HDO route fatty acids are converted to aldehydes/alcohols and then to C18 hydrocarbons, the final product, with the reduction of the acids as the rate-limiting step. The HDO route via alcohols dominates over that via aldehydes because of a significantly higher reaction rate constant. The difference in the C18/C17 ratio between unsupported and supported catalysts shows that a support with Lewis acid sites may play an important role in the selectivity between the hydrodeoxygenation pathways and in promoting final product quality.
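The competing pathways above can be sketched as a pair of first-order branches: acid -> C17 (HDC) versus acid -> aldehyde/alcohol -> C18 (HDO), with the HDC rate constant three times the HDO one as reported. The rate constants below are illustrative placeholders, not the fitted values from the study.

```python
# First-order sketch of the HDC vs. HDO branching described above.
# k values are hypothetical; only the 3:1 HDC:HDO ratio is from the text.
k_hdo = 1.0          # acid -> aldehyde/alcohol (rate-limiting HDO step)
k_hdc = 3.0 * k_hdo  # acid -> C17 via decarbonylation/decarboxylation
k_int = 10.0         # aldehyde/alcohol -> C18 (fast, hypothetical)

acid, interm, c17, c18 = 1.0, 0.0, 0.0, 0.0
dt = 1e-4
for _ in range(100000):          # forward-Euler integration to t = 10
    r_hdc = k_hdc * acid         # rate of the HDC branch
    r_hdo = k_hdo * acid         # rate of the first HDO step
    r_int = k_int * interm       # rate of intermediate -> C18
    acid += -(r_hdc + r_hdo) * dt
    interm += (r_hdo - r_int) * dt
    c17 += r_hdc * dt
    c18 += r_int * dt

print(f"C17 = {c17:.2f}, C18 = {c18:.2f}")
```

With the 3:1 rate-constant ratio the acid splits roughly 75% into C17 and 25% into C18 at full conversion, matching the paper's observation that the HDC routes dominate oxygen removal (until sulfur loss suppresses them).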