861 results for multiple data sources
Abstract:
This paper explores issues of teaching and learning Chinese as a heritage language in a Chinese heritage language school, the Zhonguo Saturday School, in Montreal, Quebec. With a student population of more than 1000, this school is the largest of the eight Chinese heritage language schools in Montreal. Students participating in this study came from seven different classes (kindergarten, grades two through six, and a special class), with ages ranging from 4 to 13 years. The study took place over a period of two years, between 2000 and 2002. Focusing on primary-level classroom discourse and drawing on the works of Vygotsky and Bakhtin, I examine how teachers and students use language to communicate, and how their communication mediates teaching, learning and heritage language acquisition. Data sources include classroom observations, interviews with students and their teachers, students' writings, and video and audio recordings of classroom activities. Implications for heritage language development and maintenance are discussed with reference to the findings of this study.
Abstract:
COCO-2 is a model for assessing the potential economic costs likely to arise off-site following an accident at a nuclear reactor. COCO-2 builds on work presented in the model COCO-1 developed in 1991 by considering economic effects in more detail, and by including more sources of loss. Of particular note are: the consideration of the directly affected local economy, indirect losses that stem from the directly affected businesses, losses due to changes in tourism consumption, integration with the large body of work on recovery after an accident and a more systematic approach to health costs. The work, where possible, is based on official data sources for reasons of traceability, maintenance and ease of future development. This report describes the methodology and discusses the results of an example calculation. Guidance on how the base economic data can be updated in the future is also provided.
Abstract:
The Arctic is an important region in the study of climate change, but monitoring surface temperatures in this region is challenging, particularly in areas covered by sea ice. Here, in situ, satellite and reanalysis data were utilised to investigate whether global warming over recent decades could be better estimated by changing the way the Arctic is treated in calculating global mean temperature. The degree of difference arising from five different techniques, based on existing temperature anomaly dataset techniques, for estimating Arctic surface air temperature (SAT) anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques, and kriging techniques provided the smallest errors in anomaly estimates. Similar accuracies were found for anomalies estimated from in situ meteorological station SAT records using a kriging technique. Whether additional data sources, which are not currently utilised in temperature anomaly datasets, would improve estimates of Arctic SAT anomalies was investigated within the reanalysis testbed and using in situ data. For the reanalysis study, the additional input anomalies were reanalysis data sampled at certain supplementary data source locations over Arctic land and sea ice areas. For the in situ data study, the additional input anomalies over sea ice were surface temperature anomalies derived from the Advanced Very High Resolution Radiometer satellite instruments. The use of additional data sources, particularly those located in the Arctic Ocean over sea ice or on islands in sparsely observed regions, can lead to substantial improvements in the accuracy of estimated anomalies. Decreases in root mean square error can be up to 0.2 K for Arctic-average anomalies and more than 1 K for spatially resolved anomalies. Further improvements in accuracy may be achieved through the use of other data sources.
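The kriging approach this abstract credits with the smallest errors can be sketched as follows. This is a minimal ordinary-kriging estimator with an assumed exponential covariance model; the covariance parameters and coordinates are illustrative, not the variogram actually fitted in the study.

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_target, range_km=1500.0, sill=1.0):
    """Estimate an anomaly at xy_target (km coordinates) from scattered
    observations by ordinary kriging with an exponential covariance model.
    range_km and sill are illustrative placeholders."""
    def cov(d):
        return sill * np.exp(-d / range_km)

    n = len(z_obs)
    # Pairwise distances between observation sites.
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Ordinary kriging system: covariances plus the unbiasedness constraint.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.linalg.norm(xy_obs - xy_target, axis=-1))
    w = np.linalg.solve(K, rhs)[:n]  # weights; the constraint makes them sum to 1
    return float(w @ z_obs)
```

Because the weights sum to one, a field that is constant across the observations is reproduced exactly at the target, which is the unbiasedness property that makes kriging attractive for anomaly interpolation.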
Abstract:
This paper focuses on the language shift phenomenon in Singapore as a consequence of top-down policies. By looking at bilingual family language policies, it examines the characteristics of Singapore's multilingual nature and cultural diversity. Specifically, it looks at what languages are practiced and how family language policies (FLP) are enacted in Singaporean English-Chinese bilingual families, and to what extent macro language policies, i.e. national and educational language policies, influence and interact with family language policies. Involving 545 families and including parents and grandparents as participants, the study traces the trajectory of the policy history. Data sources comprise two parts: (1) a linguistic practices survey; and (2) participant observation of the actual negotiation of FLP in face-to-face social interaction in bilingual English-Chinese families. The data provide valuable information on how family language policy is enacted and language practices are negotiated, and on which linguistic practices have been changed or abandoned against the background of the Speak Mandarin Campaign and the current bilingual policy implemented in the 1970s. Importantly, the detailed face-to-face interactions and linguistic practices enhance our understanding of the subtleties and processes of language (dis)continuity in relation to policy interventions. The study also discusses the reality of language management measures in contrast to the government's 'separate bilingualism' (Creese & Blackledge, 2011) expectations with regard to 'striking a balance' between Asian and Western culture (Curdt-Christiansen & Silver, 2013; Shepherd, 2005) and between English and mother tongue languages (Curdt-Christiansen, 2014).
Demonstrating how parents and children negotiate their family language policy through translanguaging or heteroglossia practices (Canagarajah, 2013; Garcia & Li Wei, 2014), this paper argues that ‘striking a balance’ as a political ideology places emphasis on discrete and separate notions of cultural and linguistic categorization and thus downplays the significant influences from historical, political and sociolinguistic contexts in which people find themselves. This simplistic view of culture and linguistic code will inevitably constrain individuals’ language expression as it regards code switching and translanguaging as delimited and incompetent language behaviour.
Abstract:
We describe the creation of a data set describing changes related to the presence of ice sheets, including ice-sheet extent and height, ice-shelf extent, and the distribution and elevation of ice-free land at the Last Glacial Maximum (LGM), which were used in LGM experiments conducted as part of the fifth phase of the Coupled Modelling Intercomparison Project (CMIP5) and the third phase of the Palaeoclimate Modelling Intercomparison Project (PMIP3). The CMIP5/PMIP3 data sets were created from reconstructions made by three different groups, which were all obtained using a model-inversion approach but differ in the assumptions used in the modelling and in the type of data used as constraints. The ice-sheet extent in the Northern Hemisphere (NH) does not vary substantially between the three individual data sources. The difference in the topography of the NH ice sheets is also moderate, and smaller than the differences between these reconstructions (and the resultant composite reconstruction) and ice-sheet reconstructions used in previous generations of PMIP. Only two of the individual reconstructions provide information for Antarctica. The discrepancy between these two reconstructions is larger than the difference for the NH ice sheets, although still less than the difference between the composite reconstruction and previous PMIP ice-sheet reconstructions. Although largely confined to the ice-covered regions, differences between the climate response to the individual LGM reconstructions extend over the North Atlantic Ocean and Northern Hemisphere continents, partly through atmospheric stationary waves. Differences between the climate response to the CMIP5/PMIP3 composite and any individual ice-sheet reconstruction are smaller than those between the CMIP5/PMIP3 composite and the ice sheet used in the last phase of PMIP (PMIP2).
Abstract:
Background Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries. Methods We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m2 [underweight], 18·5 kg/m2 to <20 kg/m2, 20 kg/m2 to <25 kg/m2, 25 kg/m2 to <30 kg/m2, 30 kg/m2 to <35 kg/m2, 35 kg/m2 to <40 kg/m2, ≥40 kg/m2 [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue. Findings We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m2 (95% credible interval 21·3–22·1) in 1975 to 24·2 kg/m2 (24·0–24·4) in 2014 in men, and from 22·1 kg/m2 (21·7–22·5) in 1975 to 24·4 kg/m2 (24·2–24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m2 in central Africa and south Asia to 29·2 kg/m2 (28·6–29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m2 (21·4–22·3) in south Asia to 32·2 kg/m2 (31·5–32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5–17·4) to 8·8% (7·4–10·3) in men and from 14·6% (11·6–17·9) to 9·7% (8·3–11·1) in women. South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8–29·2) in men and 24·0% (18·9–29·3) in women. 
Age-standardised prevalence of obesity increased from 3·2% (2·4–4·1) in 1975 to 10·8% (9·7–12·0) in 2014 in men, and from 6·4% (5·1–7·8) to 14·9% (13·6–16·1) in women. 2·3% (2·0–2·7) of the world's men and 5·0% (4·4–5·6) of women were severely obese (ie, have BMI ≥35 kg/m2). Globally, prevalence of morbid obesity was 0·64% (0·46–0·86) in men and 1·6% (1·3–1·9) in women. Interpretation If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025, global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia.
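The BMI cut-points this abstract analyses can be expressed as a small categorisation helper. The boundaries are exactly those listed above; the parenthetical labels for the upper categories follow the abstract's wording (obesity from 30, severe obesity from 35, morbid obesity from 40).

```python
def bmi_category(bmi_kg_m2: float) -> str:
    """Map a measured BMI (kg/m2) to the categories used in the study."""
    if bmi_kg_m2 < 18.5:
        return "underweight (<18.5)"
    if bmi_kg_m2 < 20.0:
        return "18.5 to <20"
    if bmi_kg_m2 < 25.0:
        return "20 to <25"
    if bmi_kg_m2 < 30.0:
        return "25 to <30"
    if bmi_kg_m2 < 35.0:
        return "30 to <35 (obese)"
    if bmi_kg_m2 < 40.0:
        return "35 to <40 (severely obese)"
    return "morbid obesity (>=40)"
```

Each interval is half-open (lower bound included, upper bound excluded), matching the `<` notation in the abstract, so every BMI value falls into exactly one category.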
Abstract:
The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs "radio-hybrid" measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated into the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.
Abstract:
The Internet of Things is an umbrella term for the development whereby various types of devices can be equipped with sensors and data chips connected to the internet. A growing volume of data creates a growing demand for solutions that can store, track, analyse and process data. One way to meet this demand is to use cloud-based real-time analytics services. Multi-tenant and single-tenant are two architecture types for cloud-based real-time analytics services that can be used to handle the increased data volumes. These architectures differ in development complexity. In this work, Azure Stream Analytics represents a multi-tenant architecture and HDInsight/Storm represents a single-tenant architecture. To compare cloud-based real-time analytics services with different architectures, we used the usability criteria efficiency, effectiveness and user satisfaction. We sought answers to the following questions, related to the three usability criteria above: • What similarities and differences can we see in development times? • Can we identify differences in functionality? • How do developers experience the two analytics services? We used a design-and-creation strategy to develop two proof-of-concept prototypes and collected data using several data collection methods. The proof-of-concept prototypes comprised two artefacts, one for Azure Stream Analytics and one for HDInsight/Storm. We evaluated them by carrying out five scenarios, each with two to five subgoals. We simulated streaming data by letting an application continuously generate random data, which we analysed with the two real-time analytics services. We used observations to document how we worked during the development of the analytics services, to measure development times, and to identify differences in functionality. We also used questionnaires to find out what users thought of the analytics services. We concluded that Azure Stream Analytics was initially more usable than HDInsight/Storm, but that the differences diminished over time. Azure Stream Analytics was easier to work with for simpler analyses, while HDInsight/Storm offered a broader range of functionality.
Abstract:
Background: To facilitate collaborative design, system dynamics (SD) with a group modelling approach was used in the early stages of planning a new stroke unit. Over six workshops an SD model was created by a multiprofessional group. Aim: To explore to what extent, and how, the use of system dynamics contributed to the collaborative design process. Method: A case study was conducted using several data sources. Results: SD supported collaborative design by facilitating an explicit description of the stroke care process, a dialogue and a joint understanding. Constructing the model obliged the group to conceptualise stroke care, and experimenting with the model gave the opportunity to reflect on care. Conclusion: SD facilitated the collaborative design process and should be integrated into the early stages of the design process as a quality improvement tool.
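The kind of stock-and-flow model that SD group modelling produces can be illustrated with a deliberately minimal sketch: a single stock (occupied beds in a unit) driven by a constant admission inflow and a proportional discharge outflow, stepped with Euler integration. The structure, names and numbers here are generic illustrations of the SD technique, not the study's actual model.

```python
def simulate_stock(inflow_per_day, discharge_fraction, days, stock0=0.0):
    """Euler-integrate one stock: d(stock)/dt = inflow - fraction * stock,
    with a time step of one day. Returns the stock trajectory."""
    stock = stock0
    history = [stock]
    for _ in range(days):
        outflow = discharge_fraction * stock  # proportional discharge
        stock += inflow_per_day - outflow     # dt = 1 day
        history.append(stock)
    return history
```

Experimenting with such a model is what supports the reflective dialogue the abstract describes: for example, the stock settles at inflow/fraction (10 admissions/day with a 20% daily discharge fraction converges to 50 occupied beds), and participants can see how changing either parameter moves that equilibrium.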
Abstract:
There is a lack of research on the everyday lives of older people in developing countries. This exploratory study used structured observation and content analysis to examine the presence of older people in public fora, and considered the methods' potential for understanding older people's social integration and inclusion. Structured observation was carried out in public social spaces in six cities, each located in a different developing country, and in one city in the United Kingdom, together with content analysis of the presence of people in newspaper pictures and on television in the selected countries. Results indicated that, across all fieldwork sites and data sources, the presence of older people was low, with women considerably less present than men in developing countries. Older people's presence varied across fieldwork sites by place and time of day, and in their accompanied status. The presence of older people in newspaper images was associated with the news/non-news nature of the source. The utility of the study's methodological approach is considered, as is the degree to which the presence of older people in public fora might relate to social integration and inclusion in different cultural contexts.
Abstract:
Hydrological loss is a vital component of many hydrological models, which are used in forecasting floods and evaluating water resources for both surface and subsurface flows. Due to the complex and random nature of the rainfall-runoff process, hydrological losses are not yet fully understood. Consequently, practitioners often use representative loss values in design applications such as rainfall-runoff modelling, which has led to inaccurate quantification of water quantities in the resulting applications. Existing hydrological loss models must therefore be revisited, and modellers should be encouraged to utilise other available data sets. This study is based on three unregulated catchments situated in the Mount Lofty Ranges of South Australia (SA). The paper focuses on conceptual models relating initial loss (IL), continuing loss (CL) and proportional loss (PL) to rainfall characteristics (total rainfall (TR) and storm duration (D)) and antecedent wetness (AW) conditions. It introduces two methods that can be implemented to estimate IL as a function of TR, D and AW. The IL distribution patterns and parameters for the study catchments are determined using multivariate analysis and descriptive statistics. The possibility of generalising the methods, and its limitations, are also discussed. This study will yield improvements to existing loss models and will encourage practitioners to utilise multiple data sets to estimate losses, instead of using hypothetical or representative values to generalise real situations.
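The IL/CL loss model named above is a standard construct in rainfall-runoff practice: rainfall is absorbed until the initial-loss store is satisfied, after which a constant continuing-loss rate is subtracted each time step. A minimal sketch of that standard mechanism (not the study's calibrated estimators for IL) follows; rainfall depths and loss parameters are in mm per step and are illustrative.

```python
def ilcl_excess(rain_mm, il_mm, cl_mm_per_step):
    """Apply the initial loss / continuing loss (IL/CL) model to a
    rainfall hyetograph (mm per time step) and return rainfall excess
    (the depth available for runoff) per step."""
    remaining_il = il_mm
    excess = []
    for r in rain_mm:
        absorbed = min(r, remaining_il)  # fill the initial-loss store first
        remaining_il -= absorbed
        r -= absorbed
        # Continuing loss applies to whatever rainfall remains after IL.
        excess.append(max(0.0, r - cl_mm_per_step))
    return excess
```

For example, with IL = 10 mm and CL = 2 mm/step, a hyetograph of [5, 10, 10] mm yields no excess in the first step (all rainfall fills the IL store), 3 mm in the second, and 8 mm in the third.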
Abstract:
This study investigated the process of narrative composition by the trainee-therapist/patient dyad in a psychoanalytic psychotherapy setting, within the context of a supervised practicum in Clinical Psychology. Two Psychology undergraduates who carried out their practicum at a municipal shelter took part in the research. The clinical work developed by the trainees was accompanied by academic supervision, for which the researcher was responsible at the time. Three girls aged six, nine and ten, temporarily sheltered at the institution and receiving psychotherapy from the trainees, also took part in the research. Sessions were held individually, once a week, at the institution itself. The trainees reported each preliminary interview with the children in the written form of a dialogued interview, whose purpose is to record the course of the interview from memory. These records, together with the reflections on the practicum produced in the space of academic supervision, formed the data sources. To meet the aims of the research, three studies were carried out, and in each of them three cases, consisting of different therapeutic dyads, were analysed. The results of the three studies show, first, that the discourse produced by the therapeutic dyads in each preliminary interview, taken in isolation, is structured as narrative, because it exhibits the two principles of narrative, succession and transformation, as proposed by Tzvetan Todorov. Joint analysis of these interviews indicates, however, that the narratives constituted in this process cannot be reduced to a logic of linear succession as formulated by that author. The narrative sequence is governed by a logic of semantic causality, which is polyphonic in nature, as proposed by Paul Ricoeur. The trainees' interventions in the form of constructions, in the sense established by Freud, even when guided by the principle of free association, are mostly marked by the repetition of an already-known version of the patient's life history, usually the one concerning the reason for placement in the shelter. Thus these interventions, whose possible effect would be to let the patient deconstruct pre-given meanings, reconstruct new versions of the events of his or her life and thereby occupy the place of author of his or her own history, end up insisting on the trauma. This makes explicit one of the paradoxes of training in clinical listening: the trainee, in seeking to open up meanings for the patient by jointly constructing a possible version of the patient's history, often ends up closing down meaning, constructing a single version of the events narrated by the patient.
Abstract:
Due to widespread government intervention and import-substitution industrialization, there has been a general presumption that Latin America has been much less productive than the leading economies in recent decades. In this paper, however, we show that until the late seventies Latin America had high total factor productivity (TFP) levels relative to the US and other regions. It is only after the late seventies that we observe a fast decrease in relative TFP in Latin America. The results are robust to the use of different methodologies and data sources.
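Relative TFP of the kind compared above is conventionally computed as a ratio of Solow residuals from a Cobb-Douglas production function, Y = A · K^α · L^(1-α). A sketch follows; the capital share α = 1/3 is the conventional benchmark value, and the inputs are illustrative, not the paper's data.

```python
def relative_tfp(y, k, l, y_ref, k_ref, l_ref, alpha=1.0 / 3.0):
    """TFP of an economy (output y, capital k, labour l) relative to a
    reference economy, via the Solow residual A = Y / (K**alpha * L**(1-alpha)).
    alpha is the assumed capital share of income."""
    def tfp(Y, K, L):
        return Y / (K ** alpha * L ** (1.0 - alpha))

    return tfp(y, k, l) / tfp(y_ref, k_ref, l_ref)
```

Because the residual absorbs everything not explained by measured capital and labour, the resulting series is sensitive to the assumed α and to how K and L are measured, which is why the paper stresses robustness across methodologies and data sources.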
Abstract:
This study analyses the development of dynamic capabilities in a context of institutional turbulence, different from the conditions under which this theoretical perspective is usually studied. A historical, processual case study examines the emergence of dynamic capabilities in Brazilian banks through the development of banking technology between the 1960s and the 1990s. Building on strategy research that explains firms' competitive advantages through their resources, knowledge and dynamic capabilities, a framework is constructed and used to analyse numerous testimonies given to the book "Tecnologia bancária no Brasil: uma história de conquistas, uma visão de futuro" (FONSECA; MEIRELLES; DINIZ, 2010) and in interviews conducted for this study. The testimonies show that the banks invested heavily in technology from the 1964 financial reform onwards, which opened a sequence of periods each with its own institutional characteristics. As conditions changed in each period, the banks also changed their computerisation process. At first, projects were executed ad hoc, under the direct command of the banks' leaders. Over time, as the technology evolved, the technological infrastructure grew and institutional turbulence arose, the banks progressively developed partnerships with one another and with local suppliers, decentralised their technology areas, became more flexible, strengthened corporate governance and adopted a series of routines for managing information technology, which led to the gradual development of the microfoundations of dynamic capabilities across these periods. In the mid-1990s, institutional stabilisation and the opening of the economy to foreign competition placed the country in the conditions that the adopted theoretical perspective considers ideal for dynamic capabilities to be sources of competitive advantage. Brazilian banks proved prepared for this new phase, which is evidence that they had developed dynamic capabilities over the preceding decades, and part of that development can be attributed to the institutional turbulence they had faced.
Abstract:
Distance education has gained prominence in recent years owing to its capacity to expand higher education and to include a group of people who could not otherwise pursue a degree. The aim of this research is to identify the variables that influence the persistence of Business Administration students, comparing results for the face-to-face and distance modalities, taking the Universidade Estadual do Maranhão (Uema) as the case. The main theoretical reference of this study is Tinto's (1975) Student Integration Model, regarded as the first academically recognised proposal for determining the variables involved in student attrition and persistence in an undergraduate programme. One component of the model motivated the comparison between the two modalities in particular: social integration, arising from student and faculty interactions, which are generally little explored in empirical studies. The study was descriptive in character and qualitative in approach. Submitting the set of variables observed in the literature to expert-judge analysis resulted in three main dimensions and 60 components: (1) student profile; (2) being-in-network; and (3) conditions for persistence. These variables made up the categorical content-analysis framework, resulting from the application of multiple data collection instruments (two questionnaires and an in-depth interview), which were triangulated with secondary data from Enade (2009) itself. As a theoretical contribution, this thesis points to the possibility of integrating three philosophical views, cognitivism, constructivism and connectivism, which interrelate in the academic management of higher education programmes through the unified form of education envisaged by blended learning as a future trend for higher education. Further contributions concern the components: individual attributes, work and internship, family context, prior schooling, use of ICTs, faculty-student interaction, personal and institutional commitment, and academic integration.