20 results for computers and literacy

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

The attitudes of 328 British secondary school children towards computers were examined in a cross-sectional survey. Measures of both general attitudes towards computers and affective reactions towards working with computers were examined in relation to the sex of the subject, courses studied (computer related/non-computer related) and availability of a home computer. A differential pattern of results was observed. With respect to general attitudes towards computers, main effects were found for all three independent variables, indicating that attitudes grew more favourable as a function of being male, doing computer courses and having a home computer. In contrast, affective reactions to working with computers were primarily related to doing computer courses, such that those doing computer courses reported more positive and less negative reactions. The practical and theoretical implications of these results are discussed.

Relevance:

100.00%

Publisher:

Abstract:

It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines the relationships between FRD and these skills, and whether deficits in speech, language and phonological processing fully account for the increased risk of dyslexia in children with FRD. One hundred and fifty-three 4-6-year-old children, 44 of whom had FRD, completed a battery of speech, language, phonology and literacy tasks. Word reading and spelling were retested 6 months later, and text reading accuracy and reading comprehension were tested 3 years later. The children with FRD were at increased risk of developing difficulties in reading accuracy, but not reading comprehension. Four groups were compared: good and poor readers with and without FRD. In most cases good readers outperformed poor readers regardless of family history, but there was an effect of family history on naming and nonword repetition regardless of literacy outcome, suggesting a role for speech production skills as an endophenotype of dyslexia. Phonological processing predicted spelling, while language predicted text reading accuracy and comprehension. FRD was a significant additional predictor of reading and spelling after controlling for speech production, language and phonological processing, suggesting that children with FRD show additional difficulties in literacy that cannot be fully explained in terms of their language and phonological skills. © 2014 John Wiley & Sons Ltd.

Relevance:

90.00%

Publisher:

Abstract:

Debates about the nature of literacy and literacy practices have been conducted extensively in the last fifteen years or so. The fact that both previous and current British governments have effectively suppressed any real debate makes the publication of this book both timely and important. Here, Urszula Clark stresses the underlying ideological character of such debates and shows that they have deep historical roots. She also makes the point that issues regarding the relationship between language and identity, especially national identity, become sharply focused at times of crisis in that identity. By undertaking a comparison with other major English-speaking countries, most notably Australia, New Zealand and the USA, Clark shows how these times of crisis reverberate around the globe.

Relevance:

90.00%

Publisher:

Abstract:

This paper introduces a new technique in the investigation of limited-dependent variable models. It illustrates that variable precision rough set theory (VPRS), allied with the use of a modern method of classification, or discretisation of data, can outperform the more standard approaches that are employed in economics, such as a probit model. These approaches and certain inductive decision tree methods are compared (through a Monte Carlo simulation approach) in the analysis of the decisions reached by the UK Monopolies and Mergers Commission. We show that, particularly in small samples, the VPRS model can improve on more traditional models, both in-sample and, particularly, in out-of-sample prediction. A similar improvement in out-of-sample prediction over the decision tree methods is also shown.
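As a rough, hedged sketch of the probit baseline that VPRS is compared against, the following fits a probit by maximum likelihood on synthetic data (the data-generating process, sample size and single regressor are invented for illustration and bear no relation to the paper's Monopolies and Mergers dataset):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic limited-dependent (binary) data: y* = X @ beta + e, y = 1[y* > 0]
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    """Negative probit log-likelihood: P(y=1|x) = Phi(x @ beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, np.zeros(2), method="BFGS")
beta_hat = res.x  # should land near beta_true
```

In-sample and out-of-sample classification accuracy of a model like this is the benchmark against which the VPRS approach is evaluated.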

Relevance:

90.00%

Publisher:

Abstract:

Data Envelopment Analysis (DEA) is one of the most widely used methods in the measurement of the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs would require huge computer resources in terms of memory and CPU time. This paper proposes a neural network back-propagation Data Envelopment Analysis to address this problem for the very large scale datasets now emerging in practice. Neural network requirements for computer memory and CPU time are far less than those of conventional DEA methods, and neural networks can therefore be a useful tool in measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and the results are compared with those obtained by conventional DEA.
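For context, conventional DEA of the kind such a back-propagation network approximates solves one linear programme per DMU. Below is a minimal input-oriented, constant-returns (CCR) sketch with invented toy data (three DMUs, one input, one output); it illustrates the standard model, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: rows are DMUs (hypothetical numbers for illustration only)
X = np.array([[2.0], [4.0], [6.0]])  # inputs
Y = np.array([[2.0], [3.0], [6.0]])  # outputs
n, m = X.shape      # number of DMUs, number of inputs
s = Y.shape[1]      # number of outputs

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o: minimise theta subject to
    sum_j lam_j * x_j <= theta * x_o and sum_j lam_j * y_j >= y_o, lam >= 0."""
    c = np.zeros(1 + n)
    c[0] = 1.0  # decision vector z = [theta, lam_1..lam_n]; minimise theta
    A_ub, b_ub = [], []
    for i in range(m):  # input constraints
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):  # output constraints, flipped into <= form
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

effs = [ccr_efficiency(o) for o in range(n)]  # one LP per DMU
```

Solving one such LP per DMU is precisely what becomes expensive on very large datasets, and is the cost the neural-network approximation is designed to avoid.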

Relevance:

90.00%

Publisher:

Abstract:

This paper explores the use of the optimisation procedures in SAS/OR software with application to the measurement of efficiency and productivity of decision-making units (DMUs) using data envelopment analysis (DEA) techniques. DEA, originally introduced by Charnes et al. [Eur. J. Oper. Res. 2 (1978) 429], is a linear programming method for assessing the efficiency and productivity of DMUs. Over the last two decades, DEA has gained considerable attention as a managerial tool for measuring the performance of organisations, and it has been widely used for assessing the efficiency of public and private sectors such as banks, airlines, hospitals, universities and manufacturers. As a result, new applications with more variables and more complicated models are being introduced. Following the successive development of DEA, a non-parametric productivity measure, the Malmquist index, was introduced by Färe et al. [J. Prod. Anal. 3 (1992) 85]. Employing the Malmquist index, productivity growth can be decomposed into technical change and efficiency change. SAS, meanwhile, is powerful software capable of solving various optimisation problems, such as linear programming with all types of constraints. To facilitate the use of DEA and the Malmquist index by SAS users, a SAS/MALM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear-programming models based on the selected DEA model. An example is given to illustrate how one could use the code to measure the efficiency and productivity of organisations.
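The decomposition referred to (productivity growth into efficiency change and technical change) has the standard Färe et al. form, with D^t(x, y) the distance function relative to the period-t technology:

```latex
M\!\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)
  = \underbrace{\frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
                     {D^{t}\!\left(x^{t},y^{t}\right)}}_{\text{efficiency change}}
    \;
    \underbrace{\left[
      \frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
      \cdot
      \frac{D^{t}\!\left(x^{t},y^{t}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}
    \right]^{1/2}}_{\text{technical change}}
```

A value of M greater than one indicates productivity growth between periods t and t+1; each distance function is evaluated by solving a DEA-style linear programme.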

Relevance:

90.00%

Publisher:

Abstract:

Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts, such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point processes methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are 2nd-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
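As one hedged reading of the proposal, the sketch below forms the order-2 multinomial distribution of mark co-occurrences over nearest-neighbour pairs of a simulated marked point pattern and takes its Shannon entropy; the paper's actual estimator and choice of neighbour structure may well differ:

```python
import numpy as np
from collections import Counter

# Simulated marked point pattern (locations and categorical marks are invented)
rng = np.random.default_rng(1)
pts = rng.uniform(size=(50, 2))
marks = rng.choice(["A", "B"], size=50)

# Pair each event's mark with its nearest neighbour's mark (order-2 co-occurrence)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
nn = d.argmin(axis=1)
pairs = Counter(tuple(sorted((marks[i], marks[nn[i]]))) for i in range(len(marks)))

# Shannon entropy of the multinomial co-occurrence distribution
p = np.array(list(pairs.values()), dtype=float)
p /= p.sum()
entropy = -np.sum(p * np.log(p))
```

With two mark types there are three unordered pair types, so the entropy lies in [0, log 3]; low values signal strongly structured co-occurrence, high values a near-uniform mixing of marks.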


Relevance:

90.00%

Publisher:

Abstract:

This study aimed firstly to investigate current patterns of language use amongst young bilinguals in Birmingham and secondly to examine the relationship between this language use and educational achievement. The research then focussed on various practices, customs and attitudes which would favour the attrition or survival of minority languages in the British situation. The data necessary to address this question was provided by a sample of three hundred and seventy-four 16-19 year olds, studying in Birmingham schools and colleges during the period 1987-1990 and drawn from the main linguistic minority communities in Birmingham. The research methods chosen were both quantitative and qualitative. The study found evidence of ethnolinguistic vitality amongst many of the linguistic minority communities in Birmingham: a number of practices and a range of attitudes indicate that linguistic diversity may continue and that a stable diglossic situation may develop in some instances, particularly where demographical and religious factors lead to closeness of association. Where language attrition is occurring it is often because of the move from a less prestigious minority language or dialect to a more prestigious minority language in addition to pressures from English. The educational experience of the sample indicates that literacy and formal language study are of key importance if personal bilingualism is to be experienced as an asset; high levels of oral proficiency in the L1 and L2 do not, on their own, necessarily correlate with positive educational benefit. The intervening variable associated with educational achievement appears to be the formal language learning process and literacy. A number of attitudes and practices, including the very close associations maintained with some of the countries of origin of the families, were seen to aid or hinder first language maintenance and second language acquisition.

Relevance:

90.00%

Publisher:

Abstract:

The present study describes a pragmatic approach to the implementation of production planning and scheduling techniques in foundries of all types and looks at the use of 'state-of-the-art' management control and information systems. Following a review of systems for the classification of manufacturing companies, a definitive statement is made which highlights the important differences between foundries (i.e. 'component makers') and other manufacturing companies (i.e. 'component buyers'). An investigation of the manual procedures which are used to plan and control the manufacture of components reveals the inherent problems facing foundry production management staff, which suggests the unsuitability of many manufacturing techniques which have been applied to general engineering companies. From the literature it was discovered that computer-assisted systems are required which are primarily 'information-based' rather than 'decision-based', whilst the availability of low-cost computers and 'packaged software' has enabled foundries to 'get their feet wet' without the financial penalties which characterized many of the early attempts at computer assistance (i.e. pre-1980). Moreover, no evidence of a single methodology for foundry scheduling emerged from the review. A philosophy for the development of a CAPM system is presented, which details the essential information requirements and puts forward proposals for the subsequent interactions between types of information and the sub-systems of CAPM which they support. The work developed was oriented specifically at the functions of production planning and scheduling and introduces the concept of 'manual interaction' for effective scheduling. The techniques developed were designed to use the information which is readily available in foundries and were found to be practically successful following the implementation of the techniques into a wide variety of foundries. The limitations of the techniques developed are subsequently discussed within the wider issues which form a CAPM system, prior to a presentation of the conclusions which can be drawn from the study.

Relevance:

90.00%

Publisher:

Abstract:

INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard) with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
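To illustrate the kind of prediction-plus-error-distribution such a service returns, here is a bare-bones kriging sketch with an assumed exponential covariance and unit sill; the locations, values and covariance parameters are all invented, and INTAMAP's R back-end selects and fits its interpolation models automatically rather than fixing them like this:

```python
import numpy as np

# Invented observations at scattered 2-D locations
rng = np.random.default_rng(2)
obs_xy = rng.uniform(size=(10, 2))
obs_z = np.sin(3.0 * obs_xy[:, 0]) + 0.1 * rng.normal(size=10)

def cov(a, b, sill=1.0, corr_len=0.5):
    """Assumed exponential covariance between two sets of locations."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return sill * np.exp(-d / corr_len)

C = cov(obs_xy, obs_xy) + 1e-6 * np.eye(len(obs_xy))  # jitter for stability

def krige(target):
    """Prediction and error variance at a target location
    (simple kriging about the data mean)."""
    c0 = cov(target[None, :], obs_xy)[0]
    w = np.linalg.solve(C, c0)
    mu = obs_z.mean()
    mean = mu + w @ (obs_z - mu)
    var = max(1.0 - w @ c0, 0.0)  # sill minus explained variance
    return float(mean), float(var)

m1, v1 = krige(np.array([0.5, 0.5]))  # unobserved location: variance > 0
m2, v2 = krige(obs_xy[0])             # observed location: variance ~ 0
```

The (mean, variance) pair is exactly the sort of Gaussian error description that UncertML is designed to encode in the service's responses.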

Relevance:

90.00%

Publisher:

Abstract:

Recent advances in technology have produced a significant increase in the availability of free sensor data over the Internet. With affordable weather monitoring stations now available to individual meteorology enthusiasts a reservoir of real time data such as temperature, rainfall and wind speed can now be obtained for most of the United States and Europe. Despite the abundance of available data, obtaining useable information about the weather in your local neighbourhood requires complex processing that poses several challenges. This paper discusses a collection of technologies and applications that harvest, refine and process this data, culminating in information that has been tailored toward the user. In this case we are particularly interested in allowing a user to make direct queries about the weather at any location, even when this is not directly instrumented, using interpolation methods. We also consider how the uncertainty that the interpolation introduces can then be communicated to the user of the system, using UncertML, a developing standard for uncertainty representation.
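A minimal sketch of the "query the weather at any location" idea using inverse-distance weighting; the station coordinates and temperatures are invented, and the system described uses more principled interpolation methods together with UncertML to convey the resulting uncertainty:

```python
import math

# Hypothetical nearby stations: (lon, lat, temperature in degrees C)
stations = [(-1.90, 52.48, 11.2), (-1.85, 52.50, 10.8), (-1.95, 52.45, 11.5)]

def idw(lon, lat, power=2.0):
    """Inverse-distance-weighted estimate at an uninstrumented location."""
    num = den = 0.0
    for s_lon, s_lat, temp in stations:
        d = math.hypot(lon - s_lon, lat - s_lat)
        if d < 1e-12:
            return temp  # query coincides with a station: return its reading
        w = 1.0 / d ** power
        num += w * temp
        den += w
    return num / den

estimate = idw(-1.89, 52.47)  # a convex combination of the station readings
```

Because the weights are positive and sum to one, the estimate always lies between the coolest and warmest station readings; communicating how far to trust it is where the uncertainty representation comes in.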

Relevance:

90.00%

Publisher:

Abstract:

Studied the attitudes of shopfloor employees toward advanced manufacturing technology (AMT) as a function of experience of working with AMT, skill level, and job involvement. Survey data were collected from 115 employees of a large microelectronics company in England. Four job types were identified, which differed in terms of mode of work (manual/AMT) and skill level (low/high). Results show that those who worked with computers had more favorable attitudes toward AMT than those who did not. Results support A. Rafaeli's (see record 1986-20891-001) finding that the most favorable attitudes toward AMT were held by those who worked with computers and had high job involvement. Skill level had no significant effects on subjects' attitudes.

Relevance:

90.00%

Publisher:

Abstract:

Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%) suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals. © 2013 The Authors. Developmental Medicine & Child Neurology published by John Wiley & Sons Ltd on behalf of Mac Keith Press.

Relevance:

90.00%

Publisher:

Abstract:

In less than a decade, personal computers have become part of our daily lives. Many of us come into contact with computers every day, whether at work, school or home. As useful as the new technologies are, they also have a darker side. By making computers part of our daily lives, we run the risk of allowing thieves, swindlers, and all kinds of deviants directly into our homes. Armed with a personal computer, a modem and just a little knowledge, a thief can easily access confidential information, such as details of bank accounts and credit cards. This book helps people avoid harm at the hands of Internet criminals. It offers a tour of the more dangerous parts of the Internet, as the author explains who the predators are, their motivations, how they operate and how to protect against them. Behind the doors of our own homes, we assume we are safe from predators, con artists, and other criminals wishing us harm. But the proliferation of personal computers and the growth of the Internet have invited these unsavory types right into our family rooms. With a little psychological knowledge a con man can start to manipulate us in different ways. A terrorist can recruit new members and raise money over the Internet. Identity thieves can gather personal information and exploit it for criminal purposes. Spammers can wreak havoc on businesses and individuals. Here, an expert helps readers recognize the signs of a would-be criminal in their midst. Focusing on the perpetrators, the author provides information about how they operate, why they do it, what they hope to do, and how to protect yourself from becoming a victim.