982 results for PERSONAL COMPUTERS
Abstract:
This presentation describes a system for measuring claddings as an example of the many advantages to be obtained by applying a personal computer to eddy current testing. A theoretical model and a learning algorithm are integrated into an instrument; both run on the PC and serve to simplify and enhance multiparameter testing. The PC gives additional assistance by simplifying set-up procedures, data logging, etc.
Abstract:
Communal Internet access facilities, or telecentres, are considered a good way to provide connectivity to people who do not have home connectivity. Attempts are underway to use telecentres as eLearning centres, providing access to learning materials to students who would otherwise not be able to take up eLearning. This paper reports on the findings of qualitative interviews conducted with 18 undergraduate students from two Sri Lankan universities on their eLearning experiences using communal Internet access centres. The findings suggest that despite the efforts of telecentres to provide a good service to eLearners, students face various problems, including costs, logistics, scarcity of resources, connectivity speeds, excessive procedures, and lack of support. The experiences of these Sri Lankan students suggest that there is much to be understood about user perspectives on telecentres, which could help formulate better policies and strategies to support eLearners who depend on communal access facilities.
Abstract:
This paper reports on the use of email as a means to access the self-constructions of gifted young adolescents. Australian research shows that gifted young adolescents may feel more lonely and misunderstood than their same-age counterparts, yet they are seldom asked about their lives. Emerging use of online methods as a means of access to individual lives and perceptions has demonstrated the potential offered by the creation of digital texts as narrative data. Details are given of a qualitative study that engaged twelve children aged between 10 and 14 years, who were screened for giftedness, in a project involving the generation of emailed journal entries sent over a period of 6 months. With emphasis on participatory principles, individual young adolescents produced self-managed journal entries that were written and sent to the researcher from personal computers outside the school setting. Drawing from a theoretical understanding of self as constructed within dialogic relationships, the digital setting of email is proposed as a narrative space that fosters healthy self-disclosure. This paper outlines the benefits of using email as a means to explore emotions, promote reflective accounts of self and support the development of a personal language for self-expression. Individual excerpts will be presented to show that the harnessing of personal narratives within an email context has potential to yield valuable insights into the emotions, personal realities and experiences of gifted young adolescents. Findings will be presented to show that the co-construction of self-expressive and explanatory narratives supported by a facilitative adult listener promoted healthy self-awareness amongst participants. This paper contributes to appreciative conversations about using online methods as a flexible and practical avenue for conducting educational research. 
Furthermore, digital writing in email form will be presented as having distinct advantages over face-to-face methods when utilised with gifted young adolescents who may be unwilling to disclose information within school-based settings.
Abstract:
Historically, distance education consisted of a combination of face-to-face blocks of time and surface-mailed packages. However, advances in information technology literacy and the abundance of personal computers have placed e-learning in increased demand. The authors describe the planning, implementation, and evaluation of the blending of e-learning with face-to-face education in the postgraduate nursing forum. Experiences of this particular student group are also discussed.
Abstract:
The advent of new technologies, like personal computers and networking platforms, represents a series of events that have impacted society as a whole and challenged the artworld to reconsider what is 'contemporary'.
Abstract:
The draft of the first stage of the national curriculum has now been published. Its final form, to be presented in December 2010, should be the centrepiece of Labor’s Educational Revolution. All the other aspects – personal computers, new school buildings, rebates for uniforms and even the MySchool report card – are marginal to the prescription of what is to be taught and learnt in schools. The seven authors in this journal’s Point and Counterpoint (Curriculum Perspectives, 30(1) 2010, pp.53-74) raise a number of both large and small issues in education as a whole, and in science education more particularly. Two of them (Groves and McGarry) make brief reference to earlier attempts to achieve a national curriculum in Australia. Those writing from New Zealand and the USA will be unaware of just how ambitious this project is for Australia - a bold and overdue educational adventure or a foolish political decision destined for failure, as happened in the late 1970s and the 1990s.
Abstract:
A test of the useful field of view was introduced more than two decades ago and was designed to reflect the visual difficulties that older adults experience with everyday tasks. Importantly, the useful field of view is one of the most extensively researched and promising predictor tests for a range of driving outcome measures, including driving ability and crash risk, as well as other everyday tasks. Currently available commercial versions of the test can be administered using personal computers and measure the speed of visual processing for rapid detection and localization of targets under conditions of divided visual attention and in the presence and absence of visual clutter. The test is believed to assess higher-order cognitive abilities, but performance also relies on visual sensory function since targets must be visible in order to be attended to. The format of the useful field of view test has been modified over the years; the original version estimated the spatial extent of the useful field of view, while the latest versions measure visual processing speed. While deficits in the useful field of view are associated with functional impairments in everyday activities in older adults, there is also emerging evidence from several research groups that improvements in visual processing speed can be achieved through training. These improvements have been shown to reduce crash risk, and have a positive impact on health and functional well-being, with the potential to increase the mobility and hence independence of older adults.
Abstract:
This is the fourth edition of New Media: An Introduction, with previous editions published by Oxford University Press in 2002, 2005 and 2008. As the first edition of the book published in the 2010s, every chapter has been comprehensively revised, and there are new chapters on: • Online News and the Future of Journalism (Chapter 7) • New Media and the Transformation of Higher Education (Chapter 10) • Online Activism and Networked Politics (Chapter 12). It retains popular features of the third edition, including the twenty key concepts in new media (Chapter 2) and illustrative case studies to assist with teaching new media. The case studies in the book cover: the global internet; Wikipedia; transmedia storytelling; Media Studies 2.0; the games industry and exploitation; video games and violence; WikiLeaks; the innovator’s dilemma; massive open online courses (MOOCs); Creative Commons; the Barack Obama Presidential campaigns; and the Arab Spring. Several major changes in the media environment since the publication of the third edition stand out. Of particular importance has been the rise of social media platforms such as Facebook, Twitter and YouTube, which draw out even more strongly the features of the internet as networked and participatory media, with a range of implications across the economy, society and culture. In addition, the political implications of new media have become more apparent with a range of social media-based political campaigns, from Barack Obama’s successful Presidential election campaigns to the Occupy movements and the Arab Spring. At the same time, the subsequent development of politics in these and other cases has drawn attention to the limitations of thinking about politics or the public sphere in technologically determinist ways. When the first edition of New Media was published in 2002, the concept of new media was seen as being largely about the internet as it was accessed from personal computers.
The subsequent decade has seen a proliferation of platforms and devices: we now access media in all forms from our phones and other mobile platforms; we see television and the internet increasingly converging; and we see a growing uncoupling of digital media content from delivery platforms. While this has a range of implications for media law and policy, from convergent media policy to copyright reform, governments and policy-makers are struggling to adapt to such seismic shifts from mass communications media to convergent social media. The internet is no longer primarily a Western-based medium. Two-thirds of the world’s internet users are now outside Europe and North America; three-quarters of internet users use languages other than English; and three-quarters of the world’s mobile cellular phone subscriptions are in developing nations. It is also apparent that discussions about how to develop new media technologies and discussions about their cultural and creative content can no longer be separated. Discussions of broadband strategies and the knowledge economy need to be increasingly joined with those concerning the creative industries and the creative economy.
Abstract:
A new technology – 3D printing – has the potential to make radical changes to aspects of the way in which we live. Put simply, it allows people to download designs and turn them into physical objects by laying down successive layers of material. Replacements or parts for household objects such as toys, utensils and gadgets could become available at the press of a button. With this innovation, however, comes the need to consider impacts on a wide range of forms of intellectual property, as Dr Matthew Rimmer explains. 3D printing is the latest in a long line of disruptive technologies – including photocopiers, cassette recorders, MP3 players, personal computers, peer-to-peer networks, and wikis – which have challenged intellectual property laws, policies, practices, and norms. As The Economist has observed, ‘Tinkerers with machines that turn binary digits into molecules are pioneering a whole new way of making things—one that could well rewrite the rules of manufacturing in much the same way as the PC trashed the traditional world of computing.’
Abstract:
Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using “end-to-end connectivity” as part of its design, allowing nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a “dumb” network, with “intelligent” devices (such as personal computers) at the end or “last mile” of the network. The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may profoundly challenge the egalitarian and open character of the Internet. This paper will first elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. The use of deep packet inspection has already created legal problems involving fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and an assessment of the conformity of the use of deep packet inspection with law will be made.
There will be a concentration on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. This paper will also incorporate a more fundamental assessment of the values that are desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.
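To make concrete what inspection beyond the endpoints involves, the following is a minimal, hypothetical sketch (the function name and sample fields are illustrative; a production DPI engine is far more elaborate) of a middlebox parsing an IPv4/TCP packet and recovering exactly the fields a "dumb" router never needs to read: the transport ports and the application payload.

```python
# Hypothetical sketch: what a DPI middlebox reads beyond routing headers.
# A plain router needs only the IP header; "deep" inspection also parses
# the TCP header and the application payload carried inside the packet.
import struct

def inspect(packet: bytes) -> dict:
    # IPv4 header: byte 0 holds version and header length (IHL, in words)
    ihl = (packet[0] & 0x0F) * 4
    proto = packet[9]                       # 6 = TCP
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    info = {"src": src, "dst": dst, "proto": proto}
    if proto == 6:
        # TCP header starts right after the IP header
        sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
        data_offset = (packet[ihl + 12] >> 4) * 4
        info["sport"] = sport
        info["dport"] = dport
        info["payload"] = packet[ihl + data_offset:]  # application data
    return info
```

Everything past the `src`/`dst` lookup is information an endpoint would normally consider private to the conversation, which is why ISP- or government-operated inspection at mid-network vantage points raises the privacy and neutrality questions discussed above.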
Abstract:
A tactical gaming model for wargame play between two teams A and B, mediated by a control unit C, has been developed; it runs on IBM personal computers (XT and AT models) with a local area network facility. The simulation model involves communication between the teams, and logging and validation of the teams' actions by the control unit. The validation procedure uses statistical and Monte Carlo techniques. The model has been developed to evaluate the planning strategies of the teams involved. The application software, comprising about 120 files, was developed in BASIC, dBASE and the associated network software. Experience gained in instruction courses using this model will also be discussed.
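As a hypothetical illustration of the Monte Carlo side of the control unit's validation (the actual BASIC/dBASE implementation is not reproduced here; the engagement model, probabilities and thresholds are invented), the umpire might replay a claimed engagement outcome many times and accept the claim only if it falls inside the simulated 95% interval:

```python
# Invented sketch of a control-unit validation step: replay a claimed
# engagement outcome under a simple stochastic model and accept it only
# if it is statistically plausible.
import random

def simulate_engagement(hit_prob: float, shots: int, rng: random.Random) -> int:
    """One Monte Carlo replay: hits scored in `shots` attempts."""
    return sum(rng.random() < hit_prob for _ in range(shots))

def validate_claim(claimed_hits: int, hit_prob: float, shots: int,
                   trials: int = 10000, seed: int = 1) -> bool:
    """Accept the claim if it lies within the simulated 95% interval."""
    rng = random.Random(seed)
    outcomes = sorted(simulate_engagement(hit_prob, shots, rng)
                      for _ in range(trials))
    lo = outcomes[int(0.025 * trials)]
    hi = outcomes[int(0.975 * trials)]
    return lo <= claimed_hits <= hi
```

A claim of 5 hits from 10 shots at 50% accuracy would pass such a check, while a claim of 10 hits from 10 shots at 10% accuracy would be flagged back to the teams.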
Abstract:
Storage systems are widely used and play a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems face issues of cost, limited lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
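As a baseline contrast for the rebuilding question (this sketch is ordinary single-parity RAID-4/5-style coding, not the two-parity MDS constructions studied in the thesis; the function names are illustrative), the code below shows why naive rebuilding is expensive: recovering one erased disk XORs every surviving disk, i.e. reads a fraction 1 of the remaining information, which is the cost Part I reduces toward 1/2.

```python
# Baseline single-parity array code (RAID-4/5 style) for illustration.
# Rebuilding ONE lost disk must read ALL survivors; the thesis's codes
# cut this access fraction down toward 1/2.
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_disks: list) -> list:
    """Append one parity disk equal to the XOR of all data disks."""
    parity = reduce(xor_blocks, data_disks)
    return data_disks + [parity]

def rebuild(disks: list, lost: int) -> bytes:
    """Recover disk `lost` by XOR-ing every surviving disk."""
    survivors = [d for i, d in enumerate(disks) if i != lost]
    return reduce(xor_blocks, survivors)
```

With r = 1 this code tolerates a single erasure; the MDS array codes discussed above extend this to r erasures while keeping encoding and decoding efficient.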
We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem, the universal cycle problem: finding a sequence of integers generating all possible partial permutations.
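The basic rank-modulation idea can be sketched minimally as follows (function names are illustrative and this ignores the bounded/partial variants and Gray codes discussed above): reading a block of cells means sorting them by charge, and programming only has to realize a relative order, so overshooting a target level cannot corrupt the stored permutation.

```python
# Minimal sketch of rank modulation: n cells store the permutation
# induced by the relative order of their (analog) charge levels, so no
# discrete threshold levels are needed and programming overshoot is
# harmless as long as the ordering is preserved.
def demodulate(charges: list) -> list:
    """Read the cells: rank cell indices from highest charge to lowest."""
    return sorted(range(len(charges)), key=lambda i: -charges[i])

def modulate(perm: list, step: float = 1.0) -> list:
    """Program charges realizing `perm` (perm[0] = highest-charged cell)."""
    charges = [0.0] * len(perm)
    for rank, cell in enumerate(perm):
        charges[cell] = (len(perm) - rank) * step
    return charges
```

Because only the ordering matters, `demodulate(modulate(perm))` returns `perm` for any spacing of the charge levels, which is exactly why overshoot errors disappear in this representation.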
Abstract:
The personal computer has become commonplace on the desk of most scientists. As hardware costs have plummeted, software capabilities have expanded enormously, permitting the scientist to examine extremely large datasets in novel ways. Advances in networking now permit rapid transfer of large datasets, which can often be used unchanged from one machine to the next. In spite of these significant advances, many scientists still use their personal computers only for word processing or e-mail, or as "dumb terminals". Many are simply unaware of the richness of software now available to statistically analyze and display scientific data in highly innovative ways. This paper presents several examples drawn from actual climate data analysis that illustrate some novel and practical features of several widely-used software packages for Macintosh computers.