888 results for Diffusion of computers


Relevance:

90.00%

Publisher:

Abstract:

Using the volatile memory of other computers within a system to store checkpoints is an alternative to the traditional approach of using stable storage. The objective of this study is to develop a storage mechanism using at-least-k delivery semantics. These semantics allow data to be saved to a minimum number of computers simultaneously using group communications, without requiring that every computer in the group successfully acknowledge receipt. The new storage mechanism is implemented in the GENESIS checkpointing facility v2.0. The results showed that the at-least-k storage mechanism provides low storage latency; however, the execution overhead incurred on applications running within the system is higher than when remote stable storage is used to store checkpoints.
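As a rough illustration of the at-least-k idea (a hypothetical sketch, not the GENESIS v2.0 implementation; the function and peer interface below are invented for illustration), a sender can deliver a checkpoint to a group of peers and treat the store as successful once any k of them acknowledge:

```python
import threading
import queue

def at_least_k_store(checkpoint, peers, k, timeout=1.0):
    """Send `checkpoint` to all peers; succeed once at least k acks arrive.

    `peers` is a list of callables standing in for remote computers: each
    accepts the checkpoint and returns True (ack) or False/raises (no ack).
    Hypothetical illustration only, not the GENESIS v2.0 API.
    """
    acks = queue.Queue()

    def deliver(peer):
        try:
            if peer(checkpoint):
                acks.put(True)
        except Exception:
            pass  # a failed peer simply never acknowledges

    # Group send: every peer receives the checkpoint concurrently.
    for peer in peers:
        threading.Thread(target=deliver, args=(peer,), daemon=True).start()

    # The sender only blocks until k acknowledgements, not all of them.
    received = 0
    while received < k:
        try:
            acks.get(timeout=timeout)
            received += 1
        except queue.Empty:
            return False  # could not collect k acknowledgements in time
    return True  # checkpoint resides on at least k computers
```

Because the sender returns as soon as k acknowledgements arrive, slow or failed group members do not add to storage latency, which matches the low latency reported above.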

Relevance:

90.00%

Publisher:

Abstract:

Australia adopted digital TV (DTV) on 1 January 2001 but, owing to slow adoption by end users, the deadline to discontinue the analog signal has so far been postponed twice. This paper examines the history and current status of DTV adoption in Australia with reference to theories of adoption and diffusion and the Justification Model of Technology, and asks why end users appear reluctant to adopt in spite of affordable converters. End-user opinions on ‘why they do not adopt’ and ‘what may encourage them to adopt’ are examined using public submissions to the 2005 parliamentary ‘Inquiry into the uptake of digital TV in Australia’. The paper advocates relevant media literacy programs to address the low public awareness of DTV and its benefits, because its rejection may result in less affluent end users losing the chance to receive a range of convergent services in the future via the ubiquitous and affordable television.

Relevance:

90.00%

Publisher:

Abstract:

The aim of this study was to explore the effects of diets containing saturated fatty acids (SFA), monounsaturated fatty acids (MUFA), and ω-3 and ω-6 polyunsaturated fatty acids (ω-3 and ω-6 PUFA, respectively) on the passive and active transport properties of rat jejunum using marker compounds. Rats were fed diets supplemented with 18.4% (w/w) lipid (4 groups) or standard rat chow (1 group) for a period of 30 days. At the end of the dietary period, mucosal scrapings were taken for the determination of membrane phospholipids, and the apparent jejunal permeability of radiolabelled marker compounds was determined using modified Ussing chambers. Changes in the phospholipid content of the brush border membrane reflected the different lipid content of the diets. The passive paracellular permeability of mannitol was not significantly affected by the fatty acid composition of the diet, although there was a trend toward decreased mannitol permeability in the rats fed both the ω-3 and ω-6 PUFA diets. In comparison, the transcellular diffusion of diazepam was reduced by 20% (P < 0.05) in rats fed diets supplemented with ω-3 and ω-6 PUFA. In the lipid-fed rats, the serosal to mucosal flux of digoxin, an intestinal P-glycoprotein substrate, was reduced by 20% (P < 0.05) relative to the chow-fed group; however, there were no significant differences between the different lipid groups. The active absorption of D-glucose via the Na+-dependent transport pathway was highest in the SFA, MUFA and ω-3 PUFA dietary groups, intermediate in the low-fat chow group and lowest in the ω-6 PUFA group, and was positively correlated with short-circuit current. These studies indicate that dietary fatty acid changes can result in moderate changes to the active and passive transport properties of excised rat jejunum.
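Apparent permeability values of the kind reported from Ussing-chamber experiments are conventionally computed as Papp = (dQ/dt)/(A·C0). A minimal sketch with illustrative numbers (the function name and the values are invented for illustration, not taken from this study):

```python
def apparent_permeability(flux_rate, area_cm2, donor_conc):
    """Papp (cm/s) = (dQ/dt) / (A * C0).

    flux_rate  -- rate of marker appearance in the receiver chamber (e.g. nmol/s)
    area_cm2   -- exposed tissue area (cm^2)
    donor_conc -- initial donor-side concentration (e.g. nmol/cm^3)
    Units must be mutually consistent; the numbers below are illustrative only.
    """
    return flux_rate / (area_cm2 * donor_conc)

# Illustrative example: 0.05 nmol/s across 0.5 cm^2 from a 1000 nmol/cm^3 donor
papp = apparent_permeability(0.05, 0.5, 1000.0)  # -> 1e-4 cm/s
```

On this convention, the 20% reduction in diazepam flux reported above translates directly into a 20% lower Papp at the same donor concentration and tissue area.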

Relevance:

90.00%

Publisher:

Abstract:

This paper reports on a study of the perceived effectiveness of educational resources within the context of a single course in a first-year biology program at the University of Sydney (Australia). The overall study examined the dynamic state of perceptions towards these resources by the major stakeholders involved with the course (students, teaching staff, and technical staff). A major focus of the research was the extent to which the students used the computer-based resources made available to them, and staff and students' perceptions of the usefulness of these resources in supporting their learning. Specifically, results are discussed relating to student use of computers and the Internet, use of biology online materials in the virtual learning environment, use and perceptions of communication technologies, and use and perceptions of computer-based online resources. Data were collected from the students using surveys and focus groups, and from staff using surveys and interviews, within an action-research paradigm. While the majority of students found the resources to be of use in supporting learning, some did not find them useful, and some did not use them at all. In comparison, the staff had higher expectations of both usage and usefulness. The level of student use was not a function of access to computers or the Internet, so the findings suggest that the provision of online resources will not necessarily generate value-added learning.

Relevance:

90.00%

Publisher:

Abstract:

Debates continue about the relative benefits, costs and risks of the diffusion of computer-based technologies throughout society and schooling. One area that has received considerable attention is gender equity. Early work on gender and computers focused on differences between male and female access and use (e.g. Huff, Fleming & Cooper, 1992; Kirkman, 1993; Morritt, 1997; Nelson & Cooper, 1997; Sofia, 1993), with concerns focused on the potential for girls to be disadvantaged. In some respects, it is arguable that problems of gender equity in schools with respect to computers have been overcome. For example, in a small study I conducted in two New Zealand senior primary classrooms in 2003, I found that both boys and girls were motivated to use computers and appeared to have equal opportunities to access computers in the classroom. The students in my study expressed a belief in the importance of using computers, and this belief can also be discerned from educational policy and media coverage. In this paper I argue that, although gender by itself no longer appears to be a source of disadvantage in terms of access to and use of computers in schools, many questions about technology, schooling and power relations still remain unanswered. I present two alternative viewpoints on the new digital age. First, I explore Melanie Stewart Millar’s (1998) analysis of digital discourse as one which reproduces the power of white, middle-class, educated, well-paid males, and excludes anything else it considers ‘Other’. Second, I review arguments that the digital age has provided sites for the transcendence of traditional hierarchies and inequalities (e.g. Spender, 1995). 
I conclude that, despite the discrepancies between these two viewpoints, both concur that technological disadvantage will exacerbate any existing inequality that might result from intersections of identity categories such as gender, ethnicity, age, and socio-economic status, and that in this digital age, technological efficacy is arguably a new identity category.

Relevance:

90.00%

Publisher:

Abstract:

The use of computers is becoming more widespread in education and in the wider Australian community. This communication reports the results of surveys of two cohorts of first year undergraduate students at The University of Western Australia and Deakin University, conducted at the beginning of the 2001 academic year. The surveys confirm that general IT skills among students are increasing, but that the level of skill is variable. This is consistent with a similar survey at Deakin University which was conducted at the beginning of 2000.


Relevance:

90.00%

Publisher:

Abstract:

In a photocatalytic reduction process, when the products formed are not effectively desorbed, they can hinder the diffusion of intermediates on the surface of the catalyst and increase the chance of collisions among the products, resulting in photo-oxidation as a reverse reaction on the surface. This paper analyses a simple kinetic model incorporating the coupled effect of adsorptive photocatalytic reduction and oxidation. The development is based on the Langmuir–Hinshelwood mechanism to model the formation rates of hydrogen and methane through photocatalytic reduction of carbon dioxide with water vapour. Experimental data obtained from the literature were fitted very well. Such a model could serve as a tool for related areas of study. A comparative study using the model developed showed that product concentration in terms of ppm would be an effective measurement of product yields through photocatalytic reduction of carbon dioxide with water vapour.
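For reference, a generic bimolecular Langmuir–Hinshelwood rate law of the kind such a model builds on, with A standing for CO2 and B for water vapour, takes the following form (the paper's exact rate expression is not reproduced in this abstract, so this is an illustrative textbook form only):

```latex
% Generic bimolecular Langmuir--Hinshelwood rate law (illustrative form only):
% r: product formation rate, k: surface rate constant,
% K_A, K_B: adsorption equilibrium constants of CO2 and H2O vapour,
% p_A, p_B: their partial pressures.
r \;=\; \frac{k \, K_A p_A \, K_B p_B}{\left(1 + K_A p_A + K_B p_B\right)^{2}}
```

The squared denominator reflects competitive adsorption of both reactants on the same sites, which is where the coupled reduction/oxidation effect described above would enter such a model.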

Relevance:

90.00%

Publisher:

Abstract:

Prismatic boron nitride nanorods have been grown on single crystal silicon substrates by mechanical ball-milling followed by annealing at 1300 °C. Growth takes place by rapid surface diffusion of BN molecules, and follows heterogeneous nucleation at catalytic particles of an Fe/Si alloy. Lattice imaging transmission electron microscopy studies reveal a central axial row of rather small truncated pyramidal nanovoids on each nanorod, surrounded by three basal planar BN domains which, with successive deposition of epitaxial layers, adapt to the void geometry by crystallographic faceting. The bulk strain in the nanorods is taken up by what appear to be simple nanostacking faults in the external, near-surface domains which, like the nanovoids, are regularly repeated along the nanorod length. Growth terminates with a clear cuneiform tip for each nanorod. Lateral nanorod dimensions are essentially determined by the size of the catalytic particle, which remains as a foundation essentially responsible for base growth. Growth, structure, and dominating facets are shown to be consistent with a system which seeks the lowest bulk and surface energies according to the well-known thermodynamics of the capillarity of solids.

Relevance:

90.00%

Publisher:

Abstract:

In this paper we report on a project that aimed to evaluate the potential of the Internet to reduce social isolation amongst the elderly and, thereby, improve psychosocial functioning. Twenty residents of a retirement village volunteered to be given access to, and training in, the use of computers and the Internet. After 3 months, they exhibited little change in measures of self-esteem, positive affect, personal well-being, optimism and social connectedness. However, they reported that they found the use of the Internet to be of great benefit. Over the 12 months of the study, 12 participants discontinued their involvement for a variety of reasons. After 12 months, the eight participants who remained in the study again reported a range of positive outcomes; however, the quantitative survey data did not confirm these findings of a generally positive experience. This discrepancy between the qualitative (interview) data and the quantitative (survey) data suggests that the impact of the Internet on the well-being of the elderly may be more complex than anticipated, and broader than was assessed psychometrically. We make specific recommendations about introducing computers to the elderly, with care both in how participants are selected and in how their well-being is monitored subsequently.

Relevance:

90.00%

Publisher:

Abstract:

Computers of a non-dedicated cluster are often idle (users attend meetings, have lunch or coffee breaks) or lightly loaded (users carry out simple computations to support problem solving activities). These underutilised computers can be employed to execute parallel applications. Thus, these computers can be shared by parallel and sequential applications, which could lead to improved execution performance. However, there is a lack of experimental studies showing the applications' performance and the system utilization when parallel and sequential applications execute concurrently, or when multiple parallel applications execute concurrently, on a non-dedicated cluster. Here we present the results of an experimental study into load-balancing based scheduling of mixtures of NAS Parallel Benchmarks and BYTE sequential applications on a very low cost non-dedicated cluster. This study showed that the proposed sharing provided a performance boost compared to the execution of the parallel load in isolation on a reduced number of computers, as well as better cluster utilization. The results of this research were used not only to validate other researchers' results generated by simulation but also to support our research mission of widening the use of non-dedicated clusters. The promising results obtained could promote further research studies to convince universities, business and industry, which require a large amount of computing resources, to run parallel applications on their already owned non-dedicated clusters.

Relevance:

90.00%

Publisher:

Abstract:

The use of information is preceded by its availability. For post-industrial economies to exploit information to full potential, it is important for knowledge to be free of vested-interest censorship and manipulation. History suggests that a range of vested interests have manipulated explicit information availability through various forms of sectarian, state and business manipulation of the systems of information storage and transfer. The OECD 1996 report "The Knowledge-Based Economy" recognized that the diffusion of knowledge was as significant as its creation, and that knowledge distribution networks were crucial to innovation, production processes and product development. The success of enterprises and national economies is considered reliant on the effectiveness of their ability to gather, distribute and utilize knowledge. The increasing need for ready access (to information that might become knowledge) in accordance with the OECD definition is particularly relevant to this paper, as it assumes infrastructures capable of providing that need. Wherever there are infrastructures there are opportunities to benefit from them, either for profit or power. This paper considers the implications of sectarian, state and business-model control over the selective content, storage and dissemination of information and knowledge, from both historical and current perspectives. The advent of new technologies, and how they have enabled the flow of information, adds new dimensions to knowledge control, but the quality of knowledge is less certain and who controls or influences the distribution of knowledge is less transparent. It could be argued that at each step in the development of knowledge distribution networks, knowledge and its distribution are not free of the possibility of third-party vested interest.

Relevance:

90.00%

Publisher:

Abstract:

A common characteristic among parallel/distributed programming languages is that a single language is used to specify not only the overall organisation of the distributed application, but also the functionality of the application. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, therefore allowing application designers to freely concentrate on either aspect in a modular fashion. Two new programming languages have been developed for specifying each aspect. These languages are for loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low level interprocess communication. A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system to generate intermediate C code that is compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system. DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process oriented, asynchronous message passing, distributed applications. These two languages have been designed and developed as part of this doctorate in order to specify such distributed applications that execute on a cluster of computers. Both languages are used to specify orthogonal components of an application: on the one hand the organisation of processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently.
The DAL language is used to specify not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, is used to specify the interface, functionality and data structures of application processes. In addition to these languages, a DAL parser, a DAPL parser, and a compilation system have been designed and developed in this project. This compilation system takes DAL and DAPL programs to generate object modules based on machine code, one module for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system is another new component of this project. The purpose of the DAS system is to create, manage, and terminate many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes several operations such as deletion, addition, replacement, and movement of processes, and also detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell). This suite of languages and tools allowed distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction. DAL and DAPL programs of several processes may require a few dozen lines to specify, as compared to several hundred lines of equivalent C code that is generated by the compilation system. Furthermore, the DAL and DAPL compilation system is successful at generating binary object modules, and the DAS system succeeds in instantiating and managing several distributed applications on a cluster.
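The separation the thesis describes, with connectivity declared apart from functionality, can be sketched in Python (a hypothetical analogue; the actual DAL and DAPL syntax, and all names below, are invented for illustration and are not reproduced from the thesis):

```python
import queue

# Connectivity spec (a hypothetical analogue of DAL's role): which process
# may send to which, declared separately from any behaviour.
CONNECTIONS = {("producer", "consumer")}

# Functionality (a hypothetical analogue of DAPL's role): what each process does.
def producer(inbox, send):
    send("consumer", "hello")

def consumer(inbox, send):
    return inbox.get(timeout=1.0)

def run(processes, connections):
    """Tiny harness wiring message-passing processes per the connectivity spec."""
    inboxes = {name: queue.Queue() for name in processes}
    results = {}
    for name, fn in processes.items():
        # Each process gets a send() that enforces the declared connectivity,
        # so functionality code never names a channel the spec does not allow.
        def send(dst, msg, _src=name):
            if (_src, dst) not in connections:
                raise RuntimeError(f"undeclared channel: {_src} -> {dst}")
            inboxes[dst].put(msg)
        results[name] = fn(inboxes[name], send)
    return results

results = run({"producer": producer, "consumer": consumer}, CONNECTIONS)
```

Because the connectivity spec and the process bodies are independent artefacts, either can be rewritten without touching the other, which is the modularity argument made above.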

Relevance:

90.00%

Publisher:

Abstract:

In the current work, two different coatings, nitrocarburised (CN) and titanium carbonitride (TiCN) on M2 grade high speed tool steel, were prepared by commercial diffusion and physical vapour deposition (PVD) techniques, respectively. Properties of the coatings were characterised using a variety of techniques such as Glow-Discharge Optical Emission Spectrometry (GD-OES) and Scanning Electron Microscopy (SEM). Three non-commercial, oil-based lubricants with simplified formulations were used for this study. A tribological test was developed in which two nominally geometrically-identical crossed cylinders slide over each other under selected test conditions. This test was used to evaluate the effectiveness of a pre-applied lubricant film and a surface coating for various conditions of sliding wear. Engineered surface coatings can significantly improve the wear resistance of the tool surface, but their sliding wear performance strongly depends on the type of coating and lubricant combination used. These coating-lubricant interactions can also have a very strong effect on the useful life of the lubricant in a tribological system. Better performance of the lubricants during the sliding wear testing was achieved when they were used with the nitrocarburised (CN) coating. To understand the nature of the interactions and their possible effects on the coating-lubricant system, several surface analysis techniques were used. Molecular level investigation by Fourier Transform Infrared Spectroscopy (FTIR) revealed that oxidative degradation occurred in all used oil-based lubricants during the sliding wear test, but the degradation behaviour of the oil-based lubricants varied with the coating-lubricant system and the wear conditions. The main differences in the carbonyl oxidation region of the FTIR spectra (1900-1600 cm-1) between different coating-lubricant systems may relate to the effective lifetime of the lubricant during the sliding wear test.
Secondary Ion Mass Spectrometry (SIMS) depth profiling shows that the CN coating has the highest lubricant absorbability among the tested tool surfaces. Diffusion of chlorine (Cl), hydrogen (H) and oxygen (O) into the surface and subsurface of the tool suggested that strong interactions occurred between lubricant and tool surface during the sliding wear test. The possible effects of the interactions on the performance of the whole tribological system are also discussed. The Time-of-Flight Secondary Ion Mass Spectrometry (TOF-SIMS) study indicated that the envelope of hydrocarbons (CmHn) of the oil lubricant in the positive TOF-SIMS spectra shifted to lower mass fragments after the sliding wear testing, due to the breakage of long-chain hydrocarbons into short-chain ones during the degradation of the lubricant. The shift of the mass fragment range of the hydrocarbon (CmHn) envelope varies with the type of both tool surface and lubricant, again confirming that variation in the performance of the tool-lubricant system relates to changes in surface chemistry due to tribochemical interactions at the tool-lubricant interface under sliding wear conditions. The sliding wear conditions resulted in changes not only in the topography of the tool surface due to mechanical interactions, as outlined in Chapter 5, but also in surface chemistry due to tribochemical interactions, as discussed in Chapters 6 and 7.

Relevance:

90.00%

Publisher:

Abstract:

Information technology research over the past two decades suggests that the installation and use of computers fundamentally affects the structure and function of organisations and, in particular, the workers in these organisations. Following the release of the IBM Personal Computer in 1982, microcomputers have become an integral part of most work environments. The accounting services industry, in particular, has felt the impact of this ‘microcomputer revolution’. In Big Six accounting firms, there is almost one microcomputer for each professional accountant employed. Notwithstanding this, little research has been done on the effect of microcomputers on the work outcomes of professional accountants working in these firms. This study addresses this issue. It assesses, in an organisational setting, how accountants’ perceptions of ease of use and usefulness of microcomputers act on their computer anxieties, microcomputer attitudes and use to affect their job satisfaction and job performance. The research also examines how different types of human-computer interfaces affect the relationships between accountants' beliefs about microcomputer utility and ease of use, computer anxiety, microcomputer attitudes and microcomputer use. To attain this research objective, a conceptual model was first developed. The model indicates that work outcomes (job satisfaction and job performance) of professional accountants using microcomputers are influenced by users' perceptions of ease of use and usefulness of microcomputers via paths through (a) the level of computer anxiety experienced by users, (b) the general attitude of users toward using microcomputers, and (c) the extent to which microcomputers are used by individuals. Empirically testable propositions were derived from the model to test the postulated relationships between these constructs.
The study also tested whether or not users of different human-computer interfaces reacted differently to the perceptions and anxieties they hold about microcomputers and their use in the workplace. It was argued that users of graphical interfaces, because of the characteristics of those interfaces, react differently to their perceptions and anxieties about microcomputers compared with users of command-line (or text-based) interfaces. A passive-observational study in a field setting was used to test the model and the research propositions. Data were collected from 164 professional accountants working in a Big Six accounting firm in a metropolitan city in Australia. Structural equation modelling techniques were used to test the hypothesised causal relationships between the components comprising the general research model. Path analysis and ordinary least squares regression were used to estimate the parameters of the model and analyse the data obtained. Multisample analysis (or stacked model analysis) using EQS was used to test the fit of the model to the data of the different human-computer interface groups and to estimate the parameters for the paths in those different groups. The results show that the research model is a good description of the data. The job satisfaction of professional accountants is directly affected by their attitude toward using microcomputers and by microcomputer use itself. However, job performance appears to be only directly affected by microcomputer attitudes. Microcomputer use does not directly affect job performance. Along with perceived ease of use and perceived usefulness, computer anxiety is shown to be an important determinant of attitudes toward using microcomputers - higher levels of computer anxiety negatively affect attitudes toward using microcomputers. Conversely, higher levels of perceived ease of use and perceived usefulness heighten individuals' positive attitudes toward using microcomputers.
Perceived ease of use and perceived usefulness also indirectly affect microcomputer attitudes through their effect on computer anxiety. The results show that higher levels of perceived ease of use and perceived usefulness result in lower levels of computer anxiety. A surprising result from the study is that while perceived ease of use is shown to directly affect the level of microcomputer usage, perceived usefulness and attitude toward using microcomputers do not. The results of the multisample analysis confirm that the research model fits the stacked model and that the stacked model is a significantly better fit if specific parameters are allowed to vary between the two human-computer interface user groups. In general, these results confirm that an interaction exists between the type of human-computer interface (the variable providing the grouping) and the other variables in the model. The results show a clear difference between the two groups in the way in which perceived ease of use and perceived usefulness affect microcomputer attitude. In the case of users of command-line interfaces, these variables appear to affect microcomputer attitude via an intervening variable, computer anxiety, whereas in the graphical interface user group the effect occurs directly. Related to this, the results show that perceived ease of use and perceived usefulness have a significant direct effect on computer anxiety in command-line interface users, but no effect at all for graphical interface users. Of the two exogenous variables, only perceived ease of use, and then only for command-line interface users, has a direct significant effect on the extent of use of microcomputers. In summary, the research has contributed to the development of a theory of individual adjustment to information technology in the workplace.
It identifies certain perceptions, anxieties and attitudes about microcomputers and shows how they may affect work outcomes such as job satisfaction and job performance. It also shows that microcomputer-interface types have a differential effect on some of the hypothesised relationships represented in the general model. Future replication studies could sample a broader cross-section of the microcomputer user community. Finally, the results should help Big Six accounting firms to maximise the benefits of microcomputer use by making them aware of how working with microcomputers affects job satisfaction and job performance.
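The path-analytic estimation described in this abstract can be illustrated with a toy sketch: each path coefficient is a slope from an ordinary least squares regression. The data below are synthetic, with signs chosen to match the reported findings (higher ease of use lowers anxiety; higher anxiety worsens attitude); the coefficients and sample are invented for illustration, not the study's data.

```python
import random

def ols_slope(x, y):
    """Ordinary least squares slope of y on x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

random.seed(0)
# Synthetic path model: ease of use -> anxiety -> attitude.
# True path coefficients -0.6 and -0.8 are illustrative values only.
ease = [random.gauss(0, 1) for _ in range(500)]
anxiety = [-0.6 * e + random.gauss(0, 0.5) for e in ease]
attitude = [-0.8 * a + random.gauss(0, 0.5) for a in anxiety]

path_ease_anxiety = ols_slope(ease, anxiety)        # recovers roughly -0.6
path_anxiety_attitude = ols_slope(anxiety, attitude)  # recovers roughly -0.8
```

Estimating each arrow of the conceptual model as a regression in this way is what lets the study quantify direct effects (one slope) against indirect effects (a product of slopes along a path).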

Relevance:

90.00%

Publisher:

Abstract:

Human development has occurred against a timeline that has seen the creation and diffusion of one innovation after another. These innovations range from language to complex computing and information technologies. The latter are assisting with the distribution of information, and extend to the distribution of the human species beyond the planet Earth. From early times, information has been published, mostly for a fee to the publisher. The absorption and use of information have had a high priority in most societies from early times, and have become institutionalised in universities and institutes of technical learning. For most in Western societies, education is now a matter of ‘lifelong learning’. Today, we see higher education institutions, worldwide, adapting their organisational structures and operating procedures and forming strategic alliances with communications content providers and carriers as well as with information technology companies. Modern educational institutes seek productivity and efficiency. Many also seek to differentiate themselves from competitors. Technological convergence is often seen by management to be a saviour in many educational organisations. It is hoped that lower capital and recurrent costs can be achieved, and that competitors in an increasingly globalised industry can be held at bay by strategic use of knowledge media (Eisenstadt, 1995) commonly associated with distance education in the campus setting. Knowledge media set-up costs, intellectual property costs and training costs for staff and students are often so high as to make their use not viable for Australian institutes of higher education. Against this backdrop, one might expect greater educator and student use of publisher produced textbooks and digital enhancements to the textbook, particularly by those involved in distance education.
A major issue is whether or not the timing of instructor adoption of converging information and communications technologies aligns with the wishes of both higher education management and government, and with those who seek commercial gain from the diffusion and adoption of such technologies. Also at issue is whether or not it is possible to explain variance in stated intentions to recommend adoption of new learning technologies in higher education, and in their implementation. Will educators recommend adoption of individual knowledge media such as World Wide Web access to study materials by students? And what will be the form of this tool and others used in higher education? This thesis reports on more recent changes in the technological environment and seeks to contribute to an understanding of the factors that lead to a willingness, or unwillingness, on the part of higher education instructors, as influencers and content providers, to utilise these technologies. As such, it is a diffusion study which seeks to fill a gap in the literature. Diffusion studies typically focus on predicting adoption based on characteristics of the potential adopter. Few studies examine the relationship between characteristics of the innovation and adoption. Nearly all diffusion studies involve what is termed discontinuous innovation (Robertson, 1971). That is, the innovation involves adoptees in a major departure from previous practice. This study seeks to examine the relationship between previous experience of related technologies and adoption or rejection of dynamically continuous innovation. Continuous and dynamically continuous innovations are the most numerous in the real world, yet they are numerically the least scrutinised by way of academic research.
Moreover, the three-year longitudinal study of educators in Australia and New Zealand meets important criteria laid down by researchers Tornatzky and Klein (1982) and Rogers (1995) that are often not met by similar studies. In particular, the study examines diffusion as it is unfolding, rather than selectively examining a single innovation after the fact, thus avoiding a possible pro-innovation bias. The study examines the situation for both ‘all educators’ and ‘marketing / management educators’ alone in seeking to meet the following aim: establish whether intended adopters of specific knowledge media have had more experience of other computer-based technologies than have those not intending to adopt said knowledge media. The analytical phase entails use of factor analysis and discriminant analysis to conclude that it is possible to discriminate adopters of selected knowledge media based on previous use of related technologies. The study does not find any generalised factor that enables such discrimination among educators. Thus the study supports the literature in part, but fails to find generalised factors that enable unambiguous prediction of knowledge media adoption or otherwise among each grouping of educators examined. The implication is that even in the case of related products and services (continuous or dynamically continuous innovation), there is no statistical certainty that prior usage of related products or technologies is related to intentions to use knowledge media in the future. In this regard, the present study might be said to confirm the view that Rogers and Shoemaker's (1971) conceptualisation of perceived innovation characteristics may only apply to discontinuous innovations (Stratton, Lumpkin & Vitell, 1997).
The implications for stakeholders such as higher education management is that when seeking to appoint new educators or existing staff to knowledge media project teams, there is some support for the notion that those who already use World Wide Web based technologies are likely to take these technologies into teaching situations. The same claim cannot be made for computer software use in general, nor Internet use in general.