990 results for Computer software maintenance


Relevance:

80.00%

Publisher:

Abstract:

Because access to new technologies is unequally distributed, there has been considerable debate about the growing gap between the so-called information-rich and information-poor. Such concerns have led to high-profile information technology policy initiatives in many countries. In Australia, in an attempt to 'redress the balance between the information rich and poor' by providing 'equal access to the World Wide Web' (Virtual Communities, 2002), the Australian Council of Trade Unions, Virtual Communities (a computer/software distributor) and Primus (an Internet provider) in late 1999 formed an alliance to offer relatively inexpensive computer and Internet access to union members in order to make 'technology affordable for all Australians' (Virtual Communities, 2002). In this paper, we examine four families, one of which had long-term Information and Communication Technologies (ICT) access, and three of which took advantage of the Virtual Communities offer to get home computer and Internet access for the first time. We examine their engagement with ICT and suggest that previously disadvantaged family members are not particularly advantaged by their access to ICT.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: The objective was to determine the contribution of transfusion in the past to the risk of current infection with hepatitis B or C among patients attending a large hospital for endoscopic procedures.
STUDY DESIGN AND METHODS: Blood samples had been tested for hepatitis markers by routine methods. Patients completed a comprehensive risk factor questionnaire and results were analyzed using computer software.
RESULTS: Twenty-seven percent of the 2120 participants in the study had received transfusions in the past. There was no increase in the prevalence of hepatitis B among those transfused. Compared with nontransfused participants, recipients of blood before the implementation of hepatitis C virus (HCV) screening in 1990 had a 4.6-fold increased risk of HCV infection, whereas those transfused with screened blood had a 3-fold increased risk. The difference between the odds ratios for patients transfused before and after screening was not significant.
CONCLUSIONS: Because screening has almost completely eliminated HCV from the blood supply, our finding of a continuing association of HCV infection with transfusion was unexpected. It implies that there are other significant nosocomial risks for hepatitis C transmission associated with the clinical situations in which patients received blood. These should be actively investigated.
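The comparison reported above is expressed as odds ratios. As context, here is a minimal sketch of how an odds ratio and its 95% confidence interval can be computed from a 2x2 exposure/outcome table using the standard Woolf (logit) method; the counts below are synthetic and are not the study's data.

```python
# Illustrative only: odds ratio and 95% CI from a 2x2 table (Woolf's method).
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed cases, b=exposed non-cases, c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = exp(log(or_) - z * se_log_or)
    hi = exp(log(or_) + z * se_log_or)
    return or_, lo, hi

# Synthetic counts, not the study's data:
print(odds_ratio_ci(20, 380, 10, 390))   # OR ~ 2.05 with its 95% CI
```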

Relevance:

80.00%

Publisher:

Abstract:

This article provides step-by-step instructions for creating recognisable melodies using the Microworlds melody program. It describes how the program works to create notes. By writing a program, the user can recreate a familiar melody. The program allows the user to include different instruments and rests in the music, and also provides a melody gadget that lets the user hear notes without programming.
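The Microworlds program has its own syntax, which is not reproduced here. As a purely illustrative sketch of the underlying idea - a melody represented as a list of notes, durations and rests, rendered by a program - the following Python fragment writes a short tune to a WAV file; the note table, tempo and melody are assumptions.

```python
# A minimal sketch: a melody as (note, beats) pairs rendered to audio.
# Not Microworlds syntax; note frequencies and tempo are assumptions.
import math, struct, wave

NOTES = {"C": 261.63, "D": 293.66, "E": 329.63, "F": 349.23,
         "G": 392.00, "A": 440.00, "B": 493.88, "R": 0.0}  # "R" = rest

def render(melody, bpm=120, rate=44100, path="melody.wav"):
    frames = bytearray()
    for note, beats in melody:
        freq, n = NOTES[note], int(rate * beats * 60 / bpm)
        for i in range(n):
            sample = math.sin(2 * math.pi * freq * i / rate) if freq else 0.0
            frames += struct.pack("<h", int(sample * 20000))  # 16-bit mono
    with wave.open(path, "wb") as f:
        f.setnchannels(1); f.setsampwidth(2); f.setframerate(rate)
        f.writeframes(bytes(frames))

# First phrase of "Mary Had a Little Lamb", ending with a rest:
render([("E", 1), ("D", 1), ("C", 1), ("D", 1),
        ("E", 1), ("E", 1), ("E", 2), ("R", 1)])
```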

Relevance:

80.00%

Publisher:

Abstract:

Over the past 10 years or so, confidence intervals have become increasingly recognised in program evaluation, and in quantitative health measurement generally, as the preferred way of reporting the accuracy of statistical estimates. Statisticians have found that the more traditional ways of reporting results - using P-values and hypothesis tests - are often difficult to interpret and can be misleading. This is particularly the case when sample sizes are small and results are 'negative' (i.e. P > 0.05); in these cases, a confidence interval can communicate much more information about the sample and, by inference, about the population. Despite this trend among statisticians and health promotion evaluators towards the use of confidence intervals, it is surprisingly difficult to find succinct and reasonably simple methods to actually compute one. This is particularly the case for proportions or percentages. Much of the data analysed in health promotion are binary or categorical, rather than the continuous quantities often found in laboratories or other branches of science, so health promotion evaluators need to be able to present confidence intervals for percentages or proportions. However, SPSS, the statistical analysis package most popular among health promotion professionals, does not have a routine to compute a simple confidence interval for a proportion! To address this shortcoming, I present in this paper some fairly simple strategies for computing confidence intervals for population percentages, both manually and using the right computer software.
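As a companion to the strategies discussed, here is a minimal sketch of two standard computations for a 95% confidence interval for a proportion: the simple normal-approximation (Wald) interval and the generally better-behaved Wilson score interval. This is illustrative Python, not the paper's own worked examples; the sample figures are made up.

```python
# Two standard confidence intervals for a proportion; z = 1.96 gives 95%.
from math import sqrt

def wald_ci(successes, n, z=1.96):
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)            # normal approximation
    return p - half, p + half

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom        # shrunk towards 0.5
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# e.g. 40 of 200 respondents (20%) reported the behaviour of interest:
print(wald_ci(40, 200))    # approx (0.145, 0.255)
print(wilson_ci(40, 200))  # approx (0.150, 0.261)
```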

Relevance:

80.00%

Publisher:

Abstract:

RFID (Radio Frequency Identification) technology can be employed not only to reduce companies' management costs but also to track uniquely each shipping container, pallet, case, and product being manufactured, shipped and sold, thereby increasing visibility and accountability in the supply chain. RFID technology connects the supply chain players (i.e., suppliers, manufacturers, distributors, retailers and customers) and allows them to exchange data and product information. Despite these potential benefits, there are challenges and obstacles to the deployment of an RFID-enabled system in the global supply chain. The paper outlines the major RFID issues faced by supply chain management. We also present a case study of pharmaceutical supply chain management (SCM) applications, addressing and examining the issues of RFID implementation in an SCM system.

Relevance:

80.00%

Publisher:

Abstract:

The advent of the Internet and the World Wide Web has been instrumental in the growth of web-based information systems (WBIS). Such systems are designed to improve productivity and data accuracy and to reduce paperwork and administrative overheads. Moreover, unlike their conventional non-web-based predecessors, WBIS are commonly aimed at users who are casual and untrained, geographically distributed and non-homogeneous. The dissemination of WBIS necessitates additional infrastructure support in the form of a security system, workflow and transaction management, and web administration. WBIS are commonly developed using an evolutionary approach, whereby the version of the application acquired from the vendor is first deployed as a pilot to gather feedback from target users before the evolutionary cycles commence. While a number of web development methodologies have been proposed in existing research, there is a dearth of empirical evidence elucidating the experiences of project initiators in pursuing the evolution of web services, a process that undoubtedly involves dealing with stakeholder issues. This research project presents a phenomenological investigation of the experiences of project managers with the implementation of web-based employee service systems (ESS), a domain that has witnessed sharp growth in Australia in recent times. The project managers' rich, multidimensional account of their experiences with the implementation of ESS revealed social obstacles and a fragility of intra-organizational relationships that demanded a cautious and tactful approach. Thus, the study provides a socio-organizational perspective on web projects, in contrast to the functionalist paradigm of current web development methodologies. The research also confirms that project managers' consideration of stakeholder concerns is crucial to the successive cycles of ESS evolution. Project managers address stakeholder concerns by pursuing actions aimed at encouraging ESS usage, but such actions can in turn have consequences necessitating subsequent iterations of system enhancement and improvement. Finally, the research discovered that despite the different socio-political climates prevalent in the various organizations in which ESS are implemented, the experiences of project managers in dealing with stakeholder concerns can be captured and independently confirmed in terms of their perceived relevance and usefulness in problem-solving within the application domain.

Relevance:

80.00%

Publisher:

Abstract:

Human development has occurred against a timeline that has seen the creation and diffusion of one innovation after another. These innovations range from language to complex computing and information technologies. The latter are assisting with the distribution of information, and extend to the distribution of the human species beyond the planet Earth. From early times, information has been published, mostly for a fee to the publisher. The absorption and use of information has had a high priority in most societies from early times, and has become institutionalised in universities and institutes of technical learning. For most in Western societies, education is now a matter of 'lifelong learning'. Today, we see higher education institutions worldwide adapting their organisational structures and operating procedures and forming strategic alliances with communications content providers and carriers, as well as with information technology companies. Modern educational institutes seek productivity and efficiency. Many also seek to differentiate themselves from competitors. Technological convergence is often seen by management as a saviour in many educational organisations. It is hoped that lower capital and recurrent costs can be achieved, and that competitors in an increasingly globalised industry can be held at bay by strategic use of knowledge media (Eisenstadt, 1995) commonly associated with distance education in the campus setting. Knowledge media set-up costs, intellectual property costs and training costs for staff and students are often so high as to make their use unviable for Australian institutes of higher education. Against this backdrop, one might expect greater educator and student use of publisher-produced textbooks and digital enhancements to the textbook, particularly among those involved in distance education. A major issue is whether or not the timing of instructor adoption of converging information and communications technologies aligns with the wishes of both higher education management and government, and with those who seek commercial gain from the diffusion and adoption of such technologies. Also at issue is whether or not it is possible to explain variance in stated intentions to recommend adoption of new learning technologies in higher education, and in their implementation. Will educators recommend adoption of individual knowledge media, such as World Wide Web access to study materials by students? And what form will this tool and others used in higher education take? This thesis reports on more recent changes in the technological environment and seeks to contribute to an understanding of the factors that lead to a willingness, or unwillingness, on the part of higher education instructors, as influencers and content providers, to utilise these technologies. As such, it is a diffusion study that seeks to fill a gap in the literature. Diffusion studies typically focus on predicting adoption based on characteristics of the potential adopter. Few studies examine the relationship between characteristics of the innovation and adoption. Nearly all diffusion studies involve what is termed discontinuous innovation (Robertson, 1971); that is, innovation that involves adoptees in a major departure from previous practice. This study instead examines the relationship between previous experience of related technologies and adoption or rejection of dynamically continuous innovation.
Continuous and dynamically continuous innovations are the most numerous in the real world, yet they are numerically the least scrutinised by academic research. Moreover, the three-year longitudinal study of educators in Australia and New Zealand meets important criteria laid down by researchers Tornatzky and Klein (1982) and Rogers (1995) that are often not met by similar studies. In particular, the study examines diffusion as it unfolds, rather than selectively examining a single innovation after the fact, thus avoiding a possible pro-innovation bias. The study examines the situation both for 'all educators' and for 'marketing/management educators' alone, in seeking to meet the following aim: to establish whether intended adopters of specific knowledge media have had more experience of other computer-based technologies than those not intending to adopt said knowledge media. The analytical phase uses factor analysis and discriminant analysis, and concludes that it is possible to discriminate adopters of selected knowledge media based on previous use of related technologies. The study does not find any generalised factor that enables such discrimination among educators. Thus the study supports the literature in part, but fails to find generalised factors that enable unambiguous prediction of knowledge media adoption or otherwise among each grouping of educators examined. The implication is that even in the case of related products and services (continuous or dynamically continuous innovation), there is no statistical certainty that prior usage of related products or technologies is related to intentions to use knowledge media in the future. In this regard, the present study might be said to confirm the view that Rogers and Shoemaker's (1971) conceptualisation of perceived innovation characteristics may only apply to discontinuous innovations (Stratton, Lumpkin & Vitell, 1997). The implication for stakeholders such as higher education management is that, when seeking to appoint new educators or existing staff to knowledge media project teams, there is some support for the notion that those who already use World Wide Web-based technologies are likely to take these technologies into teaching situations. The same claim cannot be made for computer software use in general, nor for Internet use in general.
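The analytical phase described above rests on discriminant analysis. As a hedged illustration of that kind of analysis, and not of the thesis's data or variables, the following sketch fits a linear discriminant to synthetic 'prior technology use' scores to separate intending adopters from non-adopters; all variable names and numbers are assumptions.

```python
# Illustrative discriminant analysis on synthetic adopter data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
# Columns: assumed prior use of WWW, email, and CD-ROM courseware.
X = rng.normal(size=(n, 3))
# Assumption: intending adopters tend to score higher on prior WWW use.
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("classification accuracy:", lda.score(X, y))
print("discriminant weights:", lda.coef_)   # WWW use dominates, by design
```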

Relevance:

80.00%

Publisher:

Abstract:

Requirements engineering is the commencing phase in the development of software applications and information systems. It is concerned with understanding and specifying the customer's requirements of the system to be delivered. Throughout the literature, this is agreed to be one of the most crucial and, unfortunately, problematic phases in development. Despite the diversity of research directions, approaches and methods, understanding and management of the process remain limited. Among contemporary approaches to improving the current practice of requirements engineering, the Formal Object-Oriented Method (FOOM) has been introduced as a promising new solution. The FOOM approach to requirements engineering is based on a synthesis of socio-organisational theory, the object-oriented approach, and mathematical formal specification. The entire FOOM specification process is evolutionary and involves a large volume of changes in requirements. During this process, requirements evolve through informal, semi-formal, and formal forms, while maintaining a semantic link between these forms and, most importantly, conforming to the customer's requirements. A deep understanding of the complexity of the requirements model and its dynamics is critical to improving requirements engineering process management. This thesis investigates the benefits of documenting both the evolution of the requirements model and the rationale for that evolution. Design explanation explains and justifies the deliberations of, and decisions made during, the design activity. In this thesis, design explanation is used to describe the requirements engineering process in order to improve understandability of, and traceability within, the evolving requirements specification. The design explanation recorded during this research project also assisted the researcher in gaining insights into the creative and opportunistic characteristics of the requirements engineering process. The thesis offers an interpretive investigation into incorporating design explanation within FOOM in order to extend and strengthen the method. The researcher's interpretation and analysis of the collected data highlight an insight-driven and opportunistic process rather than a strictly and systematically predefined one. In fact, the process was not smoothly evolutionary, but involved occasional 'crisis' points at which the model was reconceptualised, simplified and restructured. The contributions of the thesis therefore lie not only in an effective incorporation of design explanation within FOOM, but also in a deep understanding of the dynamic process of requirements engineering. This new understanding of the complexity of the requirements model and its dynamics suggests new directions for future research and forms a basis for a new approach to process management.

Relevance:

80.00%

Publisher:

Abstract:

The recent emergence of intelligent agent technology and advances in information gathering have been important steps forward in efficiently managing and using the vast amount of information now available on the Web to make informed decisions. There are, however, still many problems to be overcome in the information gathering research arena before the relevant information required by end users can be delivered. Good decisions cannot be made without sufficient, timely, and correct information. Traditionally it is said that knowledge is power; nowadays, sufficient, timely, and correct information is power. Gathering relevant information to meet user information needs is therefore the crucial step in making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of available information, the diversity of its formats, its uncertainties, and its distributed locations (e.g. the World Wide Web) hinder the process of gathering the right information to meet user needs. Specifically, two fundamental issues regarding the efficiency of information gathering are mismatch and overload. Mismatch means that some information that meets user needs has not been gathered (it is missed), whereas overload means that some gathered information is not what users need. Traditional information retrieval has developed well over the past twenty years, and the introduction of the Web has changed people's perceptions of it. Usually, the task of information retrieval is considered to be leading the user to those documents that are relevant to his or her information needs; a related function is to filter out irrelevant documents (information filtering). Research into traditional information retrieval has provided many retrieval models and techniques for representing documents and queries. Nowadays, information is becoming highly distributed and increasingly difficult to gather, and user information needs contain many uncertainties. These observations motivate research into agent-based information gathering. In such systems, intelligent agents obtain commitments from their users and act on the users' behalf to gather the required information. They can retrieve relevant information from highly distributed, uncertain environments by virtue of their intelligence, autonomy and distribution. Current research on agent-based information gathering systems is divided into single-agent gathering systems and multi-agent gathering systems. In both areas, open problems remain before agent-based information gathering systems can retrieve uncertain information effectively from highly distributed environments. The aim of this thesis is to develop a theoretical framework for intelligent agents to gather information from the Web. The research integrates the areas of information retrieval and intelligent agents. The specific research areas in this thesis are the development of an information filtering model for single-agent systems, and the development of a dynamic belief model for information fusion in multi-agent systems.
The research results are also supported by the construction of real information gathering agents (e.g., Job Agent) for the Internet that help users gather useful information stored on Web sites. In such a framework, information gathering agents have the ability to describe (or learn) user information needs, and to act like users in retrieving, filtering, and/or fusing information. A rough set based information filtering model is developed to address the problem of overload. The new approach allows users to describe their information needs on user concept spaces rather than on document spaces, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: the positive region, the boundary region, and the negative region. Two experiments are presented to verify this model, and the results show that the rough set based model provides an efficient approach to the overload problem. A dynamic belief model for information fusion in multi-agent environments is also developed. This model has polynomial time complexity, and it is proven that the fusion results are belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. A difficult problem for this research is the case where collections may be used by more than one agent; the algorithm uses cooperation between agents to provide a solution to this problem in distributed information retrieval systems. This thesis thus presents solutions to theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modeling, and collection fusion. It also presents solutions to some of the technical problems in agent-based information systems, such as document classification, the architecture of agent-based information gathering systems, and decision making in multi-agent environments. Such information gathering agents will gather relevant information from highly distributed, uncertain environments.
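The three-region classification at the heart of the filtering model can be pictured with a deliberately simplified sketch. Here a document's 'description' is reduced to the subset of user-concept terms it contains, and a document is accepted, deferred, or rejected accordingly; the thesis's actual rough set model over concept spaces is richer than this.

```python
# A toy three-way (rough set flavoured) document classification:
# positive region -> accept, boundary region -> defer, negative region -> reject.
def three_way_classify(doc_terms, concepts):
    overlap = doc_terms & concepts
    if overlap == concepts:      # description satisfies every user concept
        return "positive"
    if overlap:                  # satisfies some concepts: possibly relevant
        return "boundary"
    return "negative"            # satisfies none: filtered out

concepts = {"agent", "information", "gathering"}   # assumed user need
docs = {
    "d1": {"agent", "information", "gathering", "web"},
    "d2": {"agent", "auction"},
    "d3": {"compiler", "parsing"},
}
for name, terms in docs.items():
    print(name, three_way_classify(terms, concepts))
```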

Relevance:

80.00%

Publisher:

Abstract:

Electronic commerce and the Internet have created demand for automated systems that can make complex decisions utilizing information from multiple sources. Because the information is uncertain, dynamic, distributed, and heterogeneous in nature, these systems require a great diversity of intelligent techniques, including expert systems, fuzzy logic, neural networks, and genetic algorithms. However, complex decision making involves many different components or sub-tasks, each of which requires a different type of processing. Thus multiple such techniques are required, resulting in systems called hybrid intelligent systems. That is, hybrid solutions are crucial for complex problem solving and decision making. There is a growing demand for these systems in many areas, including financial investment planning, engineering design, medical diagnosis, and cognitive simulation. However, the design and development of these systems is difficult because they have a large number of components with many interactions. From a multi-agent perspective, agents in multi-agent systems (MASs) are autonomous and can engage in flexible, high-level interactions. MASs are good at complex, dynamic interactions, so a multi-agent perspective is suitable for the modeling, design, and construction of hybrid intelligent systems. The aim of this thesis is to develop an agent-based framework for constructing hybrid intelligent systems, which are mainly used for complex problem solving and decision making. Existing software development techniques (typically object-oriented) are inadequate for modeling agent-based hybrid intelligent systems; there is a fundamental mismatch between the concepts used by object-oriented developers and the agent-oriented view. Although there are agent-oriented methodologies such as the Gaia methodology, no methodology is available that is specifically tailored to analyzing and designing agent-based hybrid intelligent systems. To this end, such a methodology is proposed. The methodology consists of six models - role model, interaction model, agent model, skill model, knowledge model, and organizational model - and differs from other agent-oriented methodologies in its skill and knowledge models. As good decisions and problem solutions are mainly based on adequate information, rich knowledge, and appropriate skills to use that knowledge and information, these two models are of paramount importance in modeling complex problem solving and decision making. Following the methodology, an agent-based framework for constructing hybrid intelligent systems for complex problem solving and decision making was developed. The framework has several crucial characteristics that differentiate this research from others. Four important issues relating to the framework are also investigated, covering the building of an ontology for financial investment, matchmaking in middle agents, reasoning in problem solving and decision making, and decision aggregation in MASs. The thesis demonstrates how to build a domain-specific ontology and how to access it in a MAS by building a financial ontology. It is argued that the practical performance of service provider agents has a significant impact on the matchmaking outcomes of middle agents, and it is proposed that service provider agents' track records be considered in matchmaking. A way to provide initial values for the track records of service provider agents is also suggested. The concept of 'reasoning with multimedia information' is introduced, and reasoning with still image information using symbolic projection theory is proposed. How to choose suitable aggregation operations is demonstrated through a financial investment application, and three approaches to implementing decision aggregation in MASs are proposed: the stationary agent approach, the token-passing approach, and the mobile agent approach. Based on the framework, a prototype was built and applied to financial investment planning. The prototype consists of one serving agent, one interface agent, one decision aggregation agent, one planning agent, four decision-making agents, and five service provider agents. Experiments were conducted on the prototype, and the results show that the framework is flexible, robust, and fully workable. All agents derived from the methodology exhibit their behaviors correctly, as specified.
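As a toy illustration of decision aggregation in a MAS - the arithmetic only, not the stationary-agent, token-passing, or mobile-agent machinery described above - the following sketch combines several decision-making agents' recommendations by confidence-weighted voting; the agent names and confidence values are invented.

```python
# Confidence-weighted voting over agents' recommendations (illustrative only).
from collections import defaultdict

def aggregate(recommendations):
    """recommendations: list of (agent, action, confidence in [0, 1])."""
    scores = defaultdict(float)
    for _agent, action, confidence in recommendations:
        scores[action] += confidence          # accumulate support per action
    return max(scores, key=scores.get)        # action with the most support

recs = [("fundamental_agent", "buy", 0.8),
        ("technical_agent", "sell", 0.6),
        ("news_agent", "buy", 0.5)]
print(aggregate(recs))   # -> "buy" (support 1.3 vs 0.6)
```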

Relevance:

80.00%

Publisher:

Abstract:

Preliminary research into the critical factors associated with software development and implementation identified three dimensions for successful implementation: alignment of the requirements engineering process with business needs, the change management process, and the quality of the implementation process. The research results demonstrate the link between the conceptual model for process quality and the process management attributes determined during the research.

Relevance:

80.00%

Publisher:

Abstract:

Traditional optimisation methods are incapable of capturing the complexity of today's dynamic manufacturing systems. A new methodology, integrating simulation models and intelligent learning agents, was successfully applied to identify solutions to a fundamental scheduling problem. The robustness of this approach was then demonstrated through a series of real-world industrial applications.
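One minimal way to picture the integration of simulation models and learning agents is an epsilon-greedy agent that learns which dispatch rule to apply in a simulated job queue. This sketch is illustrative only and does not reproduce the thesis's models; the rules, cost measure and parameters are assumptions.

```python
# A toy learning agent choosing between FIFO and shortest-processing-time
# (SPT) dispatch rules in a simulated queue, minimising total flow time.
import random

def simulate(rule, jobs):
    """Return total flow time for one queue of processing times."""
    order = sorted(jobs) if rule == "SPT" else list(jobs)
    t, total = 0, 0
    for p in order:
        t += p
        total += t
    return total

q = {"FIFO": 0.0, "SPT": 0.0}       # running cost estimates per rule
counts = {"FIFO": 0, "SPT": 0}
random.seed(1)
for episode in range(500):
    jobs = [random.randint(1, 9) for _ in range(6)]
    # Epsilon-greedy: explore 10% of the time, otherwise pick the cheaper rule.
    rule = random.choice(list(q)) if random.random() < 0.1 else min(q, key=q.get)
    cost = simulate(rule, jobs)
    counts[rule] += 1
    q[rule] += (cost - q[rule]) / counts[rule]   # incremental mean update
print(q)   # SPT should end up with the lower average flow time
```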

Relevance:

80.00%

Publisher:

Abstract:

Fraud and deception in online marketplaces have been an ongoing problem. This thesis proposes novel techniques and mechanisms using agent technology to protect buyers and sellers in online environments such as eBay. The proposed solution has been rigorously tested, and the results show good commercial promise.

Relevance:

80.00%

Publisher:

Abstract:

Wireless sensor network lifetime is prolonged through a dynamic scheme for collecting sensory information using intelligent mobile elements. The data collection routes are optimised for fast and reliable delivery, and the scheme minimises energy consumption to extend the network's operational time.
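As a hedged illustration of one ingredient of such a scheme, the following sketch computes a greedy nearest-neighbour collection route for a mobile element visiting sensor nodes from a base station; the coordinates are invented, and the thesis's route optimisation is considerably more sophisticated.

```python
# Greedy nearest-neighbour tour for a mobile data collector (illustrative).
import math

def nn_tour(base, nodes):
    tour, here, remaining = [base], base, set(nodes)
    while remaining:
        nxt = min(remaining, key=lambda n: math.dist(here, n))  # closest node
        tour.append(nxt)
        remaining.remove(nxt)
        here = nxt
    tour.append(base)            # return to the base station
    return tour

sensors = [(2, 3), (5, 1), (1, 7), (6, 6)]   # made-up node positions
print(nn_tour((0, 0), sensors))
```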

Relevance:

80.00%

Publisher:

Abstract:

A multi-agent system is a complex software system composed of many relatively autonomous smaller software components called agents. Research on multi-agent systems is concerned with the interaction and coordination among these agents so that they can help each other solve complicated problems, such as financial investment management. The principal contributions represented by these 50 selected papers are "cooperation under uncertainty in distributed expert systems (DESs)", "a tool and algorithms to build DESs", and "information gathering and decision making in multi-agent systems (MASs)".