100 results for Socialist Systems and Transitional Economies: General
Abstract:
Information Technology (IT) is an important resource that facilitates growth and development in both developed and emerging economies. The increasing forces of globalization are creating a wider digital divide between developed and emerging economies, and the smaller emerging economies are the most vulnerable. Intense competition for IT resources means that these emerging economies need a deeper understanding of how to source and evaluate their IT-related efforts, which would put them in a better position to secure funding from various stakeholders. This research presents a complementary approach to securing better IT-related business value in organizations in the South Pacific Island countries – a case of emerging economies. Analysis of data collected from six South Pacific Island countries suggests that organizations that invest in IT and related complementarities are able to improve their business processes, and that these improved business processes lead to overall organizational improvements.
Abstract:
This paper provides a new general approach for defining coherent generators in power systems based on coherency in low-frequency inter-area modes. Rather than a single fault, the disturbance is considered to be distributed across the network by applying random load changes, a random-walk representation of real loads, and coherent generators are identified by spectral analysis of the generators' velocity variations. To find the coherent areas and their borders in interconnected networks, non-generating buses are assigned to each group of coherent generators using similar coherency detection techniques. The method is evaluated on two test systems, and coherent generators and areas are obtained for different operating points, providing a more accurate grouping that remains valid across a range of realistic operating points of the system.
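The spectrum-based grouping described above can be sketched roughly as follows. Everything in this sketch is an illustrative assumption rather than the paper's actual procedure: the 0.1–1 Hz inter-area band, the correlation threshold, the greedy clustering rule and the synthetic speed signals are all hypothetical.

```python
import numpy as np

def coherency_groups(speed_signals, fs, band=(0.1, 1.0), threshold=0.9):
    """Group generators whose speed deviations share low-frequency
    (inter-area) spectral content. Simplified sketch: generators whose
    band-limited spectra are strongly correlated are treated as coherent."""
    spectra = []
    for x in speed_signals:
        f = np.fft.rfftfreq(len(x), d=1.0 / fs)
        mag = np.abs(np.fft.rfft(x - np.mean(x)))
        mask = (f >= band[0]) & (f <= band[1])
        spectra.append(mag[mask])
    n = len(spectra)
    groups, assigned = [], [False] * n
    for i in range(n):
        if assigned[i]:
            continue
        group, assigned[i] = [i], True
        for j in range(i + 1, n):
            if not assigned[j]:
                r = np.corrcoef(spectra[i], spectra[j])[0, 1]
                if r >= threshold:  # greedy clustering: join the first match
                    group.append(j)
                    assigned[j] = True
        groups.append(group)
    return groups

# Synthetic speed deviations: generators 0 and 1 swing together in a 0.4 Hz
# inter-area mode; generator 2 oscillates in a distinct 1.4 Hz local mode.
fs = 50.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
g0 = np.sin(2 * np.pi * 0.4 * t) + 0.05 * rng.standard_normal(t.size)
g1 = 0.8 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * rng.standard_normal(t.size)
g2 = np.sin(2 * np.pi * 1.4 * t) + 0.05 * rng.standard_normal(t.size)
groups = coherency_groups([g0, g1, g2], fs)
```

With these synthetic signals, generators 0 and 1 share the 0.4 Hz mode and fall into one coherent group, while generator 2 (whose 1.4 Hz mode lies outside the inter-area band) forms its own.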
Abstract:
Debates about user-generated content (UGC) often depend on a contrast with its normative opposite, the professionally produced content that is supported and sustained by commercial media businesses or public organisations. UGC is seen to appear within or in opposition to professional media, often as a disruptive, creative, change-making force. Our suggestion is to position UGC not in opposition to professional or "producer media", or in hybridised forms of subjective combination with it (the so-called "pro-sumer" or "pro-am" system), but in relation to different criteria, namely the formal and informal elements in media industries. In this article, we set out a framework for the comparative and historical analysis of UGC systems and their relations with other formal and informal media activity, illustrated with examples ranging from games to talkback radio. We also consider the policy implications that emerge from a historicised reading of UGC as a recurring dynamic within media industries, rather than a manifestation of consumer agency specific to digital cultures.
Abstract:
Pandemics are for the most part disease outbreaks that become widespread as a result of the spread of human-to-human infection. Beyond the debilitating, sometimes fatal, consequences for those directly affected, pandemics have a range of negative social, economic and political consequences. These tend to be greater where the pandemic is a novel pathogen, has a high mortality and/or hospitalization rate and is easily spread. According to Lee Jong-wook, former Director-General of the World Health Organization (WHO), pandemics do not respect international borders. Therefore, they have the potential to weaken many societies, political systems and economies simultaneously.
Abstract:
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Abstract:
The business value of Enterprise Resource Planning systems (ERP systems), and in general large software implementations, has been extensively debated in both the popular press and the academic literature for over two decades. Organisations invest enormous sums of money and resources in Enterprise Resource Planning systems (and related infrastructure), presumably expecting positive impacts on the organisation and its functions. Some studies have reported large productivity improvements and substantial benefits from ERP systems, while others have reported that ERP systems have not had any bottom-line impact. This paper discusses initial findings from a study that focuses on identifying and assessing important ERP impacts in 23 Australian public sector organisations.
Abstract:
Enterprise systems are located within the antinomy of appearing as a generic product while being a means of multiple integrations for the user through configuration and customisation. Technological and organisational integrations are defined by architectures and standardised interfaces. Until recently, technological integration of enterprise systems has been supported largely by monolithic architectures that were designed and maintained by the respective developers. From a technical perspective, this approach has been challenged by the suggestion of component-based enterprise systems that would allow for a more user-focused system through strict modularisation. Lately, the nature of software as a proprietary product has been questioned by the rapid increase of open source programs used in business computing in general, and within the overall portfolio that makes up enterprise systems. This suggests the potential for altered technological and commercial constellations for the design of enterprise systems, which are presented in different scenarios. The technological and commercial decomposition of enterprise software and systems may also address some concerns emerging from users' experience of those systems, concerns which may have arisen from their proprietary or product nature.
Abstract:
A range of influences, both technical and organisational, has encouraged the widespread adoption of Enterprise Systems (ES). Nevertheless, there is a growing consensus that Enterprise Systems have in many cases failed to provide expected benefits. The increasing role of, and dependency on, ES (and IT in general), and the ‘uncertainty’ of these large investments, have created a strong need to monitor and measure ES performance. This paper reports on a research project aimed at deriving an ‘Enterprise Systems benefits measurement instrument’. The research seeks to identify how Enterprise Systems benefits can be usefully measured, with a ‘balance’ between qualitative and quantitative factors.
Abstract:
A range of influences, both technical and organizational, has encouraged the widespread adoption of Enterprise Systems (ES). The integrated and process-oriented nature of Enterprise Systems has led organizations to use process modelling as a means of managing the complexity of these systems, and to aid in achieving business goals. Past research illustrates how process modelling is applied across different Enterprise Systems lifecycle phases. However, no empirical evidence exists to evaluate what factors are essential for a successful process modelling initiative, in general or in an ES context. This research-in-progress paper reports on an empirical investigation of the factors that influence process modelling success. It presents an a priori model of process modelling critical success factors, describes its derivation, and concludes with an outlook on the next stages of the research.
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas that seem self-evident and inevitable after the event. For the artist/creator/inventor/designer stuck at the point just before the creative leap, however, the next step is anything but obvious. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity; it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved; indeed, the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question arises: can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for, and of the vastness of the search space. It may be better to discard the term "searching" altogether in the context of the design process. Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process, but the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby further confound the issue. The term "search" becomes a misnomer, since it carries the connotation that it is possible to find what you are looking for; in such vast spaces the term must be discarded. Thus any attempt to search for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, the very existence of a fitness landscape is fallacious.
Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless, we still have the tantalizing possibility that if a creative idea seems inevitable after the event, then somehow the process might be reversed. This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as evolutionary algorithms, which are usually thought of as search algorithms. It is necessary to abandon such connections with searching and to see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments.
Most importantly, nature has all the time in the world. As designers we can afford neither such profligate prototyping and ruthless experimentation, nor the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
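The generate-evaluate-select cycle described above, computational "profligate prototyping" followed by "ruthless elimination", can be illustrated with a minimal evolutionary loop. The toy "design" task, the operators and all parameters below are hypothetical, and no claim is made of finding an optimum; the loop simply sustains a population of surviving experiments.

```python
import random

def evolve(seed_design, evaluate, mutate, pop_size=20, generations=50, rng=None):
    """Minimal evolutionary loop: generate many variant prototypes,
    keep the more successful ones, repeat."""
    rng = rng or random.Random(0)
    population = [seed_design] * pop_size
    for _ in range(generations):
        offspring = [mutate(p, rng) for p in population]  # profligate prototyping
        pool = population + offspring
        pool.sort(key=evaluate, reverse=True)             # ruthless elimination
        population = pool[:pop_size]                      # survivors carry on
    return population

# Toy task: evolve a vector 'design' toward the pattern [1, 2, 3, 4, 5].
target = [1, 2, 3, 4, 5]
evaluate = lambda d: -sum((a - b) ** 2 for a, b in zip(d, target))
mutate = lambda d, rng: [x + rng.gauss(0, 0.3) for x in d]

survivors = evolve([0.0] * 5, evaluate, mutate)
best = max(survivors, key=evaluate)
```

Note that the loop returns a whole population of surviving alternatives, not a single answer, in keeping with the "continuous experiment" view of the passage above.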
Abstract:
Developing an effective impact evaluation framework, managing and conducting rigorous impact evaluations, and developing a strong research and evaluation culture within development communication organisations present many challenges. This is especially so when both the community and organisational context are continually changing and the outcomes of programs are complex and difficult to identify clearly.

This paper presents a case study from a research project conducted from 2007-2010 that aims to address these challenges and issues, entitled Assessing Communication for Social Change: A New Agenda in Impact Assessment. Building on previous development communication projects which used ethnographic action research, this project is developing, trialling and rigorously evaluating a participatory impact assessment methodology for assessing the social change impacts of community radio programs in Nepal. The project is a collaboration between Equal Access – Nepal (EAN), Equal Access – International, local stakeholders and listeners, a network of trained community researchers, and a research team from two Australian universities. A key element of the project is the establishment of an organisational culture within EAN that values and supports the impact assessment process being developed, which is based on continuous action learning and improvement. The paper describes the situation related to monitoring and evaluation (M&E) and impact assessment before the project began, in which EAN was often reliant on time-bound studies and ‘success stories’ derived from listener letters and feedback. We then outline the various strategies used in an effort to develop stronger and more effective impact assessment and M&E systems, and the gradual changes that have occurred to date. These changes include a greater understanding of the value of adopting a participatory, holistic, evidence-based approach to impact assessment.
We also critically review the many challenges experienced in this process, including:
• Tension between the pressure from donors to ‘prove’ impacts and the adoption of a bottom-up, participatory approach based on ‘improving’ programs in ways that meet community needs and aspirations.
• Resistance from the content teams to changing their existing M&E practices and to the perceived complexity of the approach.
• Lack of meaningful connection between the M&E and content teams.
• Human resource problems and lack of capacity in analysing qualitative data and reporting results.
• Contextual challenges, including extreme poverty, wide cultural and linguistic diversity, poor transport and communications infrastructure, and political instability.
• A general lack of acceptance of the importance of evaluation within Nepal, where problems are often accepted as fate or ‘natural’ rather than seen as requiring investigation.
Abstract:
The availability of innumerable intelligent building (IB) products, and the current dearth of inclusive building component selection methods, suggest that decision makers may be confronted with the quandary of forming a particular combination of components to suit the needs of a specific IB project. Despite this problem, few empirical studies have so far been undertaken to analyse the selection of IB systems and to identify key selection criteria for major IB systems. This study is designed to fill these research gaps. Two surveys, a general survey and an analytic hierarchy process (AHP) survey, were conducted to achieve these objectives. The general survey collected views from IB experts and practitioners to identify the perceived critical selection criteria, while the AHP survey prioritized the perceived criteria and assigned them importance weightings. Results suggest that each IB system is determined by a disparate set of selection criteria with different weightings. ‘Work efficiency’ is perceived to be the most important core selection criterion for various IB systems, while ‘user comfort’, ‘safety’ and ‘cost effectiveness’ are also considered significant. Two sub-criteria, ‘reliability’ and ‘operating and maintenance costs’, are regarded as prime factors to be considered in selecting IB systems. The study contributes to industry and IB research in at least two ways. First, it widens understanding of the selection criteria for IB systems, as well as their degree of importance. Second, it adopts a multi-criteria AHP approach, a new method for analysing and selecting building systems in IB. Further research will investigate the inter-relationships amongst the selection criteria.
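The AHP weighting step described above can be sketched as follows. The geometric-mean prioritization used here is one standard AHP variant, and the 3x3 judgment matrix is purely illustrative; it does not reproduce the survey's actual judgments or criteria weightings.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix
    (row geometric-mean method, normalized to sum to 1)."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])  # row geometric means
    return gm / gm.sum()

def consistency_ratio(pairwise):
    """Saaty consistency ratio CR = CI / RI; CR < 0.1 is
    conventionally taken as acceptable judgment consistency."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = ahp_weights(A)
    lam_max = (A @ w / w).mean()          # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random indices (subset)
    return ci / ri

# Hypothetical pairwise judgments over three core criteria, in order:
# work efficiency, user comfort, cost effectiveness (illustrative only).
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(A)
```

For this illustrative matrix the first criterion receives the largest weight, mirroring the finding that ‘work efficiency’ dominates, and the consistency ratio is well under the 0.1 threshold.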
Abstract:
The Open and Trusted Health Information Systems (OTHIS) Research Group has formed in response to the health sector’s privacy and security requirements for contemporary Health Information Systems (HIS). Due to recent research developments in trusted computing concepts, it is now both timely and desirable to move electronic HIS towards privacy-aware and security-aware applications. We introduce the OTHIS architecture in this paper. This scheme proposes a feasible and sustainable solution to meeting real-world application security demands using commercial off-the-shelf systems and commodity hardware and software products.
Abstract:
The book within which this chapter appears is published as a research reference book (not a coursework textbook) on Management Information Systems (MIS) for seniors or graduate students in Chinese universities. It is hoped that this chapter, along with the others, will be helpful to MIS scholars and PhD/Masters research students in China who seek understanding of several central Information Systems (IS) research topics and related issues. The subject of this chapter, ‘Evaluating Information Systems’, is broad, and cannot be addressed in its entirety in any depth within a single book chapter. The chapter proceeds from the truism that organizations have limited resources and that those resources need to be invested in a way that provides greatest benefit to the organization. IT expenditure represents a substantial portion of any organization’s investment budget, and IT-related innovations have broad organizational impacts. Evaluation of the impact of this major investment is essential to justify the expenditure both pre- and post-investment, and to prioritize possible improvements. The chapter (and most of the literature reviewed herein) admittedly assumes a black-box view of IS/IT, emphasizing measures of its consequences (e.g. for organizational performance or the economy) or perceptions of its quality from a user perspective. This reflects the MIS emphasis, a ‘management’ emphasis rather than a software engineering emphasis, where a software engineering emphasis might be on technical characteristics and technical performance. Though a black-box approach limits the diagnostic specificity of findings from a technical perspective, it offers many benefits. In addition to superior management information, these benefits may include economy of measurement and comparability of findings (e.g. see Part 4 on Benchmarking IS). The chapter does not purport to be a comprehensive treatment of the relevant literature.
It does, however, reflect many of the more influential works, and a representative range of important writings in the area. The author has been somewhat opportunistic in Part 2, employing a single journal, The Journal of Strategic Information Systems, to derive a classification of literature in the broader domain. Nonetheless, the arguments for this approach are believed to be sound, and the value from this exercise real. The chapter drills down from the general to the specific. It commences with a high-level overview of the general topic area, achieved in two parts: Part 1 addresses existing research in the more comprehensive IS research outlets (e.g. MISQ, JAIS, ISR, JMIS, ICIS), and Part 2 addresses existing research in a key specialist outlet (the Journal of Strategic Information Systems). Subsequently, in Part 3, the chapter narrows to focus on the sub-topic ‘Information Systems Success Measurement’, then drills deeper to become even more focused in Part 4 on ‘Benchmarking Information Systems’. In other words, the chapter drills down from the value of IS (Parts 1 and 2), to measuring Information Systems success (Part 3), to benchmarking IS (Part 4). While the commencing parts are by definition broadly relevant to the chapter topic, the subsequent, more focused parts admittedly reflect the author’s more specific interests. Thus, the three chapter foci, the value of IS, measuring IS success, and benchmarking IS, are not mutually exclusive; rather, each subsequent focus is in most respects a sub-set of the former. Parts 1 and 2, ‘the Value of IS’, take a broad view, with much emphasis on the business value of IS, or the relationship between information technology and organizational performance. Part 3, ‘Information System Success Measurement’, focuses more specifically on measures and constructs employed in empirical research into the drivers of IS success (ISS).
DeLone and McLean (1992) inventoried and rationalized disparate prior measures of ISS into six constructs: System Quality, Information Quality, Individual Impact, Organizational Impact, Satisfaction and Use (later suggesting a seventh construct, Service Quality (DeLone and McLean 2003)). These six constructs have been used extensively, individually or in some combination, as the dependent variable in research seeking to better understand the important antecedents or drivers of IS Success. Part 3 reviews this body of work. Part 4, ‘Benchmarking Information Systems’, drills deeper again, focusing more specifically on a measure of the IS that can be used as a ‘benchmark’. This section consolidates and extends the work of the author and his colleagues to derive a robust, validated IS-Impact measurement model for benchmarking contemporary Information Systems (IS). Though IS-Impact, like ISS, has potential value in empirical, causal research, its design and validation have emphasized its role and value as a comparator: a measure that is simple, robust and generalizable, and which yields results that are as far as possible comparable across time, across stakeholders, and across differing systems and systems contexts.
Abstract:
In this paper, the placement and sizing of Distributed Generators (DG) in distribution networks are determined optimally. The objective is to minimize losses and to improve reliability, subject to constraints on bus voltage, feeder current and the reactive power flowing back to the source side. The placement and size of DGs are optimized using a combination of Discrete Particle Swarm Optimization (DPSO) and a Genetic Algorithm (GA); the GA operators increase the diversity of the optimization variables so that the DPSO does not become stuck in local minima. To evaluate the proposed algorithm, the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System (RBTS), located at the secondary side of a 33/11 kV distribution substation, is used. The results illustrate the efficiency of the proposed method.
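The discrete-PSO half of the hybrid can be sketched with the classic sigmoid-rule binary PSO of Kennedy and Eberhart. The GA diversity operators that the paper hybridizes in are omitted here, and the 8-bus "feeder" with its made-up loss function is purely illustrative; a real implementation would evaluate candidate sitings with a power-flow solver under the voltage, current and reverse-reactive-power constraints described above.

```python
import numpy as np

def binary_pso(loss, n_bits, n_particles=15, iters=60, seed=0):
    """Minimal binary PSO: each bit of a particle encodes whether a DG is
    placed at that bus; velocities are squashed through a sigmoid to give
    bit-flip probabilities. GA hybridization is intentionally omitted."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))      # candidate sitings
    v = rng.normal(0, 1, size=(n_particles, n_bits))        # velocities
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        v = np.clip(v, -6.0, 6.0)                           # avoid saturation
        x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy placement problem: a made-up 'loss' that rewards placing DGs at
# buses 2 and 5 of a hypothetical 8-bus feeder (not the RBTS system).
ideal = np.array([0, 0, 1, 0, 0, 1, 0, 0])
loss = lambda s: int(np.sum(np.abs(s - ideal)))
best, best_loss = binary_pso(loss, 8)
```

On this toy problem the swarm converges on (or very near) the intended siting; the paper's contribution is precisely the GA step that keeps such a swarm diverse on the much larger, multimodal real problem.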