935 results for Enterprise application integration (Computer systems)
Abstract:
The best results in applying computer systems to automatic translation are obtained when texts belong to specific thematic areas, with well-defined structures and a concise, limited lexicon. In this article we present a systematic work plan for the analysis and generation of language applied to pharmaceutical leaflets, a type of document characterized by rigid formatting and precise use of vocabulary. We propose a solution based on the use of an interlingua as a pivot language between the source and target languages; in this application case we consider Spanish and Arabic.
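To make the pivot architecture concrete, here is a minimal sketch of the data flow only (source analysis into an interlingua frame, then target generation from that frame); the `analyze_es` and `generate_ar` components and the toy domain lexicon are hypothetical stand-ins, not the article's actual system.

```python
from dataclasses import dataclass

@dataclass
class InterlinguaFrame:
    """Language-neutral representation of one leaflet sentence:
    a normalized predicate concept plus role -> concept arguments
    shared by both languages."""
    predicate: str
    arguments: dict

def analyze_es(sentence: str) -> InterlinguaFrame:
    # Hypothetical analysis step: map a Spanish leaflet sentence onto
    # a language-neutral frame (a real system would use a grammar and
    # a domain lexicon, not a lookup table).
    lexicon = {
        "Tomar un comprimido cada 8 horas.": InterlinguaFrame(
            predicate="TAKE_DOSE",
            arguments={"unit": "TABLET", "quantity": 1, "interval_hours": 8},
        )
    }
    return lexicon[sentence]

def generate_ar(frame: InterlinguaFrame) -> str:
    # Hypothetical generation step: realize the same frame in Arabic.
    templates = {"TAKE_DOSE": "يؤخذ قرص واحد كل {interval_hours} ساعات"}
    return templates[frame.predicate].format(**frame.arguments)

if __name__ == "__main__":
    frame = analyze_es("Tomar un comprimido cada 8 horas.")
    print(generate_ar(frame))  # target text produced only from the interlingua
```

The point of the sketch is that the generator never sees the Spanish input, only the interlingua frame, which is what makes the representation reusable as a pivot for further language pairs.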
Abstract:
2000 Mathematics Subject Classification: 62H15, 62P10.
Abstract:
The enterprise management (EM) approach provides a holistic view of organizations and their related information systems. In order to align information technology (IT) innovation with global markets and volatile virtualization, traditional firms are seeking to reconstruct their enterprise structures, reposition their strategies, and establish new information system (IS) architectures, transforming from single autonomous entities into more open enterprises supported by new Enterprise Resource Planning (ERP) systems. This chapter shows how ERP engage-abilities cater to three distinctive EM patterns and the resultant strategies. The purpose is to examine the presumptions and importance of combining ERP and inter-firm relations relying on the virtual value chain concept. Following a review of the literature on ERP development and enterprise strategy, exploratory inductive research studies at Zoomlion and Lanye were conducted. In addition, the authors propose a dynamic conceptual framework to demonstrate the adoption and governance of ERP in the three enterprise management forms, and point to a new architectural type (ERPIII) for operating in the virtual enterprise paradigm.
Abstract:
In the global Internet economy, e-business, as a driving force that redefines business models and operational processes, poses new challenges for traditional organizational structures and information system (IS) architectures. These challenges show promise of a renewed period of innovative thinking in e-business strategies, with new enterprise paradigms and different Enterprise Resource Planning (ERP) systems. In this chapter, the authors consider and investigate how dynamic e-business strategies, as the next evolutionary generation of e-business, can be realized through newly diverse enterprise structures supported by ERP, ERPII and so-called "ERPIII" solutions relying on the virtual value chain concept. Exploratory inductive multi-case studies in the manufacturing and printing industries have been conducted. Additionally, the chapter proposes a conceptual framework to discuss the adoption and governance of ERP systems within the context of three enterprise forms for enabling dynamic and collaborative e-business strategies, and in particular demonstrates how an enterprise can dynamically migrate from its current position to the pattern it desires to occupy in the future - a migration that must and will include dynamic e-business as a core competency, but that also relies heavily on an ERP-based backbone and other robust technological platforms and applications.
Abstract:
Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor sizes keep shrinking, more and more transistors are integrated into a single chip. This has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging/cooling costs and adversely affects the performance and reliability of a computing system. In addition, it reduces the processor's life span and may even crash the entire computing system. Therefore, dynamic thermal management (DTM) is becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted to study the DTM problem. However, most of this work is based on theoretically idealized assumptions or simplified models. While these models and assumptions help to greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond them. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations in a practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize throughput under a peak temperature constraint. We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors based on task migration and dynamic voltage and frequency scaling (DVFS). The significance of our research lies in the fact that it complements the current extensive theoretical research in dealing with increasingly critical thermal problems, enabling the continuous evolution of high-performance computing systems.
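As an illustration of the reactive single-core approach described above, the following is a minimal sketch of a temperature-triggered DVFS control loop. It is not the dissertation's actual algorithm; the sensor and frequency hooks (`read_core_temp`, `set_frequency`) and the P-state table are hypothetical placeholders, simulated so the sketch runs standalone.

```python
import random
import time

FREQ_LEVELS = [1_200_000_000, 1_800_000_000, 2_400_000_000]  # assumed P-states (Hz)
T_PEAK = 85.0  # peak temperature constraint (deg C)
T_SAFE = 75.0  # cool-down threshold for stepping back up

def read_core_temp() -> float:
    # Placeholder for a hardware sensor read (e.g. a coretemp sysfs
    # file on Linux); simulated here so the sketch is self-contained.
    return random.uniform(60.0, 95.0)

def set_frequency(hz: int) -> None:
    # Placeholder for a cpufreq write; just logs the decision.
    print(f"core frequency -> {hz / 1e9:.1f} GHz")

def reactive_dtm(steps: int = 20, period_s: float = 0.05) -> None:
    """Throttle down when the peak constraint is reached and step back
    up once the core cools, trading short-term speed for thermal
    safety while keeping throughput as high as possible."""
    level = len(FREQ_LEVELS) - 1  # start at the fastest level
    for _ in range(steps):
        temp = read_core_temp()
        if temp >= T_PEAK and level > 0:
            level -= 1            # too hot: step frequency down
        elif temp <= T_SAFE and level < len(FREQ_LEVELS) - 1:
            level += 1            # cool enough: step back up
        set_frequency(FREQ_LEVELS[level])
        time.sleep(period_s)

if __name__ == "__main__":
    reactive_dtm()
```

A proactive multicore policy of the kind the dissertation extends to would additionally predict temperature trends and migrate tasks between cores before the constraint is hit, rather than reacting after it.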
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data centers. As more computational power is required to serve hundreds of millions of users, bigger data centers are becoming necessary, resulting in higher electrical energy consumption. Of all the energy used in data centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem, because improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
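To illustrate the core idea of the second contribution, here is a minimal sketch, under assumptions, of a write-back flash cache that serves hits and absorbs writes so a spun-down disk can stay idle longer; the `FlashCache` class and its clean-first eviction policy are hypothetical illustrations, not the thesis's implementation.

```python
class FlashCache:
    """Toy write-back flash cache in front of a spin-down disk:
    serving hits from flash and buffering dirty pages lets the disk
    stay spun down longer, trading flash space for disk energy."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = {}        # page_id -> data held in flash
        self.dirty = set()     # pages not yet flushed to disk
        self.disk_wakeups = 0  # each wakeup costs energy and latency

    def read(self, page_id, disk_read):
        if page_id in self.pages:
            return self.pages[page_id]   # hit: disk stays idle
        self.disk_wakeups += 1           # miss: must wake the disk
        data = disk_read(page_id)
        self._insert(page_id, data)
        return data

    def write(self, page_id, data):
        self._insert(page_id, data)      # absorb the write in flash
        self.dirty.add(page_id)          # flush lazily, in batches

    def _insert(self, page_id, data):
        if page_id not in self.pages and len(self.pages) >= self.capacity:
            clean = [p for p in self.pages if p not in self.dirty]
            victim = clean[0] if clean else next(iter(self.pages))
            if victim in self.dirty:
                self.disk_wakeups += 1   # forced flush wakes the disk
                self.dirty.discard(victim)
            del self.pages[victim]
        self.pages[page_id] = data
```

The clean-first eviction choice is the energy-relevant detail: evicting a clean page is free, while evicting a dirty one forces a disk wakeup, which is exactly the event the cache exists to avoid.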
Abstract:
In his discussion - Database As A Tool For Hospitality Management - William O'Brien, Assistant Professor, School of Hospitality Management at Florida International University, offers at the outset, "Database systems offer sweeping possibilities for better management of information in the hospitality industry. The author discusses what such systems are capable of accomplishing." The author opens with a bit of background on database system development, which also lends an impression as to the complexion of the rest of the article; uh, it's a shade technical. "In early 1981, Ashton-Tate introduced dBase II. It was the first microcomputer database management processor to offer relational capabilities and a user-friendly query system combined with a fast, convenient report writer," O'Brien informs. "When 16-bit microcomputers such as the IBM PC series were introduced late the following year, more powerful database products followed: dBase III, Friday!, and Framework. The effect on the entire business community, and the hospitality industry in particular, has been remarkable," he further offers with his informed outlook. Professor O'Brien presents a few anecdotal situations to illustrate how much a comprehensive database system means to a hospitality operation, especially when billing is involved. Although attitudes about computer systems, as well as the systems themselves, have changed since this article was written, there is pertinent, fundamental information to be gleaned. Regarding the loss of the personal touch when a customer is engaged with a computer system, O'Brien says, "A modern data processing system should not force an employee to treat valued customers as numbers…" He also cautions, "Any computer system that decreases the availability of the personal touch is simply unacceptable." On a system's ability to process information, O'Brien suggests that in the past businesses were so enamored with just having an automated system that they failed to take full advantage of its capabilities, and that a lot of savings, in time and money, went unnoticed and/or under-appreciated. Today, everyone has an integrated system, and the wise business manager is the one who takes full advantage of all his resources. O'Brien invokes the 80/20 rule, offering, "…the last 20 percent of results costs 80 percent of the effort. But times have changed. Everyone is automating data management, so that last 20 percent that could be ignored a short time ago represents a significant competitive differential." The evolution of data systems takes center stage for much of the article; pitfalls also emerge.
Abstract:
Fast-spreading unknown viruses have caused major damage to computer systems upon their initial release. Current detection methods lack the capability to detect unknown viruses quickly enough to avoid mass spreading and damage. This dissertation presents a behavior-based approach to detecting known and unknown viruses based on their attempts to replicate. Replication is the qualifying fundamental characteristic of a virus and is consistently present in all viruses, making this approach applicable to viruses belonging to many classes and executing under several conditions. A form of replication called self-reference replication (SR-replication) has been formalized as one main type of replication, in which a virus replicates by modifying or creating other files on a system to include the virus itself. This replication type was used to detect viruses attempting replication by referencing themselves, a step necessary for successful replication. The approach does not require a priori knowledge about known viruses. Detection was accomplished at runtime by monitoring currently executing processes attempting to replicate. Two implementation prototypes of the detection approach, called SRRAT, were created and tested on Microsoft Windows operating systems, focusing on the tracking of user-mode Win32 API system calls and kernel-mode system services. The research results showed that SR-replication detection is capable of distinguishing between file-infecting viruses and benign processes with little or no false positives and false negatives.
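As a rough illustration of the self-reference signal described above, here is a minimal sketch, under assumptions, of a monitor that flags a process writing its own executable image into another file. The hook point (`on_write`) and the byte-comparison heuristic are hypothetical simplifications, not SRRAT's actual mechanism.

```python
def contains_own_image(process_image: bytes, written: bytes) -> bool:
    # Heuristic self-reference check: the written data embeds the
    # writer's own executable image, or a large piece of it.
    return process_image in written or (len(written) > 4096 and written in process_image)

class ReplicationMonitor:
    """Toy runtime monitor. In a real system, on_write would be driven
    by hooked user-mode Win32 API calls (e.g. WriteFile) or kernel-mode
    system services, as in the prototypes described above."""

    def __init__(self):
        self.alerts = []

    def on_write(self, process_path: str, process_image: bytes,
                 target_path: str, data: bytes) -> None:
        if target_path != process_path and contains_own_image(process_image, data):
            self.alerts.append((process_path, target_path))

# Tiny self-contained demo with synthetic bytes:
monitor = ReplicationMonitor()
image = b"MZ" + b"\x90" * 100          # stand-in for a PE image
monitor.on_write("C:/virus.exe", image, "C:/host.exe", image + b"payload")
print(monitor.alerts)                  # [('C:/virus.exe', 'C:/host.exe')]
```

Because the check keys on self-reference rather than on any signature, it applies to a virus never seen before, which is the property the dissertation's approach relies on.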
Abstract:
Category hierarchy is an abstraction mechanism for efficiently managing large-scale resources. In an open environment, a category hierarchy will inevitably become inappropriate for managing resources that constantly change in unpredictable patterns, and an inappropriate category hierarchy will mislead the management of resources. The increasing dynamicity and scale of online resources raise the need to maintain category hierarchies automatically. Previous studies of category hierarchies mainly focus either on generating a category hierarchy or on classifying resources under a pre-defined category hierarchy; the automatic maintenance of category hierarchies has been neglected. Making abstractions among categories and measuring the similarity between categories are the two basic behaviours needed to generate a category hierarchy. Humans are good at making abstractions but limited in their ability to calculate similarities between large-scale resources; computing models are good at calculating similarities between large-scale resources but limited in their ability to make abstractions. To take advantage of both the human view and computing ability, this paper proposes a two-phase approach to automatically maintaining a category hierarchy at two scales by detecting internal pattern changes in categories. The global phase clusters resources to generate a reference category hierarchy and computes similarities between categories to detect inappropriate categories in the initial hierarchy; the accuracy of the clustering approaches used to generate the reference hierarchy determines the rationality of the global maintenance. The local phase detects topical changes and then adjusts inappropriate categories with three local operations. The global phase can quickly target inappropriate categories top-down and carry out cross-branch adjustment, which also accelerates the local-phase adjustments; the local phase detects and adjusts local-range inappropriate categories that are not adjusted in the global phase. By incorporating the two complementary phases, the approach can significantly improve the topical cohesion and accuracy of a category hierarchy. A new measure is proposed for evaluating category hierarchies that considers not only the balance of the hierarchical structure but also the accuracy of classification. Experiments show that the proposed approach is feasible and effective in adjusting inappropriate category hierarchies. The proposed approach can be used to maintain category hierarchies for managing various resources in dynamic application environments, and it provides a way to specialize current online category hierarchies to organize resources with more specific categories.
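To give a concrete flavor of the global phase, here is a minimal sketch, under assumptions, that flags categories whose members have drifted apart topically; the embedding input, the cosine cohesion measure, and the threshold are hypothetical stand-ins for the paper's clustering and similarity machinery.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_inappropriate(categories: dict, threshold: float = 0.4) -> list:
    """Global-phase sketch: for each category (name -> list of resource
    embedding vectors), measure the mean similarity of the members to
    their own centroid; low cohesion suggests the category no longer
    fits its contents and should be adjusted in the local phase."""
    flagged = []
    for name, vectors in categories.items():
        center = np.mean(vectors, axis=0)
        cohesion = float(np.mean([cosine(v, center) for v in vectors]))
        if cohesion < threshold:
            flagged.append((name, cohesion))
    return sorted(flagged, key=lambda t: t[1])   # worst categories first

# Synthetic demo: one tight category and one that has drifted.
rng = np.random.default_rng(0)
tight = [np.array([1.0, 0.0]) + rng.normal(0, 0.05, 2) for _ in range(10)]
drifted = [rng.normal(0, 1.0, 2) for _ in range(10)]
print(flag_inappropriate({"databases": tight, "misc": drifted}))
```

The ranking this produces is what lets the global phase target the worst categories top-down before handing fine-grained adjustment to the local operations.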
Abstract:
Interacting with a computer system in the operating room (OR) can be a frustrating experience for a surgeon, who currently has to verbally delegate every computer interaction task to an assistant. This indirect mode of interaction is time-consuming and error-prone, and can lead to poor usability of OR computer systems. This thesis describes the design and evaluation of a joystick-like device that allows direct surgeon control of the computer in the OR. The device was tested extensively, in comparison with a mouse and delegated dictation, with seven surgeons, eleven residents, and five graduate students. The device contains no electronic parts, is easy to use, is unobtrusive, has no physical connection to the computer, and makes use of an existing tool in the OR. We performed a user study to determine its effectiveness in allowing a user to perform all the tasks they would be expected to perform on an OR computer system during computer-assisted surgery. Dictation was found to be superior to the joystick in qualitative measures, but the joystick was preferred over dictation in user satisfaction responses. The mouse outperformed both the joystick and dictation, but it is not a readily accepted modality in the OR.
Abstract:
Collaboration in the public sector is imperative to achieve e-government objectives such as improved efficiency and effectiveness of public administration and improved quality of public services. Collaboration across organizational and institutional boundaries requires public organizations to share e-government systems and services through, for instance, interoperable information technology and processes. Demands on public organizations to become more open also require that they adopt new collaborative approaches for inviting and engaging citizens in governmental activities. E-government-related collaboration in the public sector is challenging, however, and collaboration initiatives often fail. Public organizations need to learn how to collaborate, since the forms of e-government collaboration and their expected outcomes are mostly unknown. This thesis therefore investigates how public organizations can collaborate and what outcomes can be expected, by studying multiple collaboration cases on the acquisition and implementation of a particular e-government investment (a digital archive). The thesis also investigates how e-government collaboration can be facilitated through artifacts: through a case study in which objects that cross boundaries between collaborating communities in the public sector are studied, and by designing a configurable process model integrating several processes for social services. Using design science, the thesis further investigates how an m-government solution that facilitates collaboration between citizens and public organizations can be designed. The thesis contributes to the literature by describing five different modes of interorganizational collaboration in the public sector and the expected benefits of each mode. It also contributes an instantiation of a configurable process model supporting three open social e-services, with evidence of how it can facilitate collaboration, and describes how boundary objects facilitate collaboration between different communities in an open-government design initiative. It further contributes a designed mobile government solution, thereby providing proof of concept and initial design implications for enabling collaboration with citizens through citizen sourcing (outsourcing a governmental activity to citizens through an open call). Finally, the thesis identifies research streams within e-government collaboration research through a literature review and relates its contributions to those streams. It gives directions for future research by suggesting a further focus on understanding e-government collaboration and how information and communication technology can facilitate collaboration in the public sector, on investigating m-government solutions to form design theories, and on examining how value can be co-created in e-government collaboration.
Abstract:
This thesis explores aesthetization in general, and fashion in particular, in digital technology design, and asks how we can design digital technology to account for the extended influence of fashion. The thesis applies a combination of methods to explore the new design space at the intersection of fashion and technology. First, it contributes to theoretical understandings of aesthetization and the institutionalization of fashion as they influence digital technology design. We show that there is an unstable aesthetization in mobile design and that this increased aesthetization is closely related to the fashion industry. Fashion emerges through shared institutional activities, usually in the form of action nets, in the design of digital devices; "Tech Fashion" is proposed to interpret such dynamic action nets of institutional arrangements that make digital technology fashionable and desirable. Second, through associative design research, we have designed and developed two prototypes that account for institutionalized fashion values, such as the concept of the "outfit-centric accessory." We call for more extensive collaboration between fashion design and interaction design.
Abstract:
This paper presents a multi-class AdaBoost built from an ensemble of binary AdaBoost classifiers organized as a Binary Decision Tree (BDT). Binary AdaBoost is extremely successful at producing accurate two-class classification, but it does not perform well on multi-class problems. To avoid this performance degradation, the multi-class problem is divided into a number of binary problems, and binary AdaBoost classifiers are invoked to solve them. The approach is tested on a dataset of 6,500 binary images of traffic signs: Haar-like features of these images are computed and the multi-class AdaBoost classifier is invoked to classify them, achieving classification rates of 96.7% and 95.7% for traffic sign borders and pictograms, respectively. The proposed approach is also evaluated on a number of standard datasets such as Iris, Wine, and Yeast. The performance of the proposed BDT classifier is quite high compared with the state of the art, and it converges quickly to a solution, indicating that it is a reliable classifier.
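To sketch the idea, the following minimal implementation arranges binary AdaBoost classifiers (here scikit-learn's `AdaBoostClassifier`, an assumption of this sketch) in a binary decision tree; for simplicity the remaining classes are split in half at each node, whereas the paper's partitioning strategy may differ.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class BDTAdaBoost:
    """Each internal node trains a binary AdaBoost separating one half
    of the remaining classes from the other; a test sample is routed
    down the tree until a single class (a leaf) remains."""

    def __init__(self, n_estimators: int = 50):
        self.n_estimators = n_estimators

    def fit(self, X: np.ndarray, y: np.ndarray) -> "BDTAdaBoost":
        self.tree_ = self._build(X, y, sorted(set(y)))
        return self

    def _build(self, X, y, classes):
        if len(classes) == 1:
            return classes[0]                         # leaf: class decided
        mid = len(classes) // 2
        left, right = classes[:mid], classes[mid:]    # naive half split
        clf = AdaBoostClassifier(n_estimators=self.n_estimators)
        clf.fit(X, np.isin(y, left))                  # binary task: left vs right
        lmask, rmask = np.isin(y, left), np.isin(y, right)
        return (clf,
                self._build(X[lmask], y[lmask], left),
                self._build(X[rmask], y[rmask], right))

    def predict(self, X: np.ndarray) -> list:
        out = []
        for x in X:
            node = self.tree_
            while isinstance(node, tuple):            # descend to a leaf
                clf, left, right = node
                node = left if clf.predict(x.reshape(1, -1))[0] else right
            out.append(node)
        return out
```

On a three-class problem such as Iris, `BDTAdaBoost().fit(X, y).predict(X)` routes each sample through at most two binary decisions instead of one three-way decision, which is the decomposition the paper exploits.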