947 results for Factory of software
Abstract:
Яни Чаушев, Милослав Средков, Красимир Манев - Every programming contest relies on a number of software tools to manage its processes while the contest is running. Although these tools usually cover the specifics of the particular type of contest satisfactorily, they rarely address the difficulties of long-term data storage and interoperability. This article presents a software tool that addresses these problems. Rather than a complex system covering all aspects of a contest, CORE is a centralized repository for storing and maintaining the data required by various contests. Its main elements, the current state of its implementation, and the prospects for the system's further development are presented.
Abstract:
ACM Computing Classification System (1998): D.2.11, D.1.3, D.3.1, J.3, C.2.4.
Abstract:
The growth of the discipline of translation studies has been accompanied by a renewed reflection on the object of research and our metalanguage. These developments have also been necessitated by the diversification of professions within the language industry. The very label translation is often avoided in favour of alternative terms, such as localisation (of software), transcreation (of advertising), and transediting (of information from press agencies). The competences framework developed for the European Master’s in Translation network speaks of experts in multilingual and multimedia communication to account for the complexity of translation competence. This paper addresses the following related questions: (i) How can translation competence in such a wide sense be developed in training programmes? (ii) Do some competences required in the industry go beyond translation competence? and (iii) What challenges do labels such as transcreation pose?
Abstract:
One of the reasons for using variability in the software product line (SPL) approach (see Apel et al., 2006; Figueiredo et al., 2008; Kastner et al., 2007; Mezini & Ostermann, 2004) is to delay a design decision (Svahnberg et al., 2005). Instead of deciding in advance what system to develop, with the SPL approach a set of components and a reference architecture are specified and implemented (during domain engineering, see Czarnecki & Eisenecker, 2000), out of which individual systems are composed at a later stage (during application engineering, see Czarnecki & Eisenecker, 2000). By postponing design decisions in this manner, it is possible to better fit the resulting system to its intended environment, for instance, to allow the system's interaction mode to be selected after customers have purchased particular hardware, such as a PDA vs. a laptop. Such variability is expressed through variation points, which are locations in a software-based system where choices are available for defining a specific instance of a system (Svahnberg et al., 2005). Until recently it sufficed to postpone committing to a specific system instance until just before system runtime. In recent years, however, the use of and expectations placed on software systems in human society have undergone significant changes. Today's software systems need to be always available, highly interactive, and able to continuously adapt to varying environment conditions, user characteristics and the characteristics of other systems that interact with them. Such systems, called adaptive systems, are expected to be long-lived and able to undertake adaptations with little or no human intervention (Cheng et al., 2009). Therefore, variability now needs to be present at system runtime as well, which leads to the emergence of a new type of system: adaptive systems with dynamic variability.
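As a minimal sketch of what a variation point bound at runtime can look like, consider the following Python fragment; the classes and the selection rule are hypothetical illustrations and are not taken from the cited works:

# Hypothetical variation point: the interaction mode is left open until runtime,
# when the actual environment (e.g. the purchased hardware) is known.
class TouchUI:
    def render(self):
        return "touch-optimised layout"

class DesktopUI:
    def render(self):
        return "keyboard/mouse layout"

VARIANTS = {"pda": TouchUI, "laptop": DesktopUI}

def bind_interaction_mode(detected_device):
    # Binding the variant is postponed until the device is detected at runtime.
    return VARIANTS.get(detected_device, DesktopUI)()

ui = bind_interaction_mode("pda")
print(ui.render())   # -> touch-optimised layout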
Abstract:
Two classes of software that are notoriously difficult to develop on their own are rapidly merging into one. This will affect every key service that we rely upon in modern society, yet a successful merge is unlikely to be achievable using software development techniques specific to either class. This paper explains the growing demand for software capable of both self-adaptation and high integrity, and advocates the use of a collection of "@runtime" techniques for its development, operation and management. We summarise early research into the development of such techniques, and discuss the remaining work required to overcome the great challenge of self-adaptive high-integrity software. © 2011 ACM.
Abstract:
Today, many organizations are turning to new approaches to building and maintaining information systems (I/S) to cope with a highly competitive business environment. Current anecdotal evidence indicates that the approaches being used improve the effectiveness of software development by encouraging active user participation throughout the development process. Unfortunately, very little is known about how the use of such approaches enhances the ability of team members to develop I/S that are responsive to changing business conditions. Drawing from predominant theories of organizational conflict, this study develops and tests a model of conflict among members of a development team. The model proposes that development approaches provide the relevant context conditioning the management and resolution of conflict in software development which, in turn, are crucial for the success of the development process. Empirical testing of the model was conducted using data collected through a combination of interviews with I/S executives and surveys of team members and business users at nine organizations. Results of path analysis provide support for the model's main prediction that integrative conflict management and distributive conflict management can contribute to I/S success by influencing differently the manifestation and resolution of conflict in software development. Further, analyses of variance indicate that object-oriented development, when compared to rapid and structured development, appears to produce the lowest levels of conflict management, conflict resolution, and I/S success. The proposed model and findings suggest academic implications for understanding the effects of different conflict management behaviors on software development outcomes, and practical implications for better managing the software development process, especially in user-oriented development environments.
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power, while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts mentioned in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on Field-Programmable Gate Arrays (FPGAs) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is then converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed. The trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on the Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system can reach a 2.8X performance improvement and save 31.84% of energy consumption by applying the Bus-IP design. The co-processor design can reach 7.9X performance and save 75.85% of energy consumption.
Abstract:
The focus of this thesis is placed on text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange, or ASCII. The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure proper contrast between the compression results of the proposed technique and those of existing ones. The research problem is based on the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these text files. It was deemed necessary that the compression algorithm to be developed would have to be effective even for small files and be able to contend with uncommon words, as they are dynamically included in the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques; in other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and outcomes, and the research objective of achieving higher compression ratios is attained.
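A rough sketch of a dynamic-dictionary scheme of the kind described, in which words already seen are replaced by short indices on later occurrences, is shown below in Python; it is illustrative only and is not the algorithm developed in the thesis:

# Hypothetical sketch: replace repeated words with short dictionary indices.
def compress(text):
    dictionary = {}          # word -> index, built dynamically as words are encountered
    output = []
    for word in text.split():
        if word in dictionary:
            output.append("#%d" % dictionary[word])   # emit a short reference
        else:
            dictionary[word] = len(dictionary)        # new/uncommon words are added on first sight
            output.append(word)
    return " ".join(output), dictionary

encoded, table = compress("the quick fox saw the quick dog")
print(encoded)   # -> "the quick fox saw #0 #1 dog"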
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., the application's dependency on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or even of price increases for service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of any service. In a multi-cloud scenario it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that are able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in the development of such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) the need to continually monitor dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect the user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. The work proposes a strategy composed of two phases. The first phase consists of modeling the application, exploiting the capacity to represent commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase an extended feature model is used to specify the cloud service configuration to be used by the application (the commonalities) and the different possible providers for each service (the variability). Furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation; in this work the strategy is implemented using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we sought to assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques present the best trade-off between development/modularity effort and performance.
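A minimal sketch of a MAPE-K-style control loop of the kind the thesis builds on is given below in Python; the service names, metrics and thresholds are assumptions for illustration and this is not the thesis's implementation:

# Hypothetical MAPE-K-style loop: monitor cloud services, analyse QoS against
# user requirements, plan an alternative configuration, execute the rebinding.
def monitor(services):
    # Stub: a real monitor would query each provider for dynamic information.
    return {name: svc["metrics"] for name, svc in services.items()}

def analyse(metrics, requirements):
    return [name for name, m in metrics.items()
            if m["response_time_ms"] > requirements["max_response_time_ms"] or not m["available"]]

def plan(violations, alternatives):
    # Choose an equivalent service from another cloud platform for each violation.
    return {name: alternatives[name][0] for name in violations if alternatives.get(name)}

def execute(new_bindings, current_bindings):
    current_bindings.update(new_bindings)   # rebind the application to the selected services
    return current_bindings

services = {"storage": {"metrics": {"response_time_ms": 900, "available": True}}}
requirements = {"max_response_time_ms": 200}
alternatives = {"storage": ["storage@other-provider"]}
bindings = {"storage": "storage@provider-a"}

violated = analyse(monitor(services), requirements)
bindings = execute(plan(violated, alternatives), bindings)
print(bindings)   # storage is rebound to the alternative provider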
Abstract:
The substantial increase in the number of applications offered through computer networks, as well as in the volume of traffic forwarded through the network, has made it harder to assure an adequate service level to users. Offering Quality of Service (QoS) that honors the parameters specified in Service Level Agreements (SLAs) established between service providers and their clients is a traditional and extensive research area in computer networks. Several schemes for QoS provisioning have been proposed over the last three decades, but the scope of these proposals has always been limited by several factors, including the limited development of network hardware and software, generally belonging to a single manufacturer. The advent of Software Defined Networking (SDN), along with the maturation of its main materialization, the OpenFlow protocol, allowed the decoupling of network hardware and software through an architecture that provides a control plane and a data plane. This simplifies the computer networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through the development of new software components executed in the control plane. This dissertation investigates the offering of QoS through the use and extension of the SDN architecture. Based on the proposal of two new modules, SDNMon, which performs data plane monitoring, and MP-ROUTING, which determines the use of multiple paths for forwarding the data of a flow, we demonstrate in this work that some QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated through a prototype. The evaluation results covering several aspects of both proposed modules are presented in this dissertation, showing the accuracy obtained by the SDNMon monitoring module and the QoS gains due to the multiple paths defined by MP-ROUTING when forwarding flows through the SDN.
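A minimal sketch of the multipath idea, splitting a flow's required bandwidth across candidate paths until the SLA can be met, is shown below in Python; the greedy rule and names are illustrative assumptions and are not the MP-ROUTING implementation:

# Hypothetical sketch: allocate a flow's required bandwidth over multiple paths.
def select_paths(required_mbps, paths):
    # paths: list of (path_id, available_mbps) tuples
    chosen, remaining = [], required_mbps
    for path_id, capacity in sorted(paths, key=lambda p: -p[1]):
        if remaining <= 0:
            break
        share = min(capacity, remaining)
        chosen.append((path_id, share))
        remaining -= share
    return chosen if remaining <= 0 else None   # None: the bandwidth SLA cannot be honored

print(select_paths(150, [("p1", 100), ("p2", 80), ("p3", 40)]))
# -> [('p1', 100), ('p2', 50)]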
Abstract:
Software development guidelines are a set of rules which can help improve the quality of software. These rules are defined on the basis of experience gained by the software development community over time. This paper discusses a set of design guidelines for model-based development of complex real-time embedded software systems. To be precise, we propose nine design conventions, three design patterns and thirteen antipatterns for developing UML-RT models. These guidelines have been identified based on our analysis of around 100 UML-RT models from industry and academia. Most of the guidelines are explained with the help of examples, and standard templates from the current state of the art are used for documenting the design rules.
Abstract:
The article analyses the figure of the prosumer from the perspective of visual studies, combining speech act theory and new media theory. The objective is to assess whether Michel de Certeau's distinction between producers and consumers, strategies and tactics, remains operative in the graphical interfaces of Scott Lash's global information culture. To this end, it distinguishes two types of performativity of speech acts: the top-down performativity of software, and the bottom-up performativity of language games and forms of life. These types are applied to a discourse analysis of the slogans that appear on the websites of "open" initiatives and of the collaborative economy, since the former are devoted to the production of immaterial goods and the latter to the production of material goods. The analysis shows how the two types of performativity transform the textual analysis of literary and film studies into a methodology capable of investigating material, human and non-human actions. The conclusions describe the emergence of new narrative conventions of power and control, outside fiction, that point to a "DIY society".
Abstract:
Thanks to the growth, expansion and popularisation of the World Wide Web, its technological development has a growing importance in society. The symbiosis between these two environments has led to a greater social influence on the platform's innovations and a much more practical focus. Our objective in this article is to describe, characterise and analyse the emergence and diffusion of the new hypertext standard that governs the Web: HTML5. At the same time, we explore this process in the light of several theories that bring together technology and society. We pay special attention to users of the World Wide Web and to their generic use of Social Media. We suggest that the development of web standards is influenced by the everyday use of this new type of technologies and applications.
Abstract:
This deliverable is software; as such, this document is abridged to be as succinct as possible, and the extended descriptions and detailed documentation for the software are online. The document consists of two parts: part one describes the first bundle of social gamification assets developed in WP3, and part two presents mock-ups of the RAGE ecosystem gamification. In addition to the software outline, part one includes a short market analysis of existing gamification solutions, the rationale for combining the three social gamification assets into one unified asset, and the branding exercise undertaken to make the assets more developer friendly. Online links to the source code, binaries, demo and documentation for the assets are provided. The combined assets offer game developers, as well as a wide range of software developers, the opportunity to readily enhance existing games or digital platforms with multiplayer gamification functionalities, catering for both competitive and cooperative game dynamics. The solution consists of a flexible client-server system which can either run as a cloud-based service serving many games or have dedicated instances for individual games as necessary.
Abstract:
This deliverable (D1.4) is an intermediate document, expressly included to inform the first project review about RAGE’s methodology of software asset creation and management. The final version of the methodology description (D1.1) will be delivered in Month 29. The document explains how the RAGE project defines, develops, distributes and maintains a series of applied gaming software assets that it aims to make available. It describes a high-level methodology and infrastructure that are needed to support the work in the project as well as after the project has ended.