987 results for software implementation
Abstract:
For several years, Kia Motors Corporation (KMC) has aimed to create and implement a business solution focused on standardized enterprise management for all Kia distributors in Latin America: Colombia, Peru, Ecuador and Chile. Under the current process, distributors in Latin America receive all information related to financial statements from their dealers by email, together with a physical database that is progressively archived. The process is manual and demands considerable dedication and time to fulfil the required functions. The current purpose of this process is clear: to analyze the performance of each dealer in the network and to identify opportunities for improvement. KMC and all its distributors are interested in a simple, suitable and easy-to-use enterprise management system that will allow every dealer to present its financial statements and performance figures directly to its distributor in a standardized way. The desired system must therefore be able to generate results based on what the distributors report and to provide a set of features, with adequate functionality, that allows every user in the network (dealers, distributors and KMC) to analyze the company's performance and identify the areas that require improvement. This document presents the work carried out by METROKIA S.A. to create and apply a software tool focused on the goals described above. It has been a multi-stage process in which both the variables and the performance indicators have been revised so that they can be read and understood easily by the whole organization and its network of affiliated dealers.
Abstract:
Abstract taken from the publication
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be gauged more accurately if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' ability to judge which objects an avatar is looking at when only head movements are displayed, with the eyes remaining static, and when both eye gaze and head movement information are displayed. The results of the experiment show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results of the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and in its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
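As an illustration of the final step described above, the sketch below shows one simple way to turn tracked head orientation plus eye-in-head angles from an eye tracker into a world-space gaze direction for an avatar. The angle-based composition and all names are assumptions for illustration, not the authors' implementation.

// Hypothetical sketch: combining tracked head orientation with eye-in-head
// angles from an eye tracker to obtain a world-space gaze direction.
public final class GazeDirection {

    /** Returns a unit gaze vector given head yaw/pitch and eye yaw/pitch (radians). */
    public static double[] worldGaze(double headYaw, double headPitch,
                                     double eyeYaw, double eyePitch) {
        // Compose the eye-in-head rotation with the head rotation.
        // A full implementation would use quaternions; summing yaw and pitch
        // is a first-order approximation adequate for small eye angles.
        double yaw = headYaw + eyeYaw;
        double pitch = headPitch + eyePitch;

        // Convert spherical angles to a Cartesian direction (x right, y up, z forward).
        double x = Math.cos(pitch) * Math.sin(yaw);
        double y = Math.sin(pitch);
        double z = Math.cos(pitch) * Math.cos(yaw);
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        double[] g = worldGaze(Math.toRadians(20), 0, Math.toRadians(-5), Math.toRadians(2));
        System.out.printf("gaze = (%.3f, %.3f, %.3f)%n", g[0], g[1], g[2]);
    }
}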
Experimental comparison of the comprehensibility of a Z specification and its implementation in Java
Abstract:
Comprehensibility is often raised as a problem with formal notations, yet formal methods practitioners dispute this. In a survey, one interviewee said 'formal specifications are no more difficult to understand than code'. Measurement of comprehension is necessarily comparative and a useful comparison for a specification is against its implementation. Practitioners have an intuitive feel for the comprehension of code. A quantified comparison will transfer this feeling to formal specifications. We performed an experiment to compare the comprehension of a Z specification with that of its implementation in Java. The results indicate there is little difference in comprehensibility between the two. (C) 2004 Elsevier B.V. All rights reserved.
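The paper's actual specification is not reproduced here, but the hypothetical pairing below illustrates the kind of artefacts such a comprehension experiment compares: a small operation whose Z-style pre- and postconditions are written as comments directly above the Java code that implements them.

// Illustrative only: a hypothetical "bounded counter" standing in for the
// specification/implementation pair whose comprehension is compared.
public final class BoundedCounter {
    // Z-style state invariant: 0 <= value <= max
    private final int max;
    private int value;

    public BoundedCounter(int max) {
        if (max < 0) throw new IllegalArgumentException("max must be non-negative");
        this.max = max;
        this.value = 0;
    }

    // Increment operation
    //   pre:  value < max
    //   post: value' = value + 1
    public void increment() {
        if (value >= max) throw new IllegalStateException("counter is full");
        value++;
    }

    public int value() {
        return value;
    }
}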
Abstract:
We describe a compositional framework, together with its supporting toolset, for hardware/software co-design. Our framework is an integration of a formal approach within a traditional design flow. The formal approach is based on Interval Temporal Logic and its executable subset, Tempura. Refinement is the key element in our framework because it derives the software and hardware parts of the implementation from a single formal specification of the system, while preserving all properties of the system specification. During refinement, simulation is used to choose the appropriate refinement rules, which are applied automatically in the HOL system. The framework is illustrated with two case studies. The work presented is part of a UK collaborative research project between the Software Technology Research Laboratory at De Montfort University and the Oxford University Computing Laboratory.
Abstract:
Much consideration is rightly given to the design of metadata models to describe data. At the other end of the data-delivery spectrum much thought has also been given to the design of geospatial delivery interfaces such as the Open Geospatial Consortium standards, Web Coverage Service (WCS), Web Map Server and Web Feature Service (WFS). Our recent experience with the Climate Science Modelling Language shows that an implementation gap exists where many challenges remain unsolved. To bridge this gap requires transposing information and data from one world view of geospatial climate data to another. Some of the issues include: the loss of information in mapping to a common information model, the need to create ‘views’ onto file-based storage, and the need to map onto an appropriate delivery interface (as with the choice between WFS and WCS for feature types with coverage-valued properties). Here we summarise the approaches we have taken in facing up to these problems.
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
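As a concrete illustration, the sketch below builds the kind of standard WMS 1.3.0 GetMap request a client might send to an ncWMS endpoint, including the optional TIME and ELEVATION dimensions that multidimensional gridded data requires. The host name, layer name and parameter values are hypothetical; only the query parameters themselves come from the WMS specification.

// Minimal sketch of a WMS 1.3.0 GetMap request against a hypothetical ncWMS endpoint.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class GetMapExample {
    public static void main(String[] args) throws Exception {
        String url = "https://example.org/ncWMS/wms"                 // hypothetical endpoint
                + "?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap"
                + "&LAYERS=sea_surface_temperature"                   // hypothetical layer name
                + "&STYLES="
                + "&CRS=CRS:84&BBOX=-180,-90,180,90"
                + "&WIDTH=1024&HEIGHT=512"
                + "&FORMAT=image/png"
                + "&TIME=2010-01-01T00:00:00Z"                        // optional time dimension
                + "&ELEVATION=0";                                     // optional vertical dimension

        // Fetch the rendered map image and report its size.
        HttpResponse<byte[]> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofByteArray());
        System.out.println("HTTP " + response.statusCode()
                + ", " + response.body().length + " bytes of map imagery");
    }
}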
Abstract:
Running hydrodynamic models interactively allows both visual exploration and modification of model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for the models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control of and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth) that also focus on efficient interaction with 3D environments? In these domains high efficiency is usually achieved by the use of computer graphics algorithms such as surface simplification depending on the current view and distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be reused in the context of interactive hydrodynamic modeling without significant changes to the model code, allowing model operation both on multi-core CPU personal computers and on high-performance computer clusters.
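One plausible shape for the control-and-inspection API mentioned above is sketched below: the model exposes time stepping together with typed get/set access to named state variables, so a visualisation front end can read water levels every frame and push user edits back between steps. The interface and method names are assumptions, not an existing API.

// Hypothetical sketch of an API giving a front end control of, and access to,
// the internal state of a running hydrodynamic model.
public interface InteractiveModel {

    /** Load the model schematisation and allocate state. */
    void initialize(String configFile);

    /** Advance the simulation by one time step (seconds). */
    void step(double dt);

    /** Read a named state variable, e.g. "water_level", as a flat array over cells. */
    double[] getValues(String variable);

    /** Overwrite part of a named state variable, e.g. after the user edits a region. */
    void setValues(String variable, int[] cellIndices, double[] newValues);

    /** Current model time in seconds since the simulation start. */
    double currentTime();
}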
Abstract:
For more than 30 years Brazil has developed specific policies for the informatics sector, from the National Informatics Policy of the 1970s, through the Market Reserve period of the 1980s, to the present day, in which Information and Communication Technologies (ICT) are regarded as one of the priority areas of industrial policy. Among the current goals, the focus on expanding the volume of software and services exports stands out. Despite these ambitions, however, the country has not achieved significant international prominence in the sector. India, on the other hand, also considered an emerging country and a member of the BRIC group, exported around US$47 billion in software and Information Technology (IT) services in 2009, standing out as a leading player in the sector's international market. The implementation of a technically sophisticated industry such as software, which demands an environment conducive to innovation, in a developing country like India draws attention. Certainly there were legal-institutional arrangements at work in that country. Which ones? To what extent did those arrangements help the Indian development of the sector? And in Brazil? This work starts from the hypothesis that the legal-institutional environment of these countries defined distinct knowledge flows, influencing the type of development of each country's software sector. The specific objective of this research is to investigate how these legal-institutional arrangements, among other socio-economic factors, influenced the different configurations of knowledge flows. The legal-institutional environment is understood here as all regulations that establish institutions, guidelines and common conditions for a given subject. Starting from the assumption that the software sector carries out knowledge-intensive activities, for each country only the legal-institutional arrangements that had, or have, the power to shape knowledge flows relevant to the sector will be analyzed, whether they stem from trade policies (export and import, or intellectual property) or from investment policies for innovation. The fundamental question goes beyond the debate over whether or not the State should intervene, focusing instead on the different types of involvement observed and their effects. To this end, in addition to a literature review, field research was carried out in India (Delhi, Mumbai, Bangalore) and Brazil (São Paulo, Brasília and Rio de Janeiro), where interviews were conducted with software companies and associations, public managers and academics who study the sector.
Abstract:
Generalized hyper-competitiveness in world markets has created the need to offer better products to potential and actual clients in order to gain an advantage over competitors. To ensure the production of an adequate product, enterprises need to work on the efficiency and efficacy of their business processes (BPs) by means of the construction of Interactive Information Systems (IISs, including Interactive Multimedia Documents), so that these processes run more fluidly and correctly. The construction of the correct IIS is a major task that can only succeed if the needs of every stakeholder are taken into account. Their requirements must be defined with precision and extensively analyzed, and the system must then be accurately designed in order to minimize implementation problems, so that the IIS is produced on schedule and with as few mistakes as possible. The main contribution of this thesis is the proposal of Goals, a software (engineering) construction process that defines the tasks to be carried out in order to develop software. This process defines the stakeholders, the artifacts, and the techniques that should be applied to achieve correctness of the IIS. Complementarily, the process suggests two methodologies to be applied in the initial phases of the software engineering lifecycle: Process Use Cases for the requirements phase, and MultiGoals for the analysis and design phases. Process Use Cases is a UML-based (Unified Modeling Language), goal-driven and use-case-oriented methodology for the definition of functional requirements. It uses an information-oriented strategy to identify BPs while constructing the enterprise's information structure, and finishes with the identification of use cases within the design of these BPs. This approach provides a useful tool for both Business Process Management and Software Engineering activities. MultiGoals is a UML-based, use-case-driven and architecture-centric methodology for the analysis and design of IISs with support for multimedia. It proposes the analysis of user tasks as the basis for the design of: (i) the user interface; (ii) the system behaviour, modeled by means of patterns that can combine multimedia and standard information; and (iii) the database and media contents. The thesis presents these approaches theoretically, accompanied by examples from a real project that provide the support needed to understand the techniques used.
Abstract:
New technologies appear all the time, and their use can bring countless benefits both to those who use them directly and to society as a whole. In this context, the State can also use information and communication technologies to improve the delivery of services to citizens, to raise society's quality of life and to optimize public spending, focusing it on the main needs. Accordingly, there is much research on Electronic Government (e-Gov) policies and their main effects on citizens and society as a whole. This research studies the concept of Electronic Government and seeks to understand the process of implementing Free Software in the agencies of the Direct Administration of Rio Grande do Norte. It also deepens the analysis to identify whether this adoption reduces costs for the state treasury, and aims to characterize the role of Free Software in the Administration and the foundations of the Electronic Government policy in this state. Through qualitative interviews with technology coordinators and managers in three State Secretariats, it was possible to map the paths being taken by the Government to endow the State with technological capacity. It was found that Rio Grande do Norte is still immature with respect to electronic government (e-Gov) practices and Free Software, with few agencies having concrete and viable initiatives in this area. It still lacks a strategic definition of the role of Information Technology and greater investment in staff and equipment infrastructure. Advances were also observed, such as the creation of the normative body CETIC (State Council of Information and Communication Technology), the IT Master Plan, which provides a much-needed diagnosis of the state of IT in the State and proposes several goals for the area, the delivery of a postgraduate course for IT managers, and training in BrOffice (OpenOffice) for 1,120 public servants.
Abstract:
Motivated by social exclusion in Brazil and with a focus on digital inclusion, a project was started at the Federal University of Rio Grande do Norte that addresses, at the same time, concepts of collaborative learning and educational robotics, aimed at digitally excluded children. In this context, a methodology was created that covers both technological elements (e.g. informatics and robotics) and school subjects (e.g. Portuguese, Mathematics, Geography, History), contextualized in everyday situations. We observed collaborative learning in practice and the development of capacities in these students, such as group work, logical reasoning and learning ability. This work proposes educational software for robotics teaching called RoboEduc, created to be used by digitally excluded primary school children. Its design prioritizes a friendly interface that makes the concepts of robotics and programming easy and fun to teach. With this new tool, users with no previous knowledge of informatics or robotics are able to control a robot previously assembled from Lego kits, or even program it to carry out certain activities. This work describes the implementation of the second version of the software, which provides the robot control already in use; afterwards, the different programming levels linked to users' learning levels, with their different interfaces and functions, were implemented. The third version, which improves each of the stages mentioned, has now been implemented. In order to validate and test the efficiency of the methodology developed for RoboEduc, experiments based on robotics practice were carried out with fourth- and fifth-grade primary school children at the Professor Ascendino de Almeida municipal school, on the outskirts of Natal (west zone), Rio Grande do Norte. As a preliminary result, we verified that the use of robots combined with well-designed software can reach users who know very little about the subject, without the need for previous advanced technological knowledge. They thus proved to be accessible and efficient tools in the process of digital inclusion.
Abstract:
This dissertation develops software for a wireless sensor network (WSN) communication system that monitors analog and digital variables and controls the gas flow valve in Plunger Lift artificial oil lift units. The motivation for this implementation is that, in the studied plant configuration, the sensors communicate with the PLC (Programmable Logic Controller) through cables laid along the pipelines, which makes any change to the system, such as altering its layout, difficult, in addition to inconveniences arising from the nature of the site, such as nearby animals that tend to destroy the cables interconnecting the sensors and the PLC. For the software development, a polling communication method was used via the SMAC protocol (Simple Medium Access Control, IEEE 802.15.4 standard) in the CodeWarrior environment, which generated firmware loaded into the WSN transceivers present in the MC13193-EVK kit (all of the above are products of Freescale Semiconductor Inc.). The network monitoring and parameterization application was developed in LabVIEW software from National Instruments. The results were obtained by observing the behavior of the proposed sensor network, focusing on aspects such as the number of packets received and lost indoors and outdoors, general reliability of data transmission, coexistence with other types of wireless networks, and power consumption under different operating conditions. The results were considered satisfactory, demonstrating the efficiency of the software in this communication system.
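The sketch below gives a simplified picture of the polling scheme described above: a coordinator queries each sensor node in turn and counts replies versus timeouts, which is the basis for the received/lost packet statistics. The Radio abstraction is hypothetical; the actual firmware drives the MC1319x transceivers through the SMAC C API, and the monitoring application runs in LabVIEW.

// Simplified, hypothetical sketch of a polling coordinator for the sensor network.
import java.util.Optional;

public final class PollingCoordinator {

    /** Hypothetical abstraction over the 802.15.4 transceiver. */
    public interface Radio {
        void sendPollRequest(int nodeAddress);
        Optional<byte[]> receiveReply(int nodeAddress, long timeoutMillis);
    }

    private final Radio radio;
    private int received;
    private int lost;

    public PollingCoordinator(Radio radio) {
        this.radio = radio;
    }

    /** Poll every node once and update the packet statistics. */
    public void pollOnce(int[] nodeAddresses, long timeoutMillis) {
        for (int address : nodeAddresses) {
            radio.sendPollRequest(address);
            if (radio.receiveReply(address, timeoutMillis).isPresent()) {
                received++;
            } else {
                lost++;   // no reply within the timeout: count as a lost packet
            }
        }
    }

    public int receivedPackets() { return received; }
    public int lostPackets()     { return lost; }
}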
Abstract:
In this work we propose a software architecture for robotic boats intended to operate fully autonomously in diverse aquatic environments, sending telemetry to a base station and accomplishing assigned missions. This proposal is intended to be applied within the N-Boat project of the NatalNet Lab (DCA), whose goal is to enable a sailboat to navigate autonomously. The constituent components of this architecture are the memory, strategy, communication, sensing, actuation, energy, security and surveillance modules, which together make up the boat and base station systems. For validation, a simulator was developed in the C language using the OpenGL graphics API; its main results were obtained in the implementation of the memory, actuation and strategy modules, more specifically data sharing, control of sails and rudder, and planning of short routes based on a navigation algorithm, respectively. The experimental results shown in this study indicate the feasibility of actually using the developed software architecture and of applying it in the area of autonomous mobile robotics.
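As a hedged illustration of the data sharing attributed to the memory module above, the sketch below shows one way the sensing, strategy and actuation modules could exchange state through a shared store: sensing writes the latest boat state, strategy reads it and publishes sail and rudder setpoints, and actuation consumes them. All class and field names are assumptions; the project's simulator itself is written in C with OpenGL.

// Hypothetical sketch of a shared-memory store mediating between modules.
public final class SharedMemory {

    /** Latest navigation state written by the sensing module. */
    public static final class BoatState {
        public double latitude, longitude;   // GPS position
        public double headingDeg;            // compass heading
        public double windDirectionDeg;      // apparent wind direction
    }

    /** Setpoints written by the strategy module and read by actuation. */
    public static final class Setpoints {
        public double sailAngleDeg;
        public double rudderAngleDeg;
    }

    // volatile so the most recently published reference is visible across module threads;
    // writers publish a freshly built object rather than mutating the shared one.
    private volatile BoatState state = new BoatState();
    private volatile Setpoints setpoints = new Setpoints();

    public BoatState readState()              { return state; }
    public void writeState(BoatState s)       { state = s; }
    public Setpoints readSetpoints()          { return setpoints; }
    public void writeSetpoints(Setpoints sp)  { setpoints = sp; }
}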