98 results for requirement specification


Relevance: 10.00%

Abstract:

This project was developed within the ART-WiSe framework of the IPP-HURRAY group (http://www.hurray.isep.ipp.pt) at the Polytechnic Institute of Porto (http://www.ipp.pt). The ART-WiSe (Architecture for Real-Time communications in Wireless Sensor networks) framework (http://www.hurray.isep.ipp.pt/art-wise) aims at providing new communication architectures and mechanisms that improve the timing performance of Wireless Sensor Networks (WSNs). The architecture is based on a two-tiered protocol structure relying on existing standard communication protocols: IEEE 802.15.4 (Physical and Data Link Layers) and ZigBee (Network and Application Layers) for Tier 1, and IEEE 802.11 for Tier 2, which serves as a high-speed backbone for Tier 1 without energy consumption restrictions. In this context, an application test-bed is being developed with the objective of implementing, assessing and validating the ART-WiSe architecture. This is particularly relevant for the ZigBee protocol: despite the strong commercial backing of the ZigBee Alliance (http://www.zigbee.org), there is currently neither an open-source implementation available to the community nor published work on its adequacy for larger-scale WSN applications. This project aims to fill these gaps by providing: a deep analysis of the ZigBee specification, mainly addressing the Network Layer and particularly its routing mechanisms; an identification of the ambiguities and open issues in the ZigBee protocol standard; proposals of solutions to those problems; an implementation of a subset of the ZigBee Network Layer, namely the association procedure and the tree routing, on our technological platform (MICAz motes, the TinyOS operating system and the nesC programming language); and an experimental evaluation of that routing mechanism for WSNs.
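For readers unfamiliar with ZigBee tree routing, the sketch below (in Python, purely for readability; the project's actual implementation is in nesC on TinyOS/MICAz) illustrates the address-based forwarding rule from the ZigBee specification. Cm (maximum children), Rm (maximum router children) and Lm (maximum depth) are the network-formation parameters; all function and variable names here are illustrative.

```python
def cskip(depth, Cm, Rm, Lm):
    """Size of the address sub-block assigned to each router child at `depth`."""
    if Rm == 1:
        return 1 + Cm * (Lm - depth - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - depth - 1)) // (1 - Rm)

def next_hop(my_addr, my_depth, parent_addr, dest, Cm, Rm, Lm):
    """Tree routing: forward down if `dest` is in our sub-block, else up."""
    skip = cskip(my_depth, Cm, Rm, Lm)
    block = cskip(my_depth - 1, Cm, Rm, Lm) if my_depth > 0 else 1 << 16
    if not (my_addr < dest < my_addr + block):
        return parent_addr                      # not a descendant: route up
    if dest > my_addr + Rm * skip:
        return dest                             # an end-device child of ours
    # otherwise relay to the router child whose sub-block contains dest
    return my_addr + 1 + ((dest - (my_addr + 1)) // skip) * skip
```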

Relevance: 10.00%

Abstract:

Typically, common embedded systems are designed under tight resource constraints. Static designs are often chosen to address very specific use cases. In contrast, a dynamic design must be used if the system must supply a real-time service whose input may contain factors of indeterminism. Adding new functionality to these systems thus often entails longer development time, more testing and higher costs, since new functionality pushes the system's complexity and dynamics to a higher level. Usually, these systems have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of the system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making them available to external applications, usually through a run-time monitoring mechanism. However, such a mechanism can impose a burden on the system itself if not wisely used. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism while achieving code separation, run-time efficiency and flexibility for the final developer.
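As a minimal sketch (not the paper's framework, and with all names hypothetical) of how a monitoring mechanism can keep its burden on the monitored system bounded: the scheduler side appends fixed-size records to a preallocated ring buffer in O(1), and an external application drains the buffer asynchronously.

```python
from collections import namedtuple
import itertools, time

Event = namedtuple("Event", "seq timestamp task kind")  # hypothetical record

class TraceRing:
    def __init__(self, capacity=1024):
        self.buf = [None] * capacity   # preallocated: no allocation at run time
        self.capacity = capacity
        self.seq = itertools.count()

    def record(self, task, kind):
        """Called from the scheduling path: O(1), overwrites oldest on overflow."""
        n = next(self.seq)
        self.buf[n % self.capacity] = Event(n, time.monotonic(), task, kind)

    def drain(self, since=0):
        """Called by the external monitor: events newer than `since`, in order."""
        return sorted((e for e in self.buf if e and e.seq >= since),
                      key=lambda e: e.seq)
```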

Relevance: 10.00%

Abstract:

In practice, robotic manipulators exhibit some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry where weight is an important issue, aggravates the problem of intense vibrations. Moreover, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics. Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be exploited to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in obtaining suitable signals of the vibration phenomenon. The study of these issues can therefore help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented. Several authors have addressed the subject of sensor classification. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology. They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications. Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework for integration is needed. Sensor fusion, fuzzy logic and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logical sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. Recent developments in micro electro-mechanical sensors (MEMS) with wireless communication capabilities enable sensor networks with interesting capabilities. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.
This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, the paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, Section 4 draws the main conclusions and points out future work.
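To make the idea concrete, the sketch below shows the kind of statistical and spectral features such a classification could compute per sensor channel. The feature set is illustrative only; the paper's actual metrics are defined in its own sections.

```python
import numpy as np

def signal_features(x, fs):
    """Simple statistical and spectral descriptors of signal x sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    X = np.abs(np.fft.rfft(xc))
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = X ** 2
    centroid = float((f * power).sum() / power.sum())  # dominant frequency region
    return {
        "mean": float(x.mean()),
        "std": float(x.std()),
        "kurtosis": float((xc ** 4).mean() / x.std() ** 4),
        "spectral_centroid": centroid,
    }
```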

Relevance: 10.00%

Abstract:

Technology plays a double role in education: it can act as a facilitator in the teaching/learning process, and it can be the very subject of that process in science and engineering courses. This is especially true when students perform laboratory activities in which they interact with equipment and objects under experimentation. In this context, technology can also play a facilitator role if it allows students to perform experiments remotely, through the Internet, in a so-called weblab or remote laboratory. The Internet has been revolutionizing the educational process in many aspects, and remote laboratories are just one facet of that ongoing revolution. As with any other educational tool or resource, (i) the pedagogical approach and (ii) the technology used in the development of a remote laboratory can dictate its general success or its ephemeral existence. By pedagogical approach we mean the way remote experiments address the process by which students acquire experimental skills and link experimental results to theoretical concepts. With respect to technology, we discuss different specification and implementation alternatives to make the case that adopting a family of standards would positively contribute to a wider acceptance and utilization of remote laboratories, and also to broader collaboration in their development.

Relevance: 10.00%

Abstract:

Basel III will have a significant impact on the European banking sector. In September 2010, supervisors of various countries adopted the new rules proposed by the Basel Committee on Banking Supervision, to be applied to the business of credit institutions (hereinafter CIs) in a phased manner from 2013, with full implementation expected by 2019. The purpose of this new regulation, commonly known as Basel III, is to limit the excessive risk-taking of these institutions in the period preceding the global financial crisis of 2008. Given the Basel II requirement that banks and their supervisors assess the soundness and adequacy of internal credit risk measurement and management systems, the development of methodologies for the validation of internal and external evaluation systems is clearly an important issue. More specifically, there is a need to develop tools to validate the systems used to generate the parameters (such as PD, LGD, EAD and the ratings of perceived risk) that serve as starting points for the IRB approach to credit risk. In this context, this work comprises a number of approaches and tools used to evaluate the robustness of these IRB system elements.
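For context, the IRB parameters named above combine into the standard expected-loss identity (a textbook relation, not a contribution of this work):

EL = PD × LGD × EAD

where PD is the probability of default, LGD the loss given default (expressed as a fraction of the exposure) and EAD the exposure at default. Validating an IRB system amounts largely to checking that these estimated inputs are sound.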

Relevance: 10.00%

Abstract:

This article presents a new approach (MM-GAV-FBI) to the project scheduling problem with resource constraints and multiple execution modes per activity, known in the literature as the MRCPSP. Each project has a set of activities with defined technological precedences and a set of limited resources, and each activity may have more than one execution mode. Projects are scheduled using a schedule generation scheme (SGS) integrated with a metaheuristic. The metaheuristic is based on the genetic algorithm paradigm. Activity priorities are obtained from a genetic algorithm. The chromosome representation is based on random keys. The SGS generates non-delay schedules. After a solution is obtained, a local improvement is applied. The objective of the approach is to find the best schedule, that is, the one with the shortest possible duration that satisfies the activity precedences and the resource constraints. The proposed approach is tested on a set of benchmark problems taken from the literature, and the computational results are compared with other approaches. The computational results validate the good performance of the approach, not only in terms of solution quality but also in terms of computation time.
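A minimal sketch of the random-key decoding idea in the spirit of the approach: each activity gets a gene in [0, 1), and among the activities whose predecessors have finished, the one with the highest key is scheduled next. The resource and mode handling of the real MRCPSP method is omitted for brevity; all names are illustrative.

```python
def decode(keys, successors):
    """Turn a random-key chromosome into a precedence-feasible activity order."""
    n = len(keys)
    preds = {j: set() for j in range(n)}
    for i, succs in successors.items():
        for j in succs:
            preds[j].add(i)
    done, order = set(), []
    while len(order) < n:
        eligible = [j for j in range(n) if j not in done and preds[j] <= done]
        j = max(eligible, key=lambda a: keys[a])   # priority = random key
        order.append(j)
        done.add(j)
    return order

# e.g. decode([0.3, 0.9, 0.1], {0: [2], 1: [2]}) -> [1, 0, 2]
```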

Relevance: 10.00%

Abstract:

This paper presents a low-cost scaled model of a silo for drying and airing cereal grains. It allows several parameters associated with the silo's operation to be controlled and monitored through a remotely accessible infrastructure. The scaled model consists of a 2.50 m wide × 2.10 m long plant with all control and monitoring capabilities provided by micro-Web servers. An application running on the micro-Web servers stores all parameters in a database for later analysis. The implemented model aims to support a remote experimentation facility for technological education, research-oriented tutorials, and industrial applications. Given the low-cost requirement, this remote facility can easily be replicated in other institutions to support a network of remote labs that accommodates the concurrent access of several users (e.g. students).
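Purely as an illustration of the logging idea (not the actual micro-Web-server firmware; the sensors and field names are hypothetical), the sketch below periodically samples silo parameters and stores them in a database for later analysis.

```python
import sqlite3, time, random

db = sqlite3.connect("silo.db")
db.execute("""CREATE TABLE IF NOT EXISTS readings
              (ts REAL, temperature REAL, humidity REAL, fan_on INTEGER)""")

def sample():
    # stand-ins for the real sensor reads
    return time.time(), 20 + random.random(), 55 + random.random(), 1

for _ in range(3):                  # the real system would loop indefinitely
    db.execute("INSERT INTO readings VALUES (?, ?, ?, ?)", sample())
    db.commit()
    time.sleep(1)
```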

Relevance: 10.00%

Abstract:

These days the learning experience is no longer confined within the four walls of a classroom. Computers, and primarily the Internet, have broadened this horizon by creating a way of delivering education known as e-learning. In the meantime, the Internet, or more precisely the Web, is heading towards a new paradigm where the user is no longer just a consumer of information but becomes an active part in the communication. This two-way channel, in which the user takes on the role of content producer, triggered the appearance of new types of services such as social networks, blogs and wikis. To seize this second generation of communities and services, educational vendors are willing to develop e-learning systems focused on new and emergent user needs. This paper describes the analysis and specification of an e-learning environment at our school (ESEIG) oriented towards this new Web generation, called PEACE (Project for ESEIG Academic Environment). This new model relies on the integration of several services controlled by teachers and students, such as social networks, repository libraries, e-portfolios, e-conference systems, intelligent tutors, recommendation systems, automatic evaluators, virtual classrooms and 3D avatars.

Relevance: 10.00%

Abstract:

Text file evaluation is an emergent topic in e-learning that responds to the shortcomings of assessment based on questions with predefined answers. Questions with predefined answers are formalized in languages such as the IMS Question & Test Interoperability (QTI) specification and supported by many e-learning systems. Complex evaluation domains justify the development of specialized evaluators that participate in several business processes. The goal of this paper is to formalize the concept of text file evaluation in the scope of the E-Framework, a service-oriented framework for the development of e-learning systems maintained by a community of practice. The contribution includes an abstract service type and a service usage model. The former describes the generic capabilities of a text file evaluation service. The latter is a business process involving a set of services, such as repositories of learning objects and learning management systems.
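As a rough illustration of what an abstract service type for text file evaluation could look like as an interface (all names are hypothetical; the paper defines the actual E-Framework service type), an evaluator receives a submitted file plus an exercise reference and returns a report:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Report:
    grade: float          # normalized mark in [0, 1]
    feedback: str         # textual feedback for the student

class TextFileEvaluator(ABC):
    @abstractmethod
    def evaluate(self, submission: bytes, exercise_id: str) -> Report:
        """Evaluate one submitted file against the referenced exercise."""

class LengthEvaluator(TextFileEvaluator):
    """Toy evaluator: full marks if the file is non-empty."""
    def evaluate(self, submission, exercise_id):
        ok = len(submission.strip()) > 0
        return Report(1.0 if ok else 0.0, "ok" if ok else "empty submission")
```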

Relevance: 10.00%

Abstract:

The concept of the Learning Object (LO) is crucial for standardization in eLearning. The latest LO standard from the IMS Global Learning Consortium is the IMS Common Cartridge (IMS CC), which organizes and distributes digital learning content. By analyzing this new specification we identified two interoperability levels: content and communication. A common content format is the backbone of interoperability and the basis for content exchange among eLearning systems. Communication is more than just exchanging content; it also includes access to specialized systems and services and reporting on content usage. This is particularly important when LOs are used for evaluation. In this paper we analyze the Common Cartridge profile based on the two interoperability levels we propose. We detail its data model, which comprises a set of derived schemata referenced by the CC schema, and we explore the use of the IMS Learning Tools Interoperability (LTI) specification to allow remote tools and content to be integrated into a Learning Management System (LMS). In order to test the applicability of IMS CC to automatic evaluation, we define a representation of programming exercises using this standard. This representation is intended to be the cornerstone of a network of eLearning systems where students can solve computer programming exercises and obtain feedback automatically. The CC learning object is automatically generated from an XML dialect called PExIL, which aims to consolidate all the data needed to describe resources within the programming exercise life-cycle. Finally, we test the generated cartridge on the IMS CC online validator to verify its conformance with the IMS CC specification.
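At the packaging level, an IMS CC cartridge is a zip archive with an imsmanifest.xml at its root describing the contained resources. The sketch below shows only that packaging step; the manifest body is a placeholder, not a schema-valid IMS CC manifest (the paper generates the real one from the PExIL description).

```python
import zipfile

manifest = "<manifest><!-- generated from the PExIL description --></manifest>"

with zipfile.ZipFile("exercise.imscc", "w") as cc:
    cc.writestr("imsmanifest.xml", manifest)                       # at zip root
    cc.writestr("exercise/statement.html", "<p>Problem statement...</p>")
```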

Relevance: 10.00%

Abstract:

This paper describes the environmental monitoring / regatta beacon buoy under development at the Laboratory of Autonomous Systems (LSA) of the Polytechnic Institute of Porto. On the one hand, environmental monitoring of open water bodies in real or deferred time is essential for assessment and sensible decision-making; on the other hand, the real-time broadcast of position and of water- and wind-related parameters allows autonomous boats to optimise their regatta performance. Rather than restraining the boats' autonomy, this proposal fosters the development of intelligent behaviour by allowing the boats to focus on regatta strategy and tactics. The Nautical and Telemetric Application (NAUTA) buoy is a dual-mode reconfigurable system that includes communications, control, data logging, sensing, storage and power subsystems. In environmental monitoring mode, the buoy gathers and stores data from several underwater and above-water sensors; in regatta mode, the buoy becomes an active course mark for the autonomous sailing boats in its vicinity. During a race, the buoy broadcasts its position together with the local wind and water current conditions, allowing autonomous boats to navigate towards and round the mark successfully. This project started with the specification of the requirements of the dual-mode operation, followed by the design and building of the buoy structure. The research is currently focused on the development of the modular, reconfigurable, open-source-based control system. The NAUTA buoy is innovative, extensible and optimises the on-board platform resources.
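To illustrate the regatta-mode broadcast (the message layout, port and transport are hypothetical; the paper does not specify them), a buoy could periodically send its position and the local wind and current readings so nearby boats can plan the rounding:

```python
import json, socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

def broadcast(lat, lon, wind_dir, wind_speed, cur_dir, cur_speed):
    msg = {"ts": time.time(), "lat": lat, "lon": lon,
           "wind": {"dir": wind_dir, "speed": wind_speed},
           "current": {"dir": cur_dir, "speed": cur_speed}}
    sock.sendto(json.dumps(msg).encode(), ("255.255.255.255", 5005))

broadcast(41.1780, -8.6005, 270.0, 6.2, 180.0, 0.4)   # example reading
```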

Relevance: 10.00%

Abstract:

Existing adaptive educational hypermedia systems have used learning resource sequencing approaches to enrich the learning experience. In this context, educational resources, whether expository or evaluative, play a central role. However, there is a lack of tools that support sequencing, essentially because the existing specifications are complex. This paper presents Seqins, a sequencing tool for digital educational resources. Seqins includes a simple and flexible sequencing model that allows heterogeneous students to learn at different paces. The tool communicates, through the IMS Learning Tools Interoperability specification, with a plethora of e-learning systems such as learning management systems, repositories, authoring tools and automatic evaluation systems. In order to validate Seqins, we integrated it into an instance of the Ensemble e-learning framework for the computer programming learning domain.
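A minimal sketch of a simple sequencing rule in the spirit described, where an evaluative resource unlocks the next resources once the student's score reaches a threshold, letting students advance at their own pace. The structure and field names are illustrative, not Seqins' actual model.

```python
def next_resources(graph, scores, threshold=0.5):
    """graph: resource -> resources it unlocks; scores: resource -> grade."""
    unlocked = {r for r, s in scores.items() if s >= threshold}
    frontier = {nxt for r in unlocked for nxt in graph.get(r, [])}
    return frontier - set(scores)          # unlocked but not yet attempted

# e.g. next_resources({"intro": ["loops"], "loops": ["arrays"]},
#                     {"intro": 0.8}) -> {"loops"}
```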

Relevance: 10.00%

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through the use of processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. In this workshop, six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences.
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that purpose, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), are used for the comparative analysis of the similarity measures on objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In the next presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and whose outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.

Relevance: 10.00%

Abstract:

Composition is a practice of key importance in software engineering. When real-time applications are composed, their timing properties (such as meeting deadlines) must be guaranteed. The composition is performed by establishing an interface between the application and the physical platform. Such an interface typically contains information about the amount of computing capacity needed by the application. For multiprocessor platforms, the interface should also present information about the degree of parallelism. Several interface proposals have recently been put forward in various research works. However, those interfaces are either too complex to handle or too pessimistic. In this paper we propose the generalized multiprocessor periodic resource model (GMPR), which is strictly superior to the MPR model without requiring an overly detailed description. We then derive a method to compute the interface from the application specification. This method has been implemented in Matlab routines that are publicly available.
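As a hedged sketch of how such an interface could be represented, inferred from the abstract rather than from the paper's definitions: a period P and a vector of cumulative budgets, where budgets[k-1] is the computation time guaranteed every period using at most k processors. The field names and validity checks below are illustrative; the paper and its public Matlab routines define the real GMPR model.

```python
from dataclasses import dataclass

@dataclass
class GMPR:
    period: float
    budgets: tuple        # (Theta_1, ..., Theta_m), non-decreasing in parallelism

    def is_valid(self):
        th = self.budgets
        nondecreasing = all(a <= b for a, b in zip(th, th[1:]))
        feasible = all(t <= (k + 1) * self.period for k, t in enumerate(th))
        return nondecreasing and feasible

    def utilization(self):
        return self.budgets[-1] / self.period   # total bandwidth over m CPUs

iface = GMPR(period=10.0, budgets=(6.0, 11.0, 14.0))
assert iface.is_valid()
```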

Relevance: 10.00%

Abstract:

The application of mathematical methods and computer algorithms to the analysis of economic and financial data series aims to give empirical descriptions of the hidden relations between many complex or unknown variables and systems. This strategy overcomes the requirement of building models upon a set of 'fundamental laws', which is the usual paradigm for studying phenomena in physics and engineering. In spite of this shortcut, financial series prove hard to tackle, involving complex memory effects and an apparently chaotic behaviour. Several measures for describing these objects have been adopted by market agents but, due to their simplicity, they are not capable of coping with the diversity and complexity embedded in the data. Therefore, it is important to propose new measures that, on the one hand, are easily interpretable by practitioners and, on the other hand, are capable of capturing a significant part of the dynamical effects.
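One standard, easily interpretable probe of the memory effects mentioned above (an illustration only, not one of the measures proposed in the paper) is the sample autocorrelation of returns at increasing lags:

```python
import numpy as np

def autocorr(x, max_lag=10):
    """Sample autocorrelation of x at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return [float(np.dot(x[:-k], x[k:]) / var) for k in range(1, max_lag + 1)]

# synthetic log-price series, just to make the sketch runnable
prices = 100.0 * np.exp(np.cumsum(0.01 * np.random.randn(1000)))
returns = np.diff(np.log(prices))
print(autocorr(returns, max_lag=5))
```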