17 results for learning environments
at Instituto Politécnico do Porto, Portugal
Abstract:
This chapter addresses the resolution of scheduling in manufacturing systems subject to perturbations. The planning of manufacturing systems frequently involves the resolution of a large number and variety of combinatorial optimisation problems with an important impact on the performance of manufacturing organisations. Examples of such problems are sequencing and scheduling problems in manufacturing management, routing and transportation, layout design, and timetabling.
Abstract:
This paper proposes an architecture for developing systems that interact with Ambient Intelligence (AmI) environments. The architecture arises from a methodology for the inclusion of Artificial Intelligence in AmI environments (ISyRAmI - Intelligent Systems Research for Ambient Intelligence). The ISyRAmI architecture comprises several modules. The first is related to the acquisition of data, information, and even knowledge. This data/information/knowledge concerns the AmI environment and can be acquired in different ways (from raw sensors, from the web, from experts). The second module is related to the storage, conversion, and handling of the data/information/knowledge, with the understanding that incorrectness, incompleteness, and uncertainty may be present. The third module is related to intelligent operation on the data/information/knowledge of the AmI environment, including knowledge discovery systems, expert systems, planning, multi-agent systems, simulation, optimization, etc. The last module is related to actuation in the AmI environment, by means of automation, robots, intelligent agents, and users.
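The four-module organisation described above lends itself to a simple pipeline sketch. The following Python sketch is purely illustrative; every class and method name is hypothetical and not part of the ISyRAmI specification:

```python
# Illustrative sketch of the four ISyRAmI modules as a data-flow pipeline.
# All class and method names here are hypothetical, not from the paper.

class Acquisition:
    """Module 1: acquire data/information/knowledge (sensors, web, experts)."""
    def read(self):
        # A raw sensor reading; in practice this could come from hardware or the web.
        return {"source": "sensor", "temperature": 21.5, "confidence": 0.9}

class Storage:
    """Module 2: store, convert, and handle possibly uncertain data."""
    def __init__(self):
        self.records = []
    def store(self, record):
        # Flag incomplete or low-confidence records instead of discarding them.
        record["uncertain"] = record.get("confidence", 1.0) < 0.5
        self.records.append(record)

class Intelligence:
    """Module 3: reason over stored data (rules, planning, optimisation...)."""
    def decide(self, records):
        latest = records[-1]
        return "heat_on" if latest["temperature"] < 20.0 else "heat_off"

class Actuation:
    """Module 4: act on the AmI environment (automation, robots, agents, users)."""
    def execute(self, command):
        return f"executed {command}"

acq, sto, intel, act = Acquisition(), Storage(), Intelligence(), Actuation()
sto.store(acq.read())
result = act.execute(intel.decide(sto.records))
```

The point of the sketch is the separation of concerns: each module can be replaced (a different sensor source, a different reasoning engine) without touching the others.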
Abstract:
As polycyclic aromatic hydrocarbons (PAHs) have a negative impact on human health due to their mutagenic and/or carcinogenic properties, the objective of this work was to study the influence of tobacco smoke on levels and phase distribution of PAHs and to evaluate the associated health risks. Air samples were collected at two homes; 18 PAHs (the 16 PAHs considered by U.S. EPA as priority pollutants, dibenzo[a,l]pyrene and benzo[j]fluoranthene) were determined in the gas phase and associated with thoracic (PM10) and respirable (PM2.5) particles. At the home influenced by tobacco smoke the total concentrations of the 18 PAHs in air ranged from 28.3 to 106 ng m⁻³ (mean of 66.7 ± 25.4 ng m⁻³), ∑PAHs being 95% higher than at the non-smoking one, where the values ranged from 17.9 to 62.0 ng m⁻³ (mean of 34.5 ± 16.5 ng m⁻³). On average 74% and 78% of ∑PAHs were present in the gas phase at the smoking and non-smoking homes, respectively, demonstrating that adequate assessment of PAHs in air requires evaluation of PAHs in both gas and particulate phases. When influenced by tobacco smoke, the health risk values were 3.5–3.6 times higher due to exposure to PM10. The lifetime lung cancer risks were 4.1 × 10⁻³ and 1.7 × 10⁻³ for the smoking and non-smoking homes, considerably exceeding the health-based guideline level at both homes, also due to the contribution of outdoor traffic emissions. The results showed that evaluation of benzo[a]pyrene alone would probably underestimate the carcinogenic potential of the studied PAH mixtures; in total, ten carcinogenic PAHs represented 36% and 32% of the gaseous ∑PAHs, and in the particulate phase they accounted for 75% and 71% of ∑PAHs at the smoking and non-smoking homes, respectively.
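Lifetime lung cancer risk of this kind is commonly estimated from a benzo[a]pyrene-equivalent concentration multiplied by the WHO unit risk factor. The sketch below shows the shape of that calculation only; the toxic equivalency factors are common literature values and the concentrations are invented for illustration, not the study's measurements:

```python
# Hedged sketch of a BaP-equivalent lifetime lung cancer risk estimate.
# The TEF values below are common literature values; the concentrations
# are illustrative and are NOT the measurements from this study.

WHO_UNIT_RISK = 8.7e-5  # lifetime risk per ng/m^3 of benzo[a]pyrene (WHO)

tef = {"benzo[a]pyrene": 1.0,
       "benzo[a]anthracene": 0.1,
       "benzo[b]fluoranthene": 0.1,
       "chrysene": 0.01}

concentrations_ng_m3 = {"benzo[a]pyrene": 1.2,      # hypothetical levels
                        "benzo[a]anthracene": 2.0,
                        "chrysene": 3.5}

# BaP-equivalent concentration: each PAH weighted by its TEF, then summed.
bap_eq = sum(c * tef.get(name, 0.0) for name, c in concentrations_ng_m3.items())
lifetime_risk = bap_eq * WHO_UNIT_RISK
```

Because each carcinogenic PAH contributes its TEF-weighted share, dropping everything except benzo[a]pyrene visibly shrinks the estimate, which is the point the abstract makes.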
Abstract:
This paper evaluates the usability of an Intelligent Wheelchair (IW) in both real and simulated environments. The wheelchair is controlled at a high level by a flexible multimodal interface, using voice commands, facial expressions, head movements, and a joystick as its main inputs. A quasi-experimental design was applied, including a deterministic sample with a questionnaire that enabled application of the System Usability Scale. The subjects were divided into two independent samples: 46 individuals performed the experiment with an Intelligent Wheelchair in a simulated environment (28 using the different commands in a sequential way and 18 free to choose the command), and 12 individuals performed the experiment with a real IW. The main conclusion of this study is that the usability of the Intelligent Wheelchair is higher in the real environment than in the simulated one. However, there was no statistical evidence of differences between the real and simulated wheelchairs in terms of safety and control. Moreover, most users considered the multimodal way of driving the wheelchair very practical and satisfactory. Thus, it may be concluded that the multimodal interface enables very easy and safe control of the IW in both simulated and real environments.
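The System Usability Scale mentioned above has a standard scoring rule: odd-numbered items contribute their score minus 1, even-numbered items contribute 5 minus their score, and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch, with hypothetical responses:

```python
# Standard System Usability Scale (SUS) scoring. The responses below are
# hypothetical, not data from the wheelchair study.

def sus_score(responses):
    """responses: ten 1-5 Likert answers, in questionnaire order (item 1 first)."""
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

score = sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])
```

A perfect questionnaire (5 on every odd item, 1 on every even item) yields 100; the hypothetical answers above score 85.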
Abstract:
Resource constraints are becoming a problem as many wireless mobile devices gain generality. Our work addresses this growing demand on resources and performance by proposing the dynamic selection of neighbour nodes for cooperative service execution. This selection is influenced by the user's quality-of-service requirements expressed in the request, tailoring the provided service to the user's specific needs. In this paper we improve our proposal's formulation algorithm with the ability to trade off time for the quality of the solution. At any given time a complete solution for service execution exists, and the quality of that solution is expected to improve over time.
Abstract:
The current work can be seen as a starting point for the discussion of the problem of risk acceptance criteria in occupational environments. Some obstacles to the formulation and use of quantitative acceptance criteria were analyzed, and the long history of major hazard accidents was also reviewed. This work shows that organizations can face several difficulties in formulating acceptance criteria and that the use of pre-defined acceptance criteria in risk assessment methodologies can be inadequate in some cases. It is urgent to define guidelines that can help organizations formulate risk acceptance criteria for occupational environments.
Abstract:
Virtual Reality (VR) has grown to become state-of-the-art technology in many business- and consumer-oriented E-Commerce applications. One of the major design challenges of VR environments is the placement of the rendering process, which converts the abstract description of a scene, as contained in an object database, into an image. This process is usually done at the client side, as in VRML [1], a technology that requires the client's computational power for smooth rendering. The vision of VR is also strongly connected to the issue of Quality of Service (QoS), as the perceived realism depends on an interactive frame rate ranging from 10 to 30 frames per second (fps), real-time feedback mechanisms, and realistic image quality. These requirements push traditional home computers, and even highly sophisticated graphical workstations, beyond their limits. Our work therefore introduces an approach for a distributed rendering architecture that gracefully balances the workload between the client and a cluster-based server. We believe that a distributed rendering approach as described in this paper has three major benefits: it reduces the client's workload, it decreases the network traffic, and it allows the re-use of already rendered scenes.
Abstract:
In this paper we survey the most relevant results for the priority-based schedulability analysis of real-time tasks, for both fixed and dynamic priority assignment schemes. We give emphasis to worst-case response time analysis in non-preemptive contexts, which is fundamental for communication schedulability analysis. We define an architecture to support priority-based scheduling of messages at the application process level of a specific fieldbus communication network, the PROFIBUS. The proposed architecture improves the worst-case message response time, overcoming the limitation of first-come-first-served (FCFS) PROFIBUS queue implementations.
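The worst-case response time analysis referred to above is classically computed as a fixed-point iteration. The sketch below implements the textbook fixed-priority recurrence with a blocking term; the non-preemptive message-level analysis in the paper refines this, so treat it as background rather than the paper's exact method:

```python
import math

def response_time(i, C, T, B):
    """Classical worst-case response-time fixed point for fixed-priority tasks.

    Tasks are indexed by priority (0 = highest). C: computation times,
    T: periods (deadlines assumed equal to periods), B: blocking terms.
    Iterates R_i = C_i + B_i + sum_{j<i} ceil(R_i / T_j) * C_j to a fixed point.
    """
    r = C[i] + B[i]
    while True:
        r_next = C[i] + B[i] + sum(math.ceil(r / T[j]) * C[j] for j in range(i))
        if r_next == r:
            return r          # converged: worst-case response time
        if r_next > T[i]:
            return None       # response time exceeds the period: unschedulable
        r = r_next

# Hypothetical task set: C and T in the same time units, no blocking.
C = [1, 2, 3]
T = [4, 6, 12]
B = [0, 0, 0]
R = [response_time(i, C, T, B) for i in range(3)]
```

For the hypothetical task set the iteration converges to response times of 1, 3, and 10 time units, all within their periods.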
Abstract:
The scarcity and diversity of resources among the devices of heterogeneous computing environments may affect their ability to perform services with specific Quality of Service constraints, particularly in dynamic distributed environments where the characteristics of the computational load cannot always be predicted in advance. Our work addresses this problem by allowing resource-constrained devices to cooperate with more powerful neighbour nodes, opportunistically taking advantage of globally distributed resources and processing power. Rather than assuming that the dynamic configuration of this cooperative service executes until it computes its optimal output, the paper proposes an anytime approach with the ability to trade off deliberation time for the quality of the solution. Extensive simulations demonstrate that the proposed anytime algorithms quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves at each iteration, with an overhead that can be considered negligible.
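An anytime algorithm of the kind described can be sketched as a loop that keeps a complete solution at all times and only ever replaces it with a better one. This generic local-search sketch is illustrative, not the paper's algorithm:

```python
import random
import time

def anytime_optimise(evaluate, neighbour, initial, deadline_s):
    """Generic anytime optimisation loop (illustrative, not the paper's method).

    Adopts an initial complete solution immediately, then keeps improving it;
    interrupting at any time still yields the best complete solution so far.
    """
    best, best_q = initial, evaluate(initial)
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        cand = neighbour(best)
        q = evaluate(cand)
        if q > best_q:              # monotone improvement: quality never drops
            best, best_q = cand, q
    return best, best_q

# Toy problem: maximise -(x - 3)^2, i.e. move x towards 3.
random.seed(0)
evaluate = lambda x: -(x - 3.0) ** 2
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
sol, quality = anytime_optimise(evaluate, neighbour, initial=0.0, deadline_s=0.05)
```

The key property, which the abstract also emphasises, is that a complete (if suboptimal) answer exists from the very first instant, and quality improves monotonically with deliberation time.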
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can substantially degrade system performance, preventing the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy-optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work-per-Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
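The work-per-Joule metric quoted above is simply useful work divided by energy consumed. A minimal sketch with invented numbers, chosen only to show how an improvement of roughly 12.9% would arise; they are not the experimental values:

```python
# Illustrative "work per Joule" calculation. All numbers are hypothetical,
# not results from the Google tracelog experiments.

def work_per_joule(completed_task_seconds, energy_joules):
    """Useful work delivered per unit of energy consumed."""
    return completed_task_seconds / energy_joules

baseline = work_per_joule(3600.0, 120000.0)   # same workload, more energy
proposed = work_per_joule(3600.0, 106300.0)   # same workload, less energy
improvement = (proposed - baseline) / baseline
```

With the same completed work, consuming about 11% less energy raises the work-per-Joule ratio by about 12.9%, which is the shape of the reported result.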
Abstract:
This paper presents an ongoing project that implements a platform for creating personal learning environments controlled by students, integrating Web 2.0 applications and content management systems. Using this platform, students can develop a personal learning environment (PLE) integrated with the Learning Management System (LMS) of the HEI, enabling the management of their learning and, simultaneously, the creation of an e-portfolio with digital content developed for Course Units (CU). All of this can be maintained after students complete their academic studies, since the platform will remain accessible to them even after they leave the HEI and lose access to its infrastructure. The platform enables the safe use of content created in Web 2.0 applications, allowing its protected publication in the infrastructure controlled by the HEI, thus contributing to the adaptation of the L&T paradigm to the Bologna process.
Abstract:
The Smart Grid environment allows the integration of resources of small and medium players through the use of Demand Response programs. Despite the clear advantages for the grid, the integration of consumers must be done carefully. This paper proposes a system that simulates small and medium players. The system is essential for producing tests and studies on the active participation of small and medium players in the Smart Grid environment. Compared with similar systems, its advantages comprise the capability to deal with three types of loads – virtual, contextual and real. It can host several load optimisation modules and it can run in real time. The use of modules and the dynamic configuration of the player result in a system that can represent different players in an easy and independent way. This paper describes the system and all its capabilities.
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, where hostile conditions such as the absence of Global Positioning Systems, extreme lighting variations, and geometrically smooth surfaces may be expected. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant, a limitation that is a consequence of the scan matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve this, we combine state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data we were able to overcome the major problem of MonoSLAM implementations, known as scale factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted with the SIFT algorithm and inserted directly into the EKF mechanism according to the Inverse Depth Parametrization. Wrong frame-to-frame feature matches were rejected through 1-Point RANSAC (Random Sample Consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
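The role of inertial measurements in the EKF prediction step can be illustrated with a one-dimensional toy filter: the measured acceleration drives the state forward in metric units, which is what resolves the monocular scale ambiguity. This is a didactic sketch, not the paper's MonoSLAM implementation:

```python
# Minimal 1-D EKF prediction step driven by an inertial (accelerometer) input.
# Didactic sketch only; the paper's filter operates on a full 6DOF state.

def ekf_predict(x, P, a_meas, dt, q_acc):
    """x = (position, velocity); P = 2x2 covariance; a_meas in m/s^2."""
    p, v = x
    # Constant-velocity motion model, driven by the measured acceleration.
    p_new = p + v * dt + 0.5 * a_meas * dt**2
    v_new = v + a_meas * dt
    # Covariance propagation P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
    # and Q = q_acc * G G^T, where G maps acceleration noise into the state.
    g0, g1 = 0.5 * dt**2, dt
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q_acc * g0 * g0,
           P[0][1] + dt * P[1][1] + q_acc * g0 * g1],
          [P[1][0] + dt * P[1][1] + q_acc * g0 * g1,
           P[1][1] + q_acc * g1 * g1]]
    return (p_new, v_new), Pp

x, P = (0.0, 1.0), [[0.1, 0.0], [0.0, 0.1]]          # 1 m/s initial velocity
x, P = ekf_predict(x, P, a_meas=0.2, dt=0.1, q_acc=0.05)
```

Because the accelerometer reading is in metric units, the predicted position is metric too; a vision-only filter would recover the same trajectory only up to an unknown scale factor.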
Abstract:
Due to their detrimental effects on human health, scientific interest in ultrafine particles (UFP) has been increasing, but available information is far from comprehensive. Children, who represent one of the most susceptible subpopulations, spend the majority of their time in schools and homes. Thus, the aims of this study were to (1) assess indoor levels of particle number concentrations (PNC) in the ultrafine and fine (20–1000 nm) range in school and home environments and (2) compare the respective indoor dose rates for 3- to 5-yr-old children. Indoor particle number concentrations in the range of 20–1000 nm were consecutively measured over 56 days at two preschools (S1 and S2) and three homes (H1–H3) situated in Porto, Portugal. At both preschools different indoor microenvironments, such as classrooms and canteens, were evaluated. The results showed that total mean indoor PNC, as determined for all indoor microenvironments, were significantly higher at S1 than S2. At homes, indoor levels of PNC, with means ranging between 1.09 × 10⁴ and 1.24 × 10⁴ particles/cm³, were 10–70% lower than the total indoor means of the preschools (1.32 × 10⁴ to 1.84 × 10⁴ particles/cm³). Nevertheless, estimated dose rates of particles were 1.3- to 2.1-fold higher at homes than at preschools, mainly due to the longer period of time spent at home. Daily activity patterns of 3- to 5-yr-old children significantly influenced overall dose rates of particles. Therefore, future studies focusing on health effects of airborne pollutants need to account for children's exposures in different microenvironments, such as homes, schools, and transportation modes, in order to obtain an accurate representation of children's overall exposure.
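A dose-rate comparison of the kind reported above is typically computed as concentration times inhalation rate times exposure time. The sketch below uses invented values (the inhalation rate and hours are assumptions, not the study's data) to show why a lower home PNC can still yield a higher dose:

```python
# Illustrative particle dose estimate: dose = PNC x inhalation rate x time.
# All values below are hypothetical, chosen only to show the calculation shape.

INHALATION_RATE_M3_H = 0.5   # assumed inhalation rate for a young child, m^3/h

def daily_dose(pnc_particles_cm3, hours):
    """Particles inhaled per day in one microenvironment."""
    cm3_per_m3 = 1e6
    return pnc_particles_cm3 * cm3_per_m3 * INHALATION_RATE_M3_H * hours

home = daily_dose(1.1e4, hours=15)       # lower PNC, but many hours at home
preschool = daily_dose(1.5e4, hours=6)   # higher PNC, fewer hours
ratio = home / preschool
```

Even though the home concentration is lower, the longer exposure time dominates, so the home dose comes out higher, consistent with the 1.3- to 2.1-fold pattern the study reports.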
Abstract:
EMC2 finds solutions for dynamic adaptability in open systems. It provides handling of mixed-criticality multicore applications in real-time conditions, with scalability and utmost flexibility, and full-scale deployment and management of integrated tool chains through the entire lifecycle.