972 results for real-effort task
Abstract:
As colleges and universities make increasing global engagement a top institutional priority, many have struggled to manage rising levels of international activity. Council research finds that the challenge lies not in convincing faculty to expend more effort but instead in reducing the level of effort required by faculty who are already interested in promoting international activities. This study provides detailed case studies and toolkits for administrative core competencies for increased global engagement. Chapter 2 (page 39) details strategies to promote faculty-led study abroad programs, which constitute the fastest growing study abroad experience. Chapter 5 (page 111) outlines recommendations to build strategic international partnerships that engage the entire campus.
Abstract:
This study aimed to develop a model for an alternative audit program capable of guiding a tax audit in the area of corporate income tax, for commercial and/or industrial companies taxed on the basis of actual taxable profit (lucro real) at a rate of 35%. Through exploratory field research, we gathered data from tax auditors working in both the public and private sectors. From the data collected, from elements found in the literature on the subject, and from our own professional experience, we systematized the available knowledge and proposed the program model. We then put the model to the test in actual tax audits, concluding that this first and principal audit working paper was, albeit with restrictions, useful and necessary for the efficient execution of the work of auditors active in the field: it was able to set standards of conduct, standardize the team's procedures, and help transmit the knowledge acquired.
Abstract:
The application of ergonomics in product design is essential to accessibility and usability, and the development of manual devices should be based on ergonomic principles. Effort-perception analysis is an essential approach to understanding the physical and subjective aspects of the interface. The objective of the present study was to analyze perceived effort during a simulated task with different door handles, performed by Portuguese subjects of both genders and different ages. This cross-sectional study complied with ethical requirements. A total of 180 subjects of both genders, divided into three age groups, participated, and five door handles with different shapes were evaluated. A subjective five-level numeric rating scale was used to assess effort, and the non-parametric Friedman test was applied for statistical analysis. The results showed no significant differences in perceived effort between door handles "A" and "B", "A" and "D", or "D" and "C"; door handle "E" presented the lowest values of all. In general, there is an inverse relationship between the results of biomechanical studies and the perceived effort for the same task. This shows that door-handle design directly influences these two variables and can affect the accessibility and usability of this kind of product.
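The Friedman test named in the abstract above compares repeated ratings of the same subjects across several conditions. A minimal sketch with synthetic data (not the study's ratings; the five handles and the lower-rated handle "E" are modeled after the abstract's description):

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_subjects = 180  # matches the abstract's sample size; ratings are synthetic

# Hypothetical 1-5 effort ratings; handle "E" is systematically rated lower.
handles = {
    "A": rng.integers(2, 5, n_subjects),
    "B": rng.integers(2, 5, n_subjects),
    "C": rng.integers(3, 6, n_subjects),
    "D": rng.integers(2, 5, n_subjects),
    "E": rng.integers(1, 3, n_subjects),
}

# Friedman test: ranks each subject's ratings across the five handles.
stat, p = friedmanchisquare(*handles.values())
print(f"Friedman chi-square = {stat:.1f}, p = {p:.3g}")
```

With one handle clearly rated lower, the test rejects the hypothesis that all handles are perceived equally; pairwise post-hoc comparisons (as in the abstract) would follow separately.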
Abstract:
Antenna arrays can provide high, controllable directivity, which makes them suitable for radio base stations, radar systems, and point-to-point or satellite links. Optimizing an array design is usually a hard task because of the non-linear, multiobjective nature of the problem, which calls for numerical techniques such as genetic algorithms. Therefore, in order to optimize the electronic control of the antenna-array radiation pattern through real-coded genetic algorithms, a numerical tool was developed that can position the array's major lobe, reduce side-lobe levels, cancel interfering signals arriving from specific directions, and improve the antenna's radiation performance. This was accomplished by combining antenna-theory concepts with optimization methods, mainly genetic algorithms; the resulting tool features creative gene codification and crossover rules, which is one of the most important contributions of this work. The efficiency of the developed genetic-algorithm tool was tested and validated in several antenna and propagation applications. It was observed that the numerical results meet the specified requirements, demonstrating the tool's ability to handle the problems considered, as well as great potential for application in future work.
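A generic real-coded genetic algorithm of the kind mentioned above can be sketched on a toy version of the problem: optimizing the phase excitations of a uniform linear array so the main lobe points toward a target direction. The array model, blend crossover, and all parameters here are illustrative assumptions, not the work's own codification or crossover rules:

```python
import numpy as np

N, D = 8, 0.5            # elements and spacing in wavelengths (assumed)
TARGET = np.deg2rad(30)  # desired steering angle

def array_factor(phases, theta):
    # Magnitude of the array factor of an N-element uniform linear array.
    n = np.arange(N)
    return np.abs(np.sum(np.exp(1j * (n * 2 * np.pi * D * np.sin(theta) + phases))))

def fitness(phases):
    return array_factor(phases, TARGET)  # maximize gain at the target angle

rng = np.random.default_rng(1)
pop = rng.uniform(-np.pi, np.pi, (40, N))  # real-coded individuals: phase vectors
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]        # elitist selection
    i, j = rng.integers(0, 20, (2, 20))
    alpha = rng.uniform(0, 1, (20, 1))
    children = alpha * parents[i] + (1 - alpha) * parents[j]  # blend crossover
    children += rng.normal(0, 0.1, children.shape)            # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"gain at target: {fitness(best):.2f} (maximum possible is {N})")
```

Side-lobe reduction and interference nulling would enter the same loop as extra penalty terms in the fitness function.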
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The objective of this study was to characterize the relationship between physical fitness level, performance, and metabolic demand in soccer players during actual match play. Six professional soccer players, with mean age 20.8 ± 2.6 years (17-25), weight 70.4 ± 7.5 kg (63-81.3), and height 173.3 ± 9.7 cm (166-188), underwent field tests of physical fitness and cinematographic analysis during a match. The field tests included blood lactate measurements. The alactic metabolic pathway was evaluated by means of five 30 m maximal-speed runs with one minute of passive rest between runs; lactate concentrations were measured at the 1st, 3rd, and 5th minute after the last run. To determine the anaerobic threshold, three 1,200 m runs were performed at 80, 85, and 90% of the maximal speed for that distance, with 15 minutes of passive rest between runs; blood lactate was sampled at the 1st, 3rd, and 5th minute of passive recovery after each run. The players were filmed individually throughout the match, and lactate concentrations were measured before the match, at half-time, and at the end, for analysis of the energy and metabolic demands, respectively.
The following results were observed: 1) the anaerobic threshold running speed, corresponding to a blood lactate concentration of 4 mmol·L⁻¹, was 268 ± 28 m·min⁻¹, or 16.1 ± 1.6 km·h⁻¹; 2) the mean speed and maximal lactate concentration in the 30 m runs were 6.9 ± 0.2 m·s⁻¹ and 4.5 ± 1.0 mmol·L⁻¹, respectively; 3) the total distance covered was 10,392 ± 849 m, of which 5,446 ± 550 m in the first half and 4,945 ± 366 m in the second; 4) mean blood lactate concentrations were 1.58 ± 0.37, 4.5 ± 0.42, and 3.46 ± 1.54 mmol·L⁻¹ before the match, at half-time, and at the end of the match, respectively; and 5) the mean total distance covered by the end of the match was slightly, but not significantly, greater for midfielders (10,910 ± 121 m) than for forwards (10,377 ± 224 m) and defenders (9,889 ± 102 m). There was a negative correlation (r = -0.84; p < 0.05) between the anaerobic threshold (268 ± 28 m·min⁻¹, or 16.1 ± 1.6 km·h⁻¹) and the blood lactate concentration in the first half of the match (4.5 ± 0.4 mmol·L⁻¹). The results therefore suggest that aerobic capacity is an important determinant of the ability to sustain the long duration of a match and to recover more quickly from high-intensity efforts, reflected in lower blood lactate concentrations at the end of the first and second halves.
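The anaerobic threshold speed from the three 1,200 m runs described above can be obtained by interpolating blood lactate against running speed and reading off the speed at 4 mmol·L⁻¹. The numbers below are illustrative, not the study's raw data:

```python
import numpy as np

# Hypothetical submaximal test results: speed at 80, 85, 90% efforts and
# the lactate concentration measured after each run.
speed = np.array([14.0, 15.2, 16.5])    # km/h
lactate = np.array([2.1, 3.2, 4.9])     # mmol/L (must be increasing)

# Linear interpolation of speed as a function of lactate concentration:
# the speed at which lactate crosses the 4 mmol/L reference.
threshold_speed = np.interp(4.0, lactate, speed)
print(f"anaerobic threshold: {threshold_speed:.1f} km/h")
```

More elaborate protocols fit an exponential curve to more than three points, but the fixed 4 mmol·L⁻¹ criterion with interpolation matches the procedure the abstract describes.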
Abstract:
Managing the great complexity of an enterprise system, given the number of entities and the variety of decisions and processes to be controlled, is a very hard task, because it involves integrating the enterprise's operations with its information systems. Moreover, enterprises are in a constant process of change, reacting to a dynamic and competitive environment in which their business processes are continually altered. Transforming business processes into models makes it possible to analyze and redefine them, and the use of computing tools can minimize the cost and risk of an enterprise integration design. This article argues for the need to model processes in order to define enterprise business requirements more precisely, and for the adequate use of modeling methodologies. Following these principles, the paper addresses process modeling in the domain of demand forecasting as a practical example. The demand-forecasting domain model was built from a theoretical review; the resulting models, treated as a reference model, are transformed into information systems and are intended to provide a generic solution and a starting point for better forecasting practice. The proposal is to match the information system to the real needs of the enterprise, enabling it to obtain and track better results while minimizing design errors, time, money, and effort. The enterprise process models were built with the CIMOSA language, and UML was used for the supporting information system.
Abstract:
Includes bibliography
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
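The dynamic-field machinery the abstract refers to can be sketched as a one-dimensional neural field: a transient Gaussian input builds a localized activation peak that sustains itself after the input is removed, the field analogue of spatial working memory in a recall task. All parameters here are illustrative assumptions, not the published model's values:

```python
import numpy as np

n, dt, tau, h = 101, 1.0, 10.0, -5.0      # field size, time step, resting level
x = np.arange(n)

# Interaction kernel: local excitation minus broad inhibition.
kernel = 8.0 * np.exp(-((x - n // 2) ** 2) / (2 * 4.0**2)) - 1.0
u = np.full(n, h)                          # field starts at resting level
stim = 6.0 * np.exp(-((x - 50) ** 2) / (2 * 3.0**2))  # localized input at site 50

def step(u, inp):
    f = 1.0 / (1.0 + np.exp(-u))           # sigmoidal output nonlinearity
    interaction = np.convolve(f, kernel, mode="same")
    return u + dt / tau * (-u + h + interaction + inp)

for t in range(200):
    u = step(u, stim if t < 100 else 0.0)  # input removed halfway through

print(f"peak activation after input removal: {u.max():.2f} at site {int(np.argmax(u))}")
```

Narrowing the kernel and the input widths is the kind of small quantitative change the abstract invokes to model increasing precision over development.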
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, and pressure profiles are designed so that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is today plays an important role in achieving ever better performance, effectiveness, and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators, and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now the task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. This thesis addresses the problem of trajectory reconstruction in the case of an event-triggered communication system. The most important feature a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and moving on to systems made up of many slaves and one master, or many masters and one slave, the thesis analyzes the problems of profile reconstruction, preservation of temporal properties, and synchronization of different profiles in networks adopting an event-triggered communication system. These networks are characterized by the fact that a common notion of global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the solution based on phase-locked loops proposed for the basic master-slave case is extended to handle the other configurations.
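The phase-locked-loop idea for the basic master-slave case can be sketched in discrete time: the slave extrapolates the master's position profile with its current rate estimate between sparse events, and a PI loop filter re-locks the estimate whenever an event arrives. Gains, rates, and the fixed event interval are illustrative assumptions:

```python
rate = 2.5            # master axis velocity, unknown to the slave
T = 0.05              # event interval (irregular in general; fixed here)
kp, ki = 2.0, 1.0     # PI loop-filter gains (chosen for stable locking)

est_pos = est_rate = integ = 0.0
t = 0.0
for _ in range(200):
    est_pos += est_rate * T           # free-running extrapolation between events
    t += T
    err = rate * t - est_pos          # position error observed at the event
    integ += err                      # integral term of the loop filter
    est_rate = kp * err + ki * integ  # PI output becomes the new local rate

print(f"final position error: {err:.2e}, rate estimate: {est_rate:.3f}")
```

At lock, the error vanishes and the integral term alone carries the master's rate, so the slave keeps tracking even through missed events.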
Abstract:
Context-aware computing is currently considered the most promising approach to overcoming information overload and speeding up access to relevant information and services. Context-awareness may be derived from many sources, including user profile and preferences, network information, and sensor analysis; it usually relies on the ability of computing devices to interact with the physical world, i.e. with the natural and artificial objects hosted within the "environment". Ideally, context-aware applications should not be intrusive and should react according to the user's context with minimum user effort. Context is an application-dependent multidimensional space, and location has been an important part of it from the very beginning: location can guide applications in providing the information or functions most appropriate for a specific position. Hence location systems play a crucial role. There are several technologies and systems for computing location to varying degrees of accuracy, tailored to specific space models, i.e. indoor or outdoor, structured or unstructured spaces. The research challenge faced by this thesis is pedestrian positioning in heterogeneous environments, with a focus on pedestrian identification, localization, orientation, and activity recognition. This research was mainly carried out within the "mobile and ambient systems" workgroup of EPOCH, a 6FP NoE on the application of ICT to Cultural Heritage; applications in Cultural Heritage sites were therefore the main target of the context-aware services discussed. Cultural Heritage sites are significant test-beds for context-aware computing for many reasons: for example, building a smart environment in museums or protected sites is challenging, because localization and tracking are usually based on technologies that are difficult to hide or harmonize within the environment.
It is therefore expected that the experience gained in this research may also be useful in domains other than Cultural Heritage. This work presents three different approaches to pedestrian identification, positioning, and tracking: pedestrian navigation by means of a wearable inertial sensing platform, assisted by a vision-based tracking system for initial settings and real-time calibration; pedestrian navigation by means of a wearable inertial sensing platform augmented with GPS measurements; and pedestrian identification and tracking combining the vision-based tracking system with WiFi localization. The proposed localization systems have mainly been used to enhance Cultural Heritage applications by providing information and services that depend on the user's actual context, in particular on the user's location.
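The inertial pedestrian-navigation approaches above all reduce to a dead-reckoning core: each detected step advances the position estimate by a stride length along the current heading. The step events, headings, and stride length below are synthetic stand-ins for what a real system would derive from accelerometer and gyroscope data:

```python
import math

stride = 0.7   # metres per step (assumed constant; often estimated per user)
x = y = 0.0

# Synthetic walk: (heading in radians, number of steps) for each leg.
legs = [(0.0, 10), (math.pi / 2, 5), (math.pi, 10)]
for heading, steps in legs:
    for _ in range(steps):
        # Dead-reckoning update: advance one stride along the heading.
        x += stride * math.cos(heading)
        y += stride * math.sin(heading)

print(f"estimated position: ({x:.2f}, {y:.2f}) m")
```

Because heading and stride errors accumulate without bound, this core is exactly what the thesis's vision, GPS, and WiFi measurements periodically correct.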
Abstract:
In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring, or onerous. Most of the robots in the world are of this type, and they can be found in the automotive, medical, manufacturing, and space industries. A robot, then, is a system comprising sensors, control systems, manipulators, power supplies, and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of skills for interacting with the surrounding environment, including the ability to grasp objects. To perform this task, the robot needs to sense the environment and acquire information about the object, the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine-learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire, considering not only the physical attributes of the object but also the effect they want to obtain. This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This representation has several advantages: it takes into account the uncertainty of the real world, allowing sensor noise to be handled; it encodes a notion of causality; and it provides a unified network for learning. Since the network currently implemented is based on human expert knowledge, it is very interesting to implement an automated method for learning its structure, because as more tasks and object features are introduced, a complex network designed solely from human expert knowledge can become unreliable. Since structure-learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the expert-modeled network, implement a feasible structure-learning approach, and compare the results with the network designed by the expert, in order to possibly enhance it.
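Score-based structure learning of the kind discussed above works by comparing candidate network structures with a penalized likelihood score. A minimal sketch on two synthetic binary variables, where a model with the dependency edge should outscore the empty structure under BIC (the variables and data are hypothetical, not the thesis's sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
c = rng.random(n) < 0.5                                   # cause: fair coin
e = np.where(c, rng.random(n) < 0.9, rng.random(n) < 0.1)  # effect depends on c

def loglik_bernoulli(x):
    # Maximum-likelihood log-likelihood of a Bernoulli sample.
    p = x.mean()
    return x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1 - p)

def bic_independent():
    # Structure with no edge: two marginal parameters, penalty (2/2) ln n.
    return loglik_bernoulli(c) + loglik_bernoulli(e) - 2 / 2 * np.log(n)

def bic_edge():
    # Structure C -> E: P(C) plus P(E | C), three parameters in total.
    ll = loglik_bernoulli(c)
    for val in (True, False):
        ll += loglik_bernoulli(e[c == val])
    return ll - 3 / 2 * np.log(n)

print(f"BIC independent: {bic_independent():.1f}, BIC with edge: {bic_edge():.1f}")
```

A full structure learner wraps a score like this in a search (e.g. hill climbing over edge additions, deletions, and reversals), which is the kind of automated approach the thesis compares against the expert-designed network.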
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, showing comparable scheduling performance. The principles underpinning the operation of the framework, originally designed for symmetric multiprocessors, have been extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migration, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure called the addressable binary heap (ABH).
The ABH, conceptually a pointer-based implementation of a binary heap, shows very interesting average-case and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
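The core idea of an addressable heap, as opposed to a plain binary heap, is that entries carry stable handles, so a timer can be re-armed or cancelled in O(log n) without searching the heap. The sketch below is an illustrative array-backed reconstruction of that idea, not the dissertation's pointer-based ABH:

```python
class AddressableHeap:
    """Min-heap whose entries are addressable by handle (e.g. a timer id)."""

    def __init__(self):
        self._heap = []   # list of [key, handle] pairs, heap-ordered by key
        self._pos = {}    # handle -> current index in _heap

    def push(self, handle, key):
        self._heap.append([key, handle])
        self._pos[handle] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)

    def decrease_key(self, handle, key):
        # O(log n) re-arm: find the entry via its handle, no linear scan.
        i = self._pos[handle]
        self._heap[i][0] = key
        self._sift_up(i)

    def pop_min(self):
        top = self._heap[0]
        last = self._heap.pop()
        del self._pos[top[1]]
        if self._heap:
            self._heap[0] = last
            self._pos[last[1]] = 0
            self._sift_down(0)
        return top[1], top[0]

    def _swap(self, i, j):
        h = self._heap
        h[i], h[j] = h[j], h[i]
        self._pos[h[i][1]] = i
        self._pos[h[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self._heap[i][0] < self._heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            small = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self._heap[c][0] < self._heap[small][0]:
                    small = c
            if small == i:
                return
            self._swap(i, small)
            i = small

timers = AddressableHeap()
timers.push("t1", 100)
timers.push("t2", 50)
timers.push("t3", 75)
timers.decrease_key("t1", 10)   # re-arm timer t1 to an earlier expiry
print(timers.pop_min())          # -> ('t1', 10)
```

In a tick-less timekeeping scenario, `pop_min` yields the next expiry to program into the hardware timer, while `decrease_key` handles timer re-arming without rebuilding the queue.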
Abstract:
Hybrid vehicles (HVs), which combine a conventional ICE-based powertrain with a secondary energy source that can also be converted into mechanical power, are a well-established way to substantially reduce both the fuel consumption and the tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g. mechanical, electric, hydraulic, and pneumatic hybrid vehicles. Among these, the electric (HEV) and mechanical (HSF-HV) parallel hybrid configurations are examined throughout this thesis. To fully exploit an HV's potential, the hybrid components to be installed must be properly chosen and sized, while an effective supervisory control must be adopted to coordinate how the different power sources are managed and how they interact. Real-time controllers can be derived starting from the optimal benchmark results obtained offline. However, the application of these powerful instruments requires a simplified yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially as the complexity of the system grows, as in the HSF-HV system assessed in this thesis. The first task of the dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It is shown how the chosen modeling paradigm affects the quality of the solution and the computational effort required, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel HEV. The emissions level obtained under real-world driving conditions is substantially higher than the usual result obtained in a homologation cycle.
For this reason, the target of the corresponding section of the thesis is an on-line control strategy capable of guaranteeing that the desired emissions level is respected, while minimizing fuel consumption and avoiding excessive battery depletion.
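The Dynamic Programming benchmark mentioned above can be sketched on a toy energy-management problem: at each step of a short drive cycle, split the power demand between engine and battery so total fuel is minimized subject to the battery's state of charge. The demands, grids, and costs are illustrative assumptions, not a vehicle model:

```python
import numpy as np

demand = [20, 40, 30, 10, 25]   # kW requested at each step of a toy cycle
soc_grid = np.arange(0, 11)     # battery state of charge, 0..10 units
FUEL_PER_KW = 0.08              # fuel cost per kW supplied by the engine

# cost_to_go[k][s]: minimal fuel from step k onward starting at SOC index s.
cost_to_go = np.zeros((len(demand) + 1, len(soc_grid)))
policy = np.zeros((len(demand), len(soc_grid)), dtype=int)  # optimal battery kW

# Backward recursion over the cycle.
for k in range(len(demand) - 1, -1, -1):
    for s in soc_grid:
        best, arg = np.inf, 0
        for batt_kw in range(0, 21, 10):   # battery share: 0, 10, or 20 kW
            use = min(batt_kw, demand[k])
            ds = use // 10                  # SOC units drained by this choice
            if s - ds < 0:
                continue                    # infeasible: battery too low
            cost = FUEL_PER_KW * (demand[k] - use) + cost_to_go[k + 1][s - ds]
            if cost < best:
                best, arg = cost, use
        cost_to_go[k][s] = best
        policy[k][s] = arg

print(f"optimal fuel from full battery: {cost_to_go[0][10]:.2f}")
```

The resulting `policy` table is exactly the kind of offline benchmark from which a real-time (causal) supervisory controller can then be derived.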