979 results for Distributed knowledge
Abstract:
On 19 and 20 October 2006, the Research Centre on Enterprise and Work Organisation (IET) organised the first international conference on “Foresight Studies on Work in the Knowledge Society”. It took place at the auditorium of the new Library of FCT-UNL and had the support of the research project “CodeWork@VO” (financed by FCT-MCTES and co-ordinated by INESC, Porto). The conference related to the European research project “Work Organisation and Restructuring in the Knowledge Society” (WORKS), which is financed by the European Commission. The main objective of the conference was to analyse and discuss research findings on trends in work structures in the knowledge society, and to debate new work organisation models and new forms of work supported by ICT.
Abstract:
In recent years, human society has evolved from the “industrial society age” into the “knowledge society age”. This means that knowledge media support has migrated from “pen and paper” to computer-based Information Systems. As a result, Ergonomics has assumed increasing importance as a science/technology that deals with the problem of adapting work to people, namely in terms of Usability. This paper presents some relevant Ergonomics, Usability and User-centred Design concepts regarding Information Systems.
Abstract:
Due to the growing complexity and adaptability requirements of real-time embedded systems, which often exhibit unrestricted inter-dependencies among supported services and user-imposed quality constraints, it is increasingly difficult to optimise the level of service of a dynamic task set within a useful and bounded time. This is even more difficult when trying to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand. This paper proposes an iterative refinement approach to a service’s QoS configuration that takes into account services’ inter-dependencies and quality constraints, trading off the quality of the achieved solution against the cost of computation. Extensive simulations demonstrate that the proposed anytime algorithm quickly finds a good initial solution and effectively optimises the rate at which the quality of the current solution improves as the algorithm is given more time to run. The added benefits of the proposed approach clearly surpass its reduced overhead.
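The anytime behaviour described above can be pictured with a small sketch (the service set, utility/cost figures and upgrade policy are entirely hypothetical, not the paper's algorithm): start from a cheap feasible configuration, then iteratively upgrade service levels while a resource budget holds, so that a valid best-so-far solution is available whenever the computation is interrupted.

```python
# Hypothetical anytime QoS sketch: each service offers discrete quality
# levels given as (utility, cost) pairs sorted by cost; a shared budget
# limits total cost. A cheap seed configuration is improved iteratively,
# and `best` is a valid answer at any interruption point.

def anytime_qos(levels, budget, max_iters=100):
    """levels: per-service list of (utility, cost) tuples, cheapest first."""
    config = [0] * len(levels)  # initial solution: cheapest level everywhere

    def utility(cfg):
        return sum(levels[s][l][0] for s, l in enumerate(cfg))

    def cost(cfg):
        return sum(levels[s][l][1] for s, l in enumerate(cfg))

    best = list(config)
    for _ in range(max_iters):  # interruptible: best is always feasible
        improved = False
        for s in range(len(levels)):
            if config[s] + 1 < len(levels[s]):
                trial = list(config)
                trial[s] += 1  # try upgrading service s by one level
                if cost(trial) <= budget and utility(trial) > utility(best):
                    config = trial
                    best = list(trial)
                    improved = True
        if not improved:
            break
    return best, utility(best)
```

A run over two services with a budget of 5 upgrades both services once before the budget blocks further refinement.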
Abstract:
OBJECTIVE To evaluate the level of HIV/AIDS knowledge among men who have sex with men in Brazil using the latent trait model estimated by Item Response Theory. METHODS Multicenter, cross-sectional study, carried out in ten Brazilian cities between 2008 and 2009. Adult men who have sex with men were recruited (n = 3,746) through Respondent Driven Sampling. HIV/AIDS knowledge was ascertained through ten statements by face-to-face interview and latent scores were obtained through two-parameter logistic modeling (difficulty and discrimination) using Item Response Theory. Differential item functioning was used to examine each item characteristic curve by age and schooling. RESULTS Overall, the HIV/AIDS knowledge scores using Item Response Theory did not exceed 6.0 (scale 0-10), with mean and median values of 5.0 (SD = 0.9) and 5.3, respectively, with 40.7% of the sample with knowledge levels below the average. Some beliefs still exist in this population regarding the transmission of the virus by insect bites, by using public restrooms, and by sharing utensils during meals. With regard to the difficulty and discrimination parameters, eight items were located below the mean of the scale and were considered very easy, and four items presented very low discrimination parameter (< 0.34). The absence of difficult items contributed to the inaccuracy of the measurement of knowledge among those with median level and above. CONCLUSIONS Item Response Theory analysis, which focuses on the individual properties of each item, allows measures to be obtained that do not vary or depend on the questionnaire, which provides better ascertainment and accuracy of knowledge scores. Valid and reliable scales are essential for monitoring HIV/AIDS knowledge among the men who have sex with men population over time and in different geographic regions, and this psychometric model brings this advantage.
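The two-parameter logistic model used to obtain the latent scores has a simple closed form; a minimal sketch follows (parameter values are illustrative, not the study's estimates): the probability of a correct answer depends on the respondent's latent knowledge theta, the item's discrimination a, and its difficulty b.

```python
import math

# Two-parameter logistic (2PL) IRT model: probability that a respondent
# with latent trait `theta` answers an item correctly, given the item's
# discrimination `a` and difficulty `b`. Values below are illustrative.

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5; a low-discrimination item
# (e.g. a < 0.34, as flagged in the study) has a nearly flat curve and
# separates respondents poorly.
```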
Abstract:
OBJECTIVE To analyze Brazilian literature on body image and the theoretical and methodological advances that have been made. METHODS A detailed review was undertaken of the Brazilian literature on body image, selecting published articles, dissertations and theses from the SciELO, SCOPUS, LILACS and PubMed databases and the CAPES thesis database. Google Scholar was also used. There was no start date for the search, which used the following search terms: “body image” AND “Brazil” AND “scale(s)”; “body image” AND “Brazil” AND “questionnaire(s)”; “body image” AND “Brazil” AND “instrument(s)”; “body image” limited to Brazil and “body image”. RESULTS The majority of the available measures were intended for use with college students, with half of them evaluating satisfaction/dissatisfaction with the body. Females and adolescents of both sexes were the most studied populations. There has been a significant increase in the number of available instruments. Nevertheless, numerous published studies have used non-validated instruments, with much confusion in the use of the appropriate terms (e.g., perception, dissatisfaction, distortion). CONCLUSIONS Much more is needed to understand body image within the Brazilian population, especially in terms of evaluating different age groups and diversifying the components/dimensions assessed. However, interest in this theme is increasing, and important steps have been taken in a short space of time.
Abstract:
The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm allow the design of novel scalable coding solutions with improved error robustness and low encoding complexity while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimation of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both the encoder and the decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
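In distributed video coding, the mismatch between the original and side-information residues is commonly modelled as Laplacian noise; a minimal sketch (illustrative, not the paper's correlation model) of estimating the Laplacian parameter from sample residues:

```python
import math

# For a zero-mean Laplacian with parameter alpha, the variance equals
# 2 / alpha^2, so alpha can be estimated from the sample variance of the
# difference between the original and side-information (SI) residues.
# Residue values here stand in for transform-coefficient differences.

def laplacian_alpha(orig_residue, si_residue):
    diff = [o - s for o, s in zip(orig_residue, si_residue)]
    var = sum(d * d for d in diff) / len(diff)
    return math.sqrt(2.0 / var)  # Laplacian: variance = 2 / alpha^2
```

A larger alpha means a tighter correlation between the two residues, and hence fewer bits needed from the enhancement-layer encoder.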
Abstract:
Dynamical systems theory is used in this work as a theoretical language and tool to design a distributed control architecture for a team of three robots that must transport a large object and simultaneously avoid collisions with either static or dynamic obstacles. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavior variables, heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of the behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design, the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
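The attractor-dynamics idea can be illustrated in a few lines (the parameters and the exact repeller form are illustrative, not the paper's model): the heading direction phi relaxes to an attractor at the target direction, while each obstacle direction contributes a repeller of limited angular range, and simple Euler integration tracks the resulting stable state.

```python
import math

# Sketch of attractor dynamics for a heading direction `phi`:
# an attractor at the target direction psi_tar and a repeller at each
# obstacle direction. All gains and ranges below are illustrative.

def heading_rate(phi, psi_tar, obstacles, lam_tar=2.0):
    # -lambda * sin(phi - psi_tar) has a stable fixed point at psi_tar.
    rate = -lam_tar * math.sin(phi - psi_tar)
    for psi_obs, strength, sigma in obstacles:
        # Repeller: pushes phi away from the obstacle direction; the
        # Gaussian factor limits its angular range of influence.
        d = phi - psi_obs
        rate += strength * d * math.exp(-d * d / (2.0 * sigma ** 2))
    return rate

def simulate(phi0, psi_tar, obstacles, dt=0.01, steps=2000):
    phi = phi0
    for _ in range(steps):  # Euler integration of the behavioral dynamics
        phi += dt * heading_rate(phi, psi_tar, obstacles)
    return phi
```

With no obstacles the heading converges to the target direction, which is the sense in which the behavioral variable stays close to its attractor.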
Abstract:
In this paper, dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of two robots that must transport a large object and simultaneously avoid collisions with obstacles (either static or dynamic). This work extends the previous work with two robots (see [1] and [5]). Here, however, we demonstrate that it is possible to simplify the architecture presented in [1] and [5] and reach an equally stable global behavior. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavior variables, heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of a behavioral dynamics. For each robot, these attractors are combined into a vector field that governs the behavior. By design, the parameters are tuned so that the behavioral variables are always very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
Abstract:
Dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for teams of mobile robots that must transport a large object and simultaneously avoid collisions with (either static or dynamic) obstacles. Here we demonstrate, in simulations and in implementations on real robots, that it is possible to simplify the architectures presented in previous work and to extend the approach to teams of n robots. The robots have no prior knowledge of the environment. The motion of each robot is controlled by a time series of asymptotically stable states. The attractor dynamics permits the integration of information from various sources in a graded manner. As a result, the robots show a strikingly smooth and stable team behaviour.
Abstract:
The scarcity and diversity of resources among the devices of heterogeneous computing environments may affect their ability to perform services with specific Quality of Service constraints, particularly in dynamic distributed environments where the characteristics of the computational load cannot always be predicted in advance. Our work addresses this problem by allowing resource-constrained devices to cooperate with more powerful neighbour nodes, opportunistically taking advantage of globally distributed resources and processing power. Rather than assuming that the dynamic configuration of this cooperative service executes until it computes its optimal output, the paper proposes an anytime approach that has the ability to trade off deliberation time against the quality of the solution. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves at each iteration, with an overhead that can be considered negligible.
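The cooperation step can be pictured with a small sketch (the names and the best-fit policy are invented for illustration, not the paper's configuration algorithm): a resource-constrained node decides whether to run a task locally or offload it to the neighbour whose spare capacity best fits the demand.

```python
# Hypothetical offloading decision for a resource-constrained node:
# prefer local execution when capacity allows; otherwise pick the
# neighbour with the smallest spare capacity that still fits the task
# (best-fit), leaving larger neighbours free for heavier work.

def choose_executor(task_demand, local_spare, neighbours):
    """neighbours: dict mapping neighbour name -> spare capacity."""
    candidates = {n: s for n, s in neighbours.items() if s >= task_demand}
    if local_spare >= task_demand or not candidates:
        return "local"
    return min(candidates, key=candidates.get)  # best-fit neighbour
```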
Abstract:
We propose a low complexity technique to generate amplitude-correlated time-series with a Nakagami-m distribution and phase-correlated Gaussian-distributed time-series, which is useful for the simulation of ionospheric scintillation effects in GNSS signals. To generate a complex scintillation process, the technique requires only the knowledge of the parameters S4 (scintillation index) and σφ (phase standard deviation), besides the definition of models for the amplitude and phase power spectra. The concatenation of two nonlinear memoryless transformations is used to produce a Nakagami-distributed amplitude signal from a Gaussian autoregressive process.
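The two-memoryless-transform idea can be sketched as follows (illustrative only: shown for the m = 1 special case of the Nakagami distribution, where the inverse CDF is closed-form; the general m would need a numerical Nakagami quantile function, and the AR(1) process here stands in for a spectrally shaped Gaussian process):

```python
import numpy as np
from math import erf, sqrt

# An AR(1) Gaussian process supplies the temporal correlation; two
# memoryless transforms then map each sample to a Nakagami amplitude:
#  1) Gaussian -> uniform, via the standard normal CDF;
#  2) uniform -> Nakagami(m=1) (i.e. Rayleigh with unit mean-square
#     amplitude), via the closed-form inverse CDF sqrt(-ln(1 - u)).

def nakagami1_series(n, rho=0.9, seed=0):
    rng = np.random.default_rng(seed)
    g = np.empty(n)
    g[0] = rng.standard_normal()
    for k in range(1, n):  # AR(1) with unit marginal variance
        g[k] = rho * g[k - 1] + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    u = np.array([0.5 * (1.0 + erf(x / sqrt(2.0))) for x in g])
    return np.sqrt(-np.log1p(-u))
```

Because the transforms are memoryless, the temporal correlation of the Gaussian process carries over (in distorted form) to the amplitude series, while the marginal distribution is exactly Nakagami.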
Abstract:
In video communication systems, the video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing inspiration from principles and tools of an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) design of an improved compressed-domain perceptual classification mechanism; (ii) design of an improved transcoding tool for the DSC-based protection mechanism; and (iii) integration of a perceptual classification mechanism into an H.264/AVC compliant codec with a DSC-based error protection mechanism. The performance results obtained show that the proposed PDEP video codec provides a better performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes. (C) 2013 Elsevier B.V. All rights reserved.
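The idea of perceptually driven protection can be pictured with a toy sketch (the region names and the proportional policy are invented, not the paper's mechanism): a fixed parity-bit budget is split across picture regions according to their perceptual weight, so errors in salient regions are more likely to be corrected.

```python
# Hypothetical unequal error protection: regions classified by perceptual
# relevance receive shares of a fixed parity budget proportional to their
# weights. Region names and weights below are purely illustrative.

def allocate_parity(weights, budget):
    """weights: dict mapping region name -> perceptual weight (> 0)."""
    total = sum(weights.values())
    return {region: round(budget * w / total) for region, w in weights.items()}
```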
Abstract:
We propose a low complexity technique to generate amplitude correlated time-series with Nakagami-m distribution and phase correlated Gaussian-distributed time-series, which is useful in the simulation of ionospheric scintillation effects during the transmission of GNSS signals. The method requires only the knowledge of parameters S4 (scintillation index) and σΦ (phase standard deviation) besides the definition of models for the amplitude and phase power spectra. The Zhang algorithm is used to produce Nakagami-distributed signals from a set of Gaussian autoregressive processes.
Abstract:
It is generally agreed that major changes are taking place in the organisation of work as corporate structures are transformed in the context of economic globalisation and rapid technological change. But how can these changes be understood? And what are the impacts on social institutions and on workers and their families? The WORKS project brought together 17 research institutes in 13 European countries to investigate these important issues through a comprehensive four-year research programme.
Abstract:
Global restructuring processes not only have strong implications for European working and living realities, but also have specific outcomes with regard to gender relations. The following contribution analyses the ways in which global restructuring shapes current gender relations, in order to identify important trends and developments for future gender (in)equalities at the workplace. On the basis of a large qualitative study on global restructuring and its impacts on different occupational groups, it argues that occupational belonging, together with skill and qualification levels, is a crucial factor in assessing the further development of gender relations at work. Whereas global restructuring in knowledge-based occupations may provide new opportunities for female employees, current restructuring is likely to deteriorate female labour participation in service occupations. In contrast, manufacturing occupations are characterised by persistent gender relations, which do not change in spite of major restructuring processes at the workplace. Taking the institutional perspective into account, it seems crucial to integrate the occupational perspective in order to apply adequate policy regulations and prevent the reinforcement of gender-related working patterns in the near future.