910 results for mutual exclusion
Abstract:
Article based on the paper presented at the 7º Congresso SOPCOM: Comunicação Global, Cultura e Tecnologia, held at the Faculty of Letters of the University of Porto, Porto, Portugal, 15-17 December 2011
Abstract:
Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread within education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time, not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about an individual’s learning and to exploit it in new ways. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities, ranging from the use of social software through widgets, computer gaming, and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.
Abstract:
Research on the problem of feature selection for clustering continues to develop. This is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most of the proposed approaches having focused on numerical data. In this work, we propose an approach to simultaneously cluster categorical data and select a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), where a set of latent variables indicates the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover ground truth. An application to real data from official statistics shows its usefulness.
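To make the mutual-information baseline concrete (a minimal sketch, not the authors' MML-based method), the Python snippet below scores categorical features by their estimated mutual information with a candidate cluster assignment; the data, feature names, and clustering are hypothetical.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats for two categorical sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px = Counter(x)
    py = Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * np.log(c * n / (px[a] * py[b]))
    return mi

# Hypothetical data: 3 categorical features and a tentative clustering.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)                      # candidate cluster labels
relevant = np.where(labels == 0, "A", "B")                 # tracks the clusters...
relevant = np.where(rng.random(200) < 0.1, "C", relevant)  # ...with some noise
noise1 = rng.choice(list("XYZ"), size=200)                 # irrelevant features
noise2 = rng.choice(list("PQ"), size=200)

for name, feat in [("relevant", relevant), ("noise1", noise1), ("noise2", noise2)]:
    print(name, round(mutual_information(feat, labels), 3))
```

The relevant feature should receive a clearly higher score than the two noise features, which is the behaviour a filter or wrapper of this kind exploits.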
Abstract:
The automatic acquisition of lexical associations from corpora is a crucial issue for Natural Language Processing. A lexical association is a recurrent combination of words that co-occur more often than expected by chance in a given domain. In fact, lexical associations underlie linguistic phenomena such as idioms, collocations, and compound words. Because the sense of a lexical association is not compositional, their identification is fundamental for analysis and synthesis that take into account all the subtleties of the language. In this report, we introduce a new statistically-based architecture that extracts contiguous and non-contiguous lexical associations from naturally occurring texts. For that purpose, three new concepts have been defined: the positional N-gram models, the Mutual Expectation, and the GenLocalMaxs algorithm. Thus, the initial text is first transformed into a set of positional N-grams, i.e., ordered vectors of simple lexical units. Then, an association measure, the Mutual Expectation, evaluates the degree of cohesion of each positional N-gram, and the GenLocalMaxs algorithm retrieves the lexical associations based on the identification of local maximum values of the Mutual Expectation. Great effort has also been devoted to evaluating our methodology. For that purpose, we have proposed the normalisation of five well-known association measures and shown that both the Mutual Expectation and the GenLocalMaxs algorithm yield significant improvements compared to existing methodologies.
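For illustration, the hedged Python sketch below computes a Mutual-Expectation-style score, assuming the commonly cited formulation in which the n-gram probability is multiplied by a normalized expectation (the n-gram probability divided by the average probability of its sub-patterns obtained by removing one word); the toy corpus and count bookkeeping are purely illustrative.

```python
from collections import Counter

def mutual_expectation(ngram, counts, total):
    """Mutual-Expectation-style score: p(ngram) * NE(ngram).

    NE(ngram) = p(ngram) / mean over i of p(ngram with word i removed).
    Frequencies of word tuples (full and reduced) are looked up in `counts`.
    """
    p = counts[ngram] / total
    reduced = [ngram[:i] + ngram[i + 1:] for i in range(len(ngram))]
    avg = sum(counts[r] / total for r in reduced) / len(ngram)
    return 0.0 if avg == 0 else p * (p / avg)

# Toy corpus of word pairs (stand-ins for positional N-grams).
corpus = [("ad", "hoc"), ("ad", "hoc"), ("ad", "hoc"),
          ("ad", "lib"), ("post", "hoc")]
counts = Counter(corpus)
# Unigram "reductions" of a bigram are single-word tuples.
for (w1, w2), c in list(counts.items()):
    counts[(w1,)] += c
    counts[(w2,)] += c
total = len(corpus)

print(mutual_expectation(("ad", "hoc"), counts, total))
```

In a full pipeline, GenLocalMaxs would then keep only those n-grams whose score is a local maximum with respect to their sub- and super-patterns.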
Abstract:
Synchronization is a challenging and important issue for time-sensitive Wireless Sensor Networks (WSN), since it requires mutual spatiotemporal coordination between the nodes. In this regard, the IEEE 802.15.4/ZigBee protocols embody promising technologies for WSNs, but are still ambiguous on how to efficiently build synchronized multiple-cluster networks, specifically for the case of cluster-tree topologies. In fact, the current IEEE 802.15.4/ZigBee specifications restrict synchronization to beacon-enabled star networks (through the generation of periodic beacon frames), while they support multi-hop networking in mesh topologies, but with no synchronization. Even though both specifications mention the possible use of cluster-tree topologies, which combine multi-hop and synchronization features, the description of how to effectively construct such a network topology is missing. This paper tackles this issue by clarifying the ambiguities regarding the use of the cluster-tree topology and proposing a synchronization mechanism based on Time Division Beacon Scheduling (TDBS) to build cluster-tree WSNs. In addition, we propose a methodology for efficiently managing duty cycles in every cluster, ensuring the fairest use of bandwidth resources. The feasibility of the TDBS mechanism is clearly demonstrated through an experimental test-bed based on our open-source implementation of the IEEE 802.15.4/ZigBee protocols.
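As a rough illustration of the time-division idea behind TDBS (a hedged sketch, not the paper's mechanism), the Python snippet below assigns each cluster head a distinct beacon transmission offset by dividing the beacon interval into windows of one superframe duration; the constants follow IEEE 802.15.4 notation (BO, SO, aBaseSuperframeDuration), but the first-come-first-served assignment policy is a simplification.

```python
# Hedged sketch: non-overlapping beacon offsets for a cluster-tree network.
# IEEE 802.15.4 timing (in symbols): BI = aBaseSuperframeDuration * 2**BO,
# SD = aBaseSuperframeDuration * 2**SO, with 0 <= SO <= BO <= 14.
A_BASE_SUPERFRAME_DURATION = 960  # symbols

def tdbs_schedule(cluster_heads, bo, so):
    """Assign each cluster head a distinct superframe-duration window
    inside the beacon interval (first-come-first-served for simplicity)."""
    bi = A_BASE_SUPERFRAME_DURATION * 2 ** bo   # beacon interval
    sd = A_BASE_SUPERFRAME_DURATION * 2 ** so   # superframe duration
    windows = bi // sd                          # = 2 ** (bo - so)
    if len(cluster_heads) > windows:
        raise ValueError("not enough time windows: lower SO or raise BO")
    # Each head transmits its beacon at a distinct offset; duty cycle = sd / bi.
    return {head: i * sd for i, head in enumerate(cluster_heads)}

if __name__ == "__main__":
    schedule = tdbs_schedule(["ZC", "ZR1", "ZR2", "ZR3"], bo=6, so=4)
    for head, offset in schedule.items():
        print(f"{head}: beacon offset {offset} symbols")
```

With BO = 6 and SO = 4 there are exactly 2**(BO-SO) = 4 windows, so the four hypothetical cluster heads never transmit overlapping beacons.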
Abstract:
The paper presents the central discourse of the knowledge-based society. Already in the 1960s, the debate on the industrial society raised the question of whether a paradigm shift towards a knowledge-based society could be observed. Some prominent authors already foresaw ‘knowledge’ displacing ‘labour’ and ‘capital’ as the main driving force of capitalist development. Today, at the political level and in many scientific disciplines, the assumption that we are already living in a knowledge-based society seems obvious. Although we still lack a theory of the knowledge-based society and a methodological gap remains concerning the empirical indicators, the vision of a knowledge-based society determines at least the self-perception of Western societies. In a first step, the author pinpoints the assumptions about the knowledge-based society on three levels: the societal, the organisational, and the individual. These assumptions rely on the following topics: a) the role of information and communication technologies; b) the dynamic development of globalisation as an ‘evolutionary’ process; c) the increasing importance of knowledge management within organisations; d) the changing role of the state within economic processes. Not only the differentiation between the levels but also the revision of the assumptions of a knowledge-based society shows that the ‘topics raised in the debates’ cannot be considered the result of a profound societal paradigm shift. What seems striking, however, is the normative and virtual shift towards a concept of modernity that strongly focuses on the role of technology as a driving force, as well as on the global economic markets, which have to be accepted. Therefore, according to the official debate, successfully adapting to these processes seems the only way to reach the knowledge-based society. Analysing the societal changes on these three levels, the label ‘knowledge-based society’ can be viewed critically. Therefore, the main question posed by Theodor W. Adorno at the 16th Congress of Sociology in 1968 has not lost its relevance. Facing the societal changes of his time, he asked whether we were still living in an industrial society or already in a post-industrial one. Thinking about the knowledge-based society in terms of these two options would enrich the whole debate regarding social inequality, political and economic exclusion processes, and not least the power relationships between social groups.
Abstract:
In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than the individual modalities. However, when combining these modalities, the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from this data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques for silent speech data, in a dataset with 4 non-invasive and promising modalities, namely: video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic mean-geometric mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy for both individual and combined modalities. For instance, on the video component, we attain relative performance gains of 36.2% in error rates. FS is also useful as pre-processing for feature fusion.
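As a hedged sketch of one of the supervised filters mentioned above (Fisher's ratio, not the paper's exact pipeline), the Python snippet below scores each feature by its between-class to within-class variance ratio and keeps the top-k; the synthetic data and all names are illustrative stand-ins for fused silent-speech features.

```python
import numpy as np

def fisher_ratio(X, y):
    """Fisher score per feature: sum_c n_c (mu_c - mu)^2 / sum_c n_c var_c."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / np.maximum(within, 1e-12)

def select_top_k(X, y, k):
    """Return the indices of the k highest-scoring features."""
    scores = fisher_ratio(X, y)
    return np.argsort(scores)[::-1][:k]

# Synthetic stand-in for fused silent-speech features (200 samples, 50 dims).
rng = np.random.default_rng(1)
y = rng.integers(0, 5, size=200)        # 5 hypothetical word classes
X = rng.normal(size=(200, 50))
X[:, :5] += y[:, None]                  # only the first 5 features carry class info

print(select_top_k(X, y, k=5))          # expected: mostly indices 0..4
```

The same selection step can be applied before fusing modalities, which is the pre-processing role of FS noted in the abstract.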
Abstract:
This paper addresses the use of multidimensional scaling in the evaluation of controller performance. Several nonlinear systems are analyzed based on the closed-loop time response under the action of a reference step input signal. Three alternative performance indices, based on the time response, Fourier analysis, and mutual information, are tested. The numerical experiments demonstrate the feasibility of the proposed methodology and motivate its extension to other performance measures and new classes of nonlinearities.
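As a rough illustration of the workflow (a sketch under assumed choices, not the paper's specific indices), the Python snippet below builds a dissimilarity matrix from toy closed-loop step responses and embeds it in two dimensions with scikit-learn's MDS on a precomputed dissimilarity; the controllers and responses are invented.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy closed-loop step responses for four hypothetical controllers.
t = np.linspace(0, 10, 200)
responses = {
    "fast":        1 - np.exp(-2.0 * t),
    "slow":        1 - np.exp(-0.5 * t),
    "oscillatory": 1 - np.exp(-0.5 * t) * np.cos(3.0 * t),
    "sluggish":    1 - np.exp(-0.2 * t),
}
names = list(responses)
Y = np.array([responses[n] for n in names])

# Dissimilarity between responses: Euclidean distance in the time domain.
D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)

# 2-D embedding of the precomputed dissimilarities.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
for name, (cx, cy) in zip(names, coords):
    print(f"{name}: ({cx:.2f}, {cy:.2f})")
```

Controllers with similar dynamics end up close together in the embedding, which is what makes the MDS map useful for comparing performance indices.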
Abstract:
Master's degree in Physiotherapy
Abstract:
This paper presents the Pseudo phase plane (PPP) method for detecting the existence of a nanofilm on the nitroazobenzene-modified glassy carbon electrode (NAB-GC) system. These modified electrode systems and the nitroazobenzene nanofilm were prepared by the electrochemical reduction of the diazonium salt of NAB at glassy carbon electrodes (GCE) in non-aqueous media. The IR spectra of the bare glassy carbon electrodes (GCE), the NAB-GC electrode system, and the organic NAB film were recorded. The IR data of the bare GC, NAB-GC, and NAB film were categorized into series consisting of FILM1, GC-NAB1, GC1; FILM2, GC-NAB2, GC2; FILM3, GC-NAB3, GC3; and FILM4, GC-NAB4, GC4, respectively. The PPP approach was applied to each group of data from the unmodified and modified electrode systems with the nanofilm. The results provided by the PPP method show the existence of the NAB film on the modified GC electrode.
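For intuition, a pseudo phase plane is typically obtained by plotting a signal against a time-delayed copy of itself; the hedged Python sketch below does this for a synthetic trace, choosing the delay at the first minimum of the autocorrelation (a common heuristic, not necessarily the delay-selection rule used in the paper).

```python
import numpy as np
import matplotlib.pyplot as plt

def first_autocorr_minimum(x, max_lag=200):
    """Return the first lag at which the autocorrelation stops decreasing."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]
    for lag in range(1, max_lag):
        if ac[lag] > ac[lag - 1]:
            return lag
    return max_lag

# Synthetic stand-in for a measured spectral/electrochemical trace.
t = np.linspace(0, 20, 2000)
x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)

tau = first_autocorr_minimum(x)
plt.plot(x[:-tau], x[tau:], ".", markersize=2)   # pseudo phase plane: x(t) vs x(t + tau)
plt.xlabel("x(t)")
plt.ylabel(f"x(t + {tau} samples)")
plt.title("Pseudo phase plane (synthetic data)")
plt.show()
```

Differences in the shape of the resulting trajectories are what allow modified and unmodified electrode data to be told apart.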
Abstract:
This chapter analyzes the signals captured during impacts and vibrations of a mechanical manipulator. Eighteen signals are captured and several metrics are calculated between them, such as the correlation, the mutual information and the entropy. A sensor classification scheme based on the multidimensional scaling technique is presented.
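As a hedged sketch of the kind of pairwise metrics involved (not the chapter's exact processing), the Python snippet below computes the correlation coefficient and a histogram-based mutual information estimate between two synthetic vibration-like signals; the signal models and binning choices are illustrative.

```python
import numpy as np

def hist_mutual_information(x, y, bins=32):
    """Histogram (plug-in) estimate of I(X;Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Two synthetic accelerometer-like signals from a simulated impact.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 5000)
s1 = np.exp(-5 * t) * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
s2 = 0.8 * np.roll(s1, 20) + 0.05 * rng.normal(size=t.size)   # delayed, attenuated copy

print("correlation:", np.corrcoef(s1, s2)[0, 1])
print("mutual information (nats):", hist_mutual_information(s1, s2))
```

A matrix of such pairwise values over all eighteen signals is what would then be fed to the multidimensional scaling step for sensor classification.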
Abstract:
OBJECTIVE: To compare the effectiveness of two speech therapy interventions, vocal warm-up and breathing training, focusing on teachers’ voice quality. METHODS: A single-blind, randomized, parallel clinical trial was conducted. The research included 31 teachers, aged 20 to 60 years, from a public school in Salvador, BA, Northeastern Brazil, with minimum workloads of 20 hours a week, whether or not they reported vocal alterations. The exclusion criteria were the following: being a smoker, excessive alcohol consumption, receiving additional speech therapy assistance while taking part in the study, being affected by upper respiratory tract infections, professional use of the voice in another activity, neurological disorders, and a history of cardiopulmonary pathologies. The subjects were distributed through simple randomization into the vocal warm-up (n = 14) and breathing training (n = 17) groups. The teachers’ voice quality was subjectively evaluated through the Voice Handicap Index (Índice de Desvantagem Vocal, in the Brazilian version) and computerized voice analysis (average fundamental frequency, jitter, shimmer, noise, and glottal-to-noise excitation ratio) by speech therapists. RESULTS: Before the interventions, the groups were similar regarding sociodemographic characteristics, teaching activities, and vocal quality. The variations before and after the intervention in self-assessment and acoustic voice indicators did not differ significantly between the groups. In the comparison between the groups before and after the six-week interventions, significant reductions in the Voice Handicap Index were observed in both groups, as well as a reduced average fundamental frequency in the vocal warm-up group and increased shimmer in the breathing training group. Subjects from the vocal warm-up group reported speaking more easily and perceiving a more general improvement in their voices compared to the breathing training group. CONCLUSIONS: Both interventions were similar regarding their effects on the teachers’ voice quality. However, each individually contributed to improving the teachers’ voice quality, especially the vocal warm-up. TRIAL RECORD: NCT02102399, “Vocal Warm-up and Respiratory Muscle Training in Teachers”.
Abstract:
No literature data above atmospheric pressure could be found for the viscosity of TOTIVI. As a consequence, the present viscosity results could only be compared upon extrapolation of the vibrating wire data to 0.1 MPa. Independent viscosity measurements were performed, at atmospheric pressure, using an Ubbelohde capillary in order to compare with the vibrating wire results, extrapolated by means of the above mentioned correlation. The two data sets agree within +/- 1%, which is commensurate with the mutual uncertainty of the experimental methods. Comparisons of the literature data obtained at atmospheric pressure with the present extrapolated vibrating-wire viscosity measurements have shown an agreement within +/- 2% for temperatures up to 339 K and within +/- 3.3% for temperatures up to 368 K.
Abstract:
Master's degree in Socio-Organizational Intervention in Health - Specialization area: Policies of Administration and Management of Health Services
Abstract:
In this article, physical layer awareness in access, core, and metro networks is addressed, and a Physical Layer Aware Network Architecture Framework for the Future Internet is presented and discussed, as proposed within the framework of the European ICT Project 4WARD. Current limitations and shortcomings of the Internet architecture are driving research trends at a global scale toward a novel, secure, and flexible architecture. This Future Internet architecture must allow for the co-existence and cooperation of multiple networks on common platforms, through the virtualization of network resources. Possible solutions embrace a full range of technologies, from fiber backbones to wireless access networks. The virtualization of physical networking resources will enhance the possibility of handling different profiles, while providing the impression of mutual isolation. This abstraction strategy implies the use of well-elaborated mechanisms in order to deal with channel impairments and requirements, in both wireless (access) and optical (core) environments.