972 results for "Interaction network"
Abstract:
Release of chloroethene compounds into the environment often results in groundwater contamination, putting people at risk of exposure through drinking contaminated water. Accumulation of cDCE (cis-1,2-dichloroethene) in subsurface environments is a common environmental problem, caused by stagnation and partial degradation of precursor chloroethene species. Polaromonas sp. strain JS666 apparently requires no exotic growth factors to be used as a bioaugmentation agent for aerobic cDCE degradation. However, although it is the only microorganism found to be suitable for this purpose, further studies are needed to improve the intrinsic bioremediation rates and to fully understand the metabolic processes involved. To that end, a metabolic model, iJS666, was reconstructed from genome annotation and the available literature. FVA (Flux Variability Analysis) and FBA (Flux Balance Analysis) techniques were used to satisfactorily validate the predictive capabilities of the iJS666 model. The model predicted biomass growth for different previously tested conditions, allowed the design of key experiments for further model improvement, and produced viable predictions for the use of biostimulant metabolites in cDCE biodegradation.
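The FBA and FVA computations behind a genome-scale model such as iJS666 reduce to linear programming over a stoichiometric matrix. The sketch below runs both on a hypothetical two-reaction toy network (not the actual iJS666 reconstruction), using SciPy's LP solver rather than a dedicated COBRA toolbox:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical, not iJS666): one internal metabolite, two reactions.
# R0: uptake (-> M), bounded by substrate availability; R1: drain/biomass (M ->).
S = np.array([[1.0, -1.0]])          # stoichiometric matrix (metabolites x reactions)
bounds = [(0.0, 10.0), (0.0, 1000.0)]
c = np.array([0.0, 1.0])             # objective: maximize flux through R1

# FBA: maximize c.v subject to S v = 0 and flux bounds (linprog minimizes, so use -c)
fba = linprog(-c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
v_opt = fba.x                        # optimal flux distribution; growth capped by uptake

# FVA: fix the objective at its optimum, then min/max each flux separately
A_fix = np.vstack([S, c])            # extra equality row pins c.v at the FBA optimum
b_fix = np.array([0.0, c @ v_opt])
fva = []
for j in range(2):
    e = np.zeros(2); e[j] = 1.0
    lo = linprog(e, A_eq=A_fix, b_eq=b_fix, bounds=bounds, method="highs").fun
    hi = -linprog(-e, A_eq=A_fix, b_eq=b_fix, bounds=bounds, method="highs").fun
    fva.append((lo, hi))
```

In a real reconstruction S has hundreds of reactions and metabolites, and dedicated tools (e.g. COBRA implementations) wrap exactly this kind of LP.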
Abstract:
Introduction: Purpureocillium lilacinum is emerging as a causal agent of hyalohyphomycosis that is refractory to antifungal drugs; however, the pathogenic mechanisms underlying P. lilacinum infection are not understood. In this study, we investigated the interaction of P. lilacinum conidia with human macrophages and dendritic cells in vitro. Methods: Spores of a P. lilacinum clinical isolate were obtained by chill-heat shock. Mononuclear cells were isolated from eight healthy individuals. Monocytes were separated by cold aggregation and differentiated into macrophages by incubation for 7 to 10 days at 37°C, or into dendritic cells by the addition of the cytokines human granulocyte-macrophage colony stimulating factor and interleukin-4. Conidial suspension was added to the human cells at 1:1, 2:1, and 5:1 (conidia:cells) ratios for 1 h, 6 h, and 24 h, and the infection was evaluated by Giemsa staining and light microscopy. Results: After 1 h of interaction, P. lilacinum conidia were internalized by human cells, and after 6 h of contact some conidia became inflated. After 24 h of interaction, the conidia produced germ tubes and hyphae, leading to the disruption of macrophage and dendritic cell membranes. The infection rate analyzed after 6 h of incubation of P. lilacinum conidia with cells at 2:1 and 1:1 ratios was 76.5% and 25.5%, respectively, for macrophages, and 54.3% and 19.5%, respectively, for cultured dendritic cells. Conclusions: P. lilacinum conidia are capable of infecting and destroying both macrophages and dendritic cells, clearly demonstrating the ability of this pathogenic fungus to invade human phagocytic cells.
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in recent decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity of accounting for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function).
If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, this cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout.
Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decide alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all the possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory for the bigger party. The smaller one may also win, provided its relative size is not too small; more self-delusional voters in the minority party decrease this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006), and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
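The Chapter 2 mechanism of costly thinking versus a default choice can be illustrated with a toy linear Cournot duopoly. The demand and cost parameters and the default quantity below are illustrative assumptions, not the thesis's calibration; the check simply asks when "one firm thinks, the other plays the default" is mutually optimal:

```python
A, B, C = 10.0, 1.0, 0.0   # illustrative inverse demand P = A - B*(q1+q2), unit cost C

def profit(qi, qj):
    return (A - B * (qi + qj) - C) * qi

def best_response(qj):
    # Standard Cournot best response for linear demand and constant unit cost
    return max((A - C - B * qj) / (2 * B), 0.0)

def asymmetric_equilibrium(d, k):
    """Is (think, default) an equilibrium for default quantity d and thinking cost k?
    The thinker must gain at least k over playing d; the defaulter must not
    gain more than k by switching to thinking (best responding)."""
    qt = best_response(d)
    thinker_ok = profit(qt, d) - k >= profit(d, d)
    defaulter_ok = profit(d, qt) >= profit(best_response(qt), qt) - k
    return thinker_ok and defaulter_ok
```

With default quantity d = 4 under these parameters, the thinker's gain from best responding is 1 and the defaulter's would-be gain is 0.25, so the asymmetric profile survives exactly for intermediate costs k in [0.25, 1], matching the qualitative finding stated above.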
Abstract:
The EM3E Master is an education programme supported by the European Commission, the European Membrane Society (EMS), the European Membrane House (EMH), and a large international network of industrial companies, research centres and universities (http://www.em3e.eu).
Abstract:
Once dominated by threats from nation-states, the current global scenario, marked by a rapid shift of power, presents a complex interaction among multiple actors: enemies that were once well identified are now unknown, and the stage is occupied by well-prepared, well-organized terrorist groups. Hezbollah is recognized as one of the most capable terrorist groups, with an extensive network outside Lebanon dedicated to drug, arms and human trafficking, as well as money laundering to finance terrorism, representing a major source of instability for security. As an instrument of the state, intelligence services have the capacity to be on the front line in preventing and combating terrorism. However, to understand this phenomenon, it is necessary to analyze the actors behind this threat. In light of this context, this dissertation is divided into three main chapters that aim to answer the following fundamental questions: What is terrorism? How does a transnational terrorist group operate? Do intelligence services have the tools needed to prevent and combat these threats?
Abstract:
The frequency of electric organ discharges (EOD) of Ramphicthys rostratus, a South American gymnotiform fish of the "pulse" type (40-100 Hz), was studied. The animals were kept in pairs in an aquarium and observed: variation in EOD frequency had at least two components, one more positively correlated with temperature, and another, less positively correlated, due to social interaction.
Abstract:
The present study investigates peer-to-peer oral interaction in two task-based language teaching classrooms, one of which was a self-declared cohesive group and the other a self-declared less cohesive group, both at B1 level. It studies how learners talk cohesion into being and considers how this talk leads to learning opportunities in these groups. The study was classroom-based and was carried out over the period of an academic year. Research was conducted in the classrooms and the tasks were part of regular class work. The research was framed within a sociocognitive perspective of second language learning, and data came from a number of sources, namely questionnaires, interviews and audio-recorded talk of dyads, triads and groups of four students completing a total of eight oral tasks. These audio recordings were transcribed and analysed qualitatively, using conversation analysis, for interactions which encouraged a positive social dimension and behaviours which led to learning opportunities. In addition, recordings were analysed quantitatively for learning opportunities and for quantity and quality of language produced. Results show that learners in both classes exhibited multiple behaviours in interaction which could promote a positive social dimension, although behaviours which could discourage positive affect amongst group members were also found. Analysis of interactions also revealed the many ways in which learners in both the cohesive and the less cohesive class created learning opportunities. Further qualitative analysis of these interactions showed that a number of factors, including how learners approach a task, the decisions they make at zones of interactional transition and the affective relationship between participants, influence the number of learning opportunities created, as well as the quality and quantity of language produced.
The main conclusion of the study is that it is not the cohesive nature of the group as a whole, but the nature of the relationship between the individual members of the small group completing the task, which influences the effectiveness of oral interaction for learning. This study contributes to our understanding of the way in which learners individualise the learning space and highlights the situated nature of language learning. It shows how individuals interact with each other and with the task, and how talk in interaction changes moment by moment as learners react to the ‘here and now’ of the classroom environment.
Abstract:
The geographical distribution of the African Tilapia Oreochromis mossambicus in Suriname is restricted to a narrow strip of land along the Atlantic coast. Within the coastal plain, O. mossambicus occurs in brackish lagoons, oligohaline canals, and shell-sand pit lakes. Physico-chemical characteristics and phytoplankton composition of representative Tilapia water bodies are described. Blue-green algae and fine flocculent detritus are dominant food items in the diet of the Tilapia, while Rotifera and microcrustacea are also important in the diet of larvae and juveniles. Intraspecific diet overlap among ontogenetic stages of the Tilapia did not differ significantly from 1, which means that these diets showed complete overlap. Interspecific diet overlap between the Tilapia and the indigenous armoured catfish Hoplosternum littorale was moderate or low. The results are discussed in relation to recent developments in the Surinamese fisheries and aquaculture sector.
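Diet-overlap statements like the above are typically computed from proportional diet compositions with a niche-overlap index. The abstract does not name the measure used, so Pianka's index is shown here purely as an illustration, on hypothetical diet proportions:

```python
import math

def pianka_overlap(p, q):
    """Pianka's niche-overlap index for two diet-proportion vectors.
    Returns 1.0 for identical compositions (complete overlap), 0.0 for disjoint ones."""
    num = sum(pi * qi for pi, qi in zip(p, q))
    den = math.sqrt(sum(pi * pi for pi in p) * sum(qi * qi for qi in q))
    return num / den

# Hypothetical diet proportions over three food categories (illustrative only)
larvae = [0.5, 0.3, 0.2]
adults = [0.5, 0.3, 0.2]      # identical composition -> overlap of ~1
catfish = [0.0, 0.1, 0.9]     # mostly a different food category -> low overlap

intraspecific = pianka_overlap(larvae, adults)    # ~1.0: complete overlap
interspecific = pianka_overlap(larvae, catfish)   # well below 1
```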
Abstract:
The aim of this study is to evaluate the interaction between several base pen-grade asphalt binders (35/50, 50/70, 70/100, 160/220) and two different plastic wastes (EVA and HDPE), for a set of new polymer-modified binders (PMBs) produced with different amounts of both plastic wastes. After analysing the results obtained for the several polymer-modified binders evaluated in this study, including a commercial modified binder, it can be concluded that the new PMBs produced with the base bitumen 70/100 and 5% of each plastic waste (HDPE or EVA) result in binders with very good performance, similar to that of the commercial modified binder.
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems could be the same for all applications and thus facilitate implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, able to achieve a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic enough, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first one is a real-time system able to interpret Portuguese Sign Language. The second one is an online system able to help a robotic soccer game referee judge a game in real time.
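The dynamic-gesture side of such a system trains one HMM per gesture and classifies a new observation sequence by the model with the highest likelihood. A minimal sketch of that scoring step, with hand-picked toy parameters for two hypothetical gestures (in the real system these models would be learned from training data):

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-symbol HMM.
    pi: initial state probs, A: state transitions, B: per-state emission probs."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
# Two toy gesture models differing in which of two quantized motion symbols
# they tend to emit (think "swipe left" vs "swipe right")
B_left = np.array([[0.9, 0.1], [0.8, 0.2]])
B_right = np.array([[0.1, 0.9], [0.2, 0.8]])

obs = [0, 0, 0, 0]  # a sequence of quantized features extracted from a new gesture
models = {"left": B_left, "right": B_right}
best = max(models, key=lambda g: log_likelihood(obs, pi, A, models[g]))
```

A recognition pipeline would feed each incoming frame's quantized feature into `obs` and pick `best` in real time, which is what makes the per-gesture HMM design suitable for interactive use.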
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications and thus facilitate implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, able to achieve a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic enough, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
Abstract:
Novel input modalities such as touch, tangibles or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of different natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface could be more or less appropriate when it comes to: 1) different age groups; and 2) different basic operations, such as data selection, insertion or manipulation. This work presents the first step of an exploratory evaluation of whether or not users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults and older adults) when dealing with a selection task. Sixty participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task, and the methodology was based on usability testing (speed, accuracy and user preference). The study suggests a statistically significant difference between mean selection times for each group of users, and also raises new issues regarding the “old” mouse input versus the “new” input modalities.
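A between-groups difference in mean selection times, as reported above, would typically be tested with a one-way ANOVA. The sketch below uses made-up timings (not the study's data) to show the shape of that test with SciPy:

```python
from scipy.stats import f_oneway

# Hypothetical selection times in seconds for one input modality,
# one sample per participant (illustrative values, not the study's data)
children = [1.9, 2.1, 2.0, 2.3, 1.8, 2.2]
adults = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0]
older_adults = [2.8, 3.1, 2.9, 3.3, 3.0, 2.7]

# One-way ANOVA: do the three age groups share a common mean selection time?
f_stat, p_value = f_oneway(children, adults, older_adults)
significant = p_value < 0.05  # reject equal means at the usual 5% level
```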
Abstract:
Nowadays, many P2P applications proliferate on the Internet. The attractiveness of many of these systems relies on the collaborative approach used to exchange large resources without the dependence and associated constraints of centralized approaches, where a single server is responsible for handling all the requests from clients. As a consequence, some P2P systems are also interesting and cost-effective approaches for content providers and other Internet players to adopt. However, there are several coexistence problems between P2P applications and Internet Service Providers (ISPs), due to the unforeseeable behavior of P2P traffic aggregates in ISP infrastructures. In this context, this work proposes a collaborative P2P/ISP system able to underpin the development of novel Traffic Engineering (TE) mechanisms, contributing to a better coexistence between P2P applications and ISPs. Using the devised system, two TE methods are described that are able to estimate and control the impact of P2P traffic aggregates on ISP network links. One of the TE methods allows ISP administrators to foresee the expected impact that a given P2P swarm will have on the underlying network infrastructure. The other TE method enables the definition of ISP-friendly P2P topologies, in which specific network links are protected from P2P traffic. As a result, the proposed system and associated mechanisms will contribute to improved ISP resource management tasks and foster the deployment of innovative ISP-friendly systems.
Abstract:
This paper presents an automated optimization framework able to provide network administrators with resilient routing configurations for link-state protocols, such as OSPF or IS-IS. In order to deal with the formulated NP-hard optimization problems, the devised framework is underpinned by computational intelligence optimization engines, such as Multi-objective Evolutionary Algorithms (MOEAs). To demonstrate the framework's capabilities, two illustrative Traffic Engineering methods are described, allowing the attainment of routing configurations that are robust to changes in the traffic demands and keep the network stable even in the presence of link failure events. The presented illustrative results clearly corroborate the usefulness of the proposed automated framework along with the devised optimization methods.
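An MOEA of the kind mentioned above ranks candidate routing configurations by Pareto dominance over the competing objectives, e.g. congestion under normal operation versus after a link failure. A minimal dominance filter, with made-up objective values for four hypothetical weight settings (the framework's actual objectives and encodings are not reproduced here):

```python
def dominates(u, v):
    """u dominates v (minimization) if u is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# (congestion_normal, congestion_after_failure) for four candidate weight settings;
# lower is better on both axes
candidates = [(0.6, 0.9), (0.7, 0.7), (0.9, 0.6), (0.8, 0.95)]
front = pareto_front(candidates)  # the last candidate is dominated by the second
```

An MOEA iterates selection and variation over such a population, using non-dominated ranks like this one to drive survival, and finally presents the administrator with the trade-off front rather than a single configuration.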
Abstract:
PhD Thesis in Bioengineering