Abstract:
The feasibility of using AlGaInAs lasers for high-speed modulation at high temperatures was evaluated and compared with the performance of GaInAsP devices. Both drift-diffusion and rate-equation simulations were used, so that the temperature dependence of the material parameters could be related to the overall dynamic performance. The differential gain was estimated by means of drift-diffusion simulations.
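For orientation only, the standard single-mode rate equations (textbook forms, not equations quoted from this work) indicate where a differential gain extracted from drift-diffusion simulations enters the dynamic response:

\[
\frac{dN}{dt} = \frac{I}{qV} - \frac{N}{\tau_n} - v_g\, g(N)\, S,
\qquad
\frac{dS}{dt} = \Gamma v_g\, g(N)\, S - \frac{S}{\tau_p},
\]

with the relaxation-oscillation frequency, which bounds the usable modulation bandwidth, scaling as

\[
f_r \approx \frac{1}{2\pi}\sqrt{\frac{v_g\,(\partial g/\partial N)\, S_0}{\tau_p}},
\]

so a temperature-induced drop in the differential gain \(\partial g/\partial N\) directly degrades high-speed performance.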
Abstract:
This dissertation examines the role of communications technology in social change. It examines secondary data on contemporary China, arguing that many interpretations of events in China are unsuitable at best and at worst conceptually damage our understanding of social change in China. This is especially the case in media studies under the ‘democratic framework’. It proposes an alternative framework for studying the media and social change. This alternative conceptual framework is termed a zone of interpretative development, offering a means by which to discuss events that take place in a mediated environment. Taking its theoretical foundation from the philosophy of Mikhail Bakhtin, this dissertation develops a platform with which to understand communication technology from an anthropological perspective. Three media events from contemporary China are examined. The first examines the Democracy Wall event and the implications of using a public sphere framework. The second case examines the phenomenon of the Grass Mud Horse, a symbol that has gained popular purchase as a humorous expression of political dissatisfaction, and develops the problems seen in the first case but with some solutions. Using a modification of Lev Vygotskiĭ’s zone of proximal development, this symbol is understood as an expression of the collective recognition of a shared experience. In the third example, from the popular TV talent show contests in China, further expressions of collective experience are introduced. With the evidence from these media events in contemporary China, this dissertation proposes that we can understand certain modes of communication as occurring in a zone of interpretative development. This proposed anthropological feature of social change via communication and technology can fruitfully describe meaning-formation in society via the expression and recognition of shared experiences.
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information and are similar to the positive charges in electrostatics, the destinations are sinks of information and are similar to negative charges, and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one of the applications of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on setting the permittivity coefficient to a higher value in the places of the network where nodes have high residual energy, and setting it to a low value in the places where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend our approach to the case where there are multiple destinations. In the case of multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost function. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
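A compact way to write the vector-field formulation described above (my paraphrase in standard notation; the exact weighting is an assumption, not quoted from the thesis) is

\[
\min_{\mathbf{D}} \int_{A} \frac{\lVert \mathbf{D}(x)\rVert^{2}}{2\,\epsilon(x)}\, dA
\quad \text{subject to} \quad \nabla\cdot\mathbf{D}(x) = \rho(x),
\]

where \(\rho(x) > 0\) at sensors (sources of data), \(\rho(x) < 0\) at destinations (sinks), and \(\epsilon(x)\) plays the role of a spatially varying permittivity. Under this reading, the minimizer satisfies \(\mathbf{D} = -\epsilon\,\nabla\phi\) with \(\nabla\cdot(\epsilon\,\nabla\phi) = -\rho\), a Poisson-type equation, and packets are forwarded along the direction of \(\mathbf{D}\); raising \(\epsilon\) where residual energy is high steers traffic towards those regions.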
We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, and we use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find the signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, performance does not degrade because of the cross interference created by simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
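A minimal Python sketch of the CDMA idea (my illustration, not the dissertation's code; the drop rates and the linear responsiveness model are assumed for illustration): orthogonal Walsh signatures let two routers perturb the same aggregate at the same time, and each recovers the response to its own perturbation by correlation.

# Sketch: orthogonal Walsh signatures allow simultaneous perturbation tests at
# different routers; correlation with a router's own signature removes the
# interference from the other router's test.
import numpy as np

def walsh_codes(order):
    """Return a (2^order x 2^order) Hadamard/Walsh matrix of +/-1 chips."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
chips = walsh_codes(3)            # 8 orthogonal signatures, 8 chips each
base_drop = 0.01                  # nominal drop probability (assumed value)
sig_router_a = chips[1]           # signature assigned to router A
sig_router_b = chips[2]           # signature assigned to router B

# Hypothetical aggregate: its rate decrease in each chip interval is assumed
# proportional to the drops it experienced (responsiveness k), plus noise.
k_true = 4.0
drops = base_drop * (1 + 0.5 * sig_router_a) + base_drop * (1 + 0.5 * sig_router_b)
rate_change = -k_true * drops + 0.002 * rng.standard_normal(len(drops))

def estimate_responsiveness(sig, observed):
    # Correlate the observed rate change with one router's signature; the other
    # router's contribution cancels because the signatures are orthogonal.
    return -np.dot(observed, sig) / (0.5 * base_drop * np.dot(sig, sig))

print(estimate_responsiveness(sig_router_a, rate_change))  # close to k_true
print(estimate_responsiveness(sig_router_b, rate_change))  # close to k_true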
Abstract:
Context: Despite the fact that most deaths occur in hospital, problems remain with how patients and families experience care at the end of life when a death occurs in a hospital. Objectives: (1) assess family member satisfaction with information sharing and communication, and (2) examine how satisfaction with information sharing and communication is associated with patient factors. Methods: Using a cross-sectional survey, data were collected from family members of adult patients who died in an acute care organization. Correlation and factor analyses were conducted, and internal consistency was assessed using Cronbach's alpha. Linear regression was performed to determine the relationship between patient variables and satisfaction on the Information Sharing and Communication (ISC) scale. Results: There were 529 questionnaires available for analysis. Following correlation analysis and the dropping of redundant and conceptually irrelevant items, seven items remained for factor analysis. One factor was identified, described as information sharing and communication, which explained 76.3% of the variance. The questionnaire demonstrated good content validity and reliability (Cronbach's alpha 0.96). Overall, family members were satisfied with information sharing and communication (mean total satisfaction score 3.9, SD 1.1). The ISC total score was significantly associated with patient gender, the number of days in hospital before death, and the hospital program where the patient died. Conclusions: The ISC scale demonstrated good content validity and reliability. The ISC scale offers acute care organizations a means to assess the quality of information sharing and communication that transpires in care at the end of life. © Copyright 2013, Mary Ann Liebert, Inc.
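For readers unfamiliar with the reliability statistic cited above, a small Python sketch (simulated data, not the study's survey responses) shows how Cronbach's alpha is computed for a seven-item scale.

# Sketch: Cronbach's alpha for a multi-item scale, illustrated on simulated data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))                    # shared "satisfaction" factor
responses = latent + 0.4 * rng.normal(size=(500, 7))  # seven correlated items
print(round(cronbach_alpha(responses), 2))            # high alpha for correlated items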
Abstract:
Most cellular functions, including gene expression, cell growth and proliferation, metabolism, morphology, motility, intercellular communication and apoptosis, are regulated by protein-protein interactions (PPIs). The cell responds to a variety of stimuli; protein expression is therefore a dynamic process, and the complexes formed are assembled transiently, changing according to their functional cycle; in addition, many proteins are expressed in a cell-type-dependent manner. At any given moment the cell may contain on the order of hundreds of thousands of binary PPIs, and finding the interaction partners of a protein is a means of inferring its function. Changes in PPI networks can also provide information about disease mechanisms. The most frequently used binary identification method is the Yeast Two-Hybrid system, adapted for large-scale screening. This methodology was used here to identify the isoform-specific interactomes of Protein Phosphatase 1 (PP1) in human brain. PP1 is a Ser/Thr protein phosphatase involved in a wide variety of cellular pathways and events. It is a conserved protein encoded by three genes, which give rise to the α, β and γ isoforms, with the last producing γ1 and γ2 by alternative splicing. The different PP1 isoforms are regulated by their interaction partners, the PP1-interacting proteins (PIPs). The modular nature of PP1 complexes, as well as their combinatorial association, generates a large repertoire of regulatory complexes and roles in cellular signalling circuits. The isoform-specific PP1 interactomes in brain are described here, with a total of 263 interactions identified and integrated with data collected from several PPI databases. In addition, two PIPs were selected for further characterization of the interaction: Taperin and Synphilin-1A. Taperin is a still poorly described protein, recently found to be a PIP. Its interaction with the different PP1 isoforms and its cellular localization were analysed. Taperin was found to be cleaved, to be present in the cytoplasm, membrane and nucleus, and to increase PP1 levels in HeLa cells. At the membrane it co-localizes with PP1 and actin, and a Taperin mutant in the PP1-binding motif is enriched in the nucleus together with actin. Furthermore, Taperin was found to be expressed in testis and to localize to the acrosomal region of the sperm head, a structure where PP1 and actin are also present. Synphilin-1A, an isoform of Synphilin-1, is an aggregation-prone, toxic protein involved in Parkinson's disease. Synphilin-1A was shown to bind the PP1 isoforms by yeast co-transformation, and mutation of its PP1-binding motif significantly decreased the interaction in an overlay assay. When overexpressed in Cos-7 cells, Synphilin-1A formed inclusion bodies in which PP1 was present; however, the mutated form of Synphilin-1A was also able to aggregate, indicating that inclusion formation was not dependent on PP1 binding. This work gives a new perspective on PP1 interactomes, including the identification of dozens of isoform-specific binding partners, and emphasizes the importance of PIPs, not only for understanding the cellular functions of PP1 but also as targets for therapeutic intervention.
Abstract:
This work addresses the joint compensation of IQ imbalances and carrier phase synchronization errors in zero-IF receivers. The compensation scheme is based on blind source separation, which provides a simple yet potent means of jointly compensating for these errors independently of the modulation format and constellation size used. The low complexity of the algorithm makes it a suitable option for real-time deployment as well as practical for integration into monolithic receiver designs.
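For context, a commonly used baseband model of the impairment (standard in the literature, not quoted from this particular work) is

\[
r(t) = K_1\, z(t) + K_2\, z^{*}(t),
\qquad
K_1 = \tfrac{1}{2}\bigl(1 + g\,e^{-j\phi}\bigr),
\qquad
K_2 = \tfrac{1}{2}\bigl(1 - g\,e^{j\phi}\bigr),
\]

where \(z(t)\) is the ideal signal and \(g\) and \(\phi\) are the gain and phase mismatches (a carrier phase error adds a common rotation, and a balanced receiver has \(K_2 = 0\)). Since \(r(t)\) is a linear mixture of \(z(t)\) and its conjugate, blind source separation applied to the pair \((r, r^{*})\) can recover \(z(t)\) without knowledge of the constellation, which is consistent with the modulation-format independence described above.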
Abstract:
We live in a changing world. At an impressive speed, new technological resources appear every day. We increasingly use the Internet to obtain and share information, and new online communication tools are emerging. Each of them encompasses new potential and creates new audiences. In recent years, we witnessed the emergence of Facebook, Twitter, YouTube and other media platforms. They have provided us with an even greater interactivity between sender and receiver, as well as generated a new sense of community. At the same time we also see an availability of content like never before. We are increasingly sharing texts, videos, photos, etc. This poster intends to explore the potential of using these new online communication tools in the cultural sphere to create new audiences, to develop a new kind of community, and to provide information as well as different ways of building organizations’ memory. The transience of the performing arts is accompanied by the need to counter that transience by means of documentation. This desire to ‘save’ events reaches its expression in the archiving of information from the different production moments as well as the opportunity to record the event and present it through, for instance, digital platforms. In this poster we intend to answer the following questions: which online communication tools are being used to engage audiences in the cultural sphere (specifically among theater companies in Lisbon)? Is there a new relationship with the public? Are online communication tools creating a new kind of community? What changes are these tools introducing into the creative process? In what way do the availability of content and its archive contribute to the organization’s memory? Among several references, we will approach the two-way communication model that James E. Grunig & Todd T. Hunt (1984) presented and the concept of mass self-communication of Manuel Castells (2010). Castells also tells us that we have moved from traditional media to a system of communication networks. For Scott Kirsner (2010), we have entered an era of digital creativity, where artists have the tools to do what they imagined and the public no longer wants just to consume cultural goods, but instead to have a voice and participate. The creative process now depends on the public’s choices as they wander through the screen. It is the receiver who owns an object which can be exchanged. Virtual reality has encouraged the receiver to abandon the position of passive observer and to become a participating agent, which implies a challenge to organizations: inventing new forms of interfaces. Therefore, we intend to identify new and effective online tools that can be used by cultural organizations and the best way to manage them; to show how organizations can create a community with the public; and how the availability of online content and its archive can contribute to the organizations’ memory.
Abstract:
The general objective of this doctoral research is to study the determinants of the pedagogical integration of information and communication technologies (ICT) by professors at the Université de Ouagadougou (UO). This led us to study, in turn, the professors’ technological skills, the resistance factors constraining their pedagogical integration of ICT, and their acceptance and specific uses of ICT. This work was built around theoretical concepts on the educational uses of ICT, techno-pedagogical skills, resistance factors, ICT acceptance and the pedagogical integration of ICT. These concepts fall within the analytical frameworks of models of ICT integration by teachers and of models of acceptance and use of a new technology. The data analysis strategy was built around descriptive and analytical approaches, in particular through the use of psychometrics and/or the econometrics of limited dependent variable models. Using quantitative research, the recruitment of 82 professors through a consent-to-participate notice made it possible to collect data on the basis of questionnaires, most of which were built around Likert-scale questions. The study of the professors’ technological skills made it possible, on the one hand, to draw a portrait of their uses of ICT. The most widespread uses of ICT in this university are office software, e-mail software and Internet browsing. It also made it possible to draw a portrait of the professors’ technological skills. They use several software tools and recognize the importance of ICT for their teaching and research tasks, even though their perceived mastery of some telematics applications remains at very low levels. For certain skills, such as those for using ICT in communication and collaboration situations and those for searching for and processing information with ICT, the professors’ levels of mastery were very high. The professors had very low levels of mastery of the skills for creating learning situations with ICT and for developing and disseminating learning resources with ICT, despite the great importance they attached to these advanced skills, which are essential for an effective and efficient integration of ICT into their teaching practices. The study of resistance factors made it possible to build a typology of these factors. They range from material and infrastructure constraints to those related to computer skills and to constraints related to the professors’ motivation and personal commitment, factors that can give rise to technology-rejection behaviours. These factors include, among others, the compatibility of ICT with the professors’ teaching and research tasks, the perceived usefulness of ICT for teaching and research activities, the ease of use of ICT, and the professors’ motivation or personal commitment to using ICT.
The costs generated by ICT access and the lack of institutional support and technical assistance also proved to hinder the development of these uses among the professors. Estimates of the determinants of the professors’ acceptance and educational uses of ICT showed that it is above all the professors’ ‘behavioural intention’ to adopt ICT and their ‘Internet experience’ that positively affect educational uses of ICT. The ‘facilitating conditions’, which represent not only the quality of the technological infrastructure but also the existence of institutional support for ICT use, negatively affected these uses. Recommendations arising from this work point towards training professors in the specific skills identified, improving the quality of the existing technological infrastructure, creating a software library, and implementing adequate institutional incentives such as regular technical assistance for professors, reducing the statutory teaching loads of innovative professors, and recognizing the efforts these innovators have already made in the educational use of ICT in their institution.
Abstract:
Jean-Paul II was favourable to a sound use of the means of social communication to strengthen the missionary activities of the Catholic Church in an increasingly secularized world. Several other authors mentioned in this thesis celebrate the positive relationship the pope maintained with the media and media professionals. However, a rereading of Jean-Paul II’s texts leads to the conclusion that this relationship with the media took into account the problems associated with their negative effects. Admittedly, his use and understanding of the media were largely motivated by the clear advantages they offer, which he would use skilfully, but also by the debilitating effects they have on the proclamation of the Gospel in today’s world. Despite this ambivalence, the pope nevertheless managed to hold his own and tried by every means to convince Catholics of the importance of the media, in all their forms. In order to clarify this ambivalent relationship, the thesis formulates two questions on which the analyses are centred: 1. What problematic media issues are implied in Jean-Paul II’s reflections on social communication? 2. What approaches did he use in response to these issues? Ultimately, these questions make it possible, at least in our view, to grasp fundamental aspects of Jean-Paul II’s contributions to social communication.
Abstract:
International law was conceived as an inter-state law. However, as a consequence of the progressive development of the law, new actors and new subjects have been emerging. The individual is one of them, under different perspectives: under the criminal perspective, by assuming responsibility for their acts before the various ad hoc tribunals and now before the International Criminal Court; the figure has also developed under the human rights perspective. This article analyses the ways in which state policies relating to international law are presented to individuals, legal persons and other actors.
Abstract:
The stratospheric role in the European winter surface climate response to El Niño–Southern Oscillation sea surface temperature forcing is investigated using an intermediate general circulation model with a well-resolved stratosphere. Under El Niño conditions, both the modeled tropospheric and stratospheric mean-state circulation changes correspond well to the observed “canonical” responses of a late winter negative North Atlantic Oscillation and a strongly weakened polar vortex, respectively. The variability of the polar vortex is modulated by an increase in frequency of stratospheric sudden warming events throughout all winter months. The potential role of this stratospheric response in the tropical Pacific–European teleconnection is investigated by sensitivity experiments in which the mean state and variability of the stratosphere are degraded. As a result, the observed stratospheric response to El Niño is suppressed and the mean sea level pressure response fails to resemble the temporal and spatial evolution of the observations. The results suggest that the stratosphere plays an active role in the European response to El Niño. A saturation mechanism whereby for the strongest El Niño events tropospheric forcing dominates the European response is suggested. This is examined by means of a sensitivity test and it is shown that under large El Niño forcing the European response is insensitive to stratospheric representation.
Abstract:
Improved udder health requires consistent application of appropriate management practices by those involved in managing dairy herds and the milking process. Designing effective communication requires that we understand why dairy herd managers behave in the way they do and also how the means of communication can be used both to inform and to influence. Social sciences - ranging from economics to anthropology - have been used to shed light on the behaviour of those who manage farm animals. Communication science tells us that influencing behaviour is not simply a question of ‘getting the message across’ but of addressing the complex of factors that influence an individual’s behavioural decisions. A review of recent studies in the animal health literature shows that different social science frameworks and methodologies offer complementary insights into livestock managers’ behaviour, but that the diversity of conceptual and methodological frameworks presents a challenge for animal health practitioners and policy makers who seek to make sense of the findings - and for researchers looking for helpful starting points. Data from a recent study in England illustrate the potential of ‘home-made’ conceptual frameworks to help unravel the complexity of farmer behaviour. At the same time, though, the data indicate the difficulties facing those designing communication strategies in a context where farmers believe strongly that they are already doing all they can reasonably be expected to do to minimise animal health risks.
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution that can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
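A minimal Python sketch of a decentralised K-Means in this spirit (my construction under assumed details, not the Epidemic K-Means protocol itself): nodes average per-cluster sums and counts by pairwise gossip, so every node converges towards the statistics a centralised algorithm would compute, without any coordinator.

# Sketch: decentralised K-Means via gossip averaging of per-cluster sufficient statistics.
import numpy as np

rng = np.random.default_rng(2)
K, D, NODES = 3, 2, 20
centroids = rng.normal(size=(K, D))                      # shared initial centroids
local_data = [rng.normal(size=(50, D)) + rng.integers(0, 4, size=(50, 1))
              for _ in range(NODES)]                     # each node's private data

def local_stats(data, centroids):
    """Assign points to the nearest centroid; return per-cluster (sums, counts)."""
    labels = np.argmin(((data[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    sums = np.zeros_like(centroids)
    counts = np.zeros(len(centroids))
    for k in range(len(centroids)):
        sums[k] = data[labels == k].sum(axis=0)
        counts[k] = (labels == k).sum()
    return sums, counts

for _ in range(5):                                        # a few K-Means iterations
    stats = [local_stats(d, centroids) for d in local_data]
    sums = [s for s, _ in stats]
    counts = [c for _, c in stats]
    for _ in range(200):                                  # epidemic (pairwise) averaging rounds
        i, j = rng.choice(NODES, size=2, replace=False)
        sums[i] = sums[j] = (sums[i] + sums[j]) / 2
        counts[i] = counts[j] = (counts[i] + counts[j]) / 2
    # every node now holds approximately the global averages; any node can update
    centroids = sums[0] / np.maximum(counts[0], 1e-9)[:, None]
print(centroids)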
Abstract:
This paper addresses the movement towards criminalization as a tool for the regulation of work-related deaths in the United Kingdom and elsewhere in the last 20 years. This can be seen as reflecting dissatisfaction with the relevant law, although it is best understood in symbolic terms as a response to a disjunction between the instrumental nature and communicative aspirations of regulatory law. This paper uses empirical data gathered from interviews with members of the public to explore the role that such an offence might play. The findings demonstrate that the failures of regulatory law give rise to a desire for criminalization as a means of framing work-related safety events in normative terms.