905 results for: Channels or communication
Abstract:
The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into the ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular for ways in which advanced research methods can lead into language technologies and vice versa, and ways in which the skills around teaching can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open access publication or open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse), and the ability to reach non-standard audiences: those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation.
Several chapters focus not on the literary and philological side of classics, but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material is engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains, both within and without academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions of which all scholars need to be aware and about which they must be self-critical.
Abstract:
At head of title, v.1-44: The Institution of Gas Engineers; v.45-55: The Gas Research Board
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri-net-based techniques is that of state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules.
In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour on the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
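The two-phase commit baseline that this work extends can be sketched as follows. This is a minimal illustration only: the class names are hypothetical, and the fault-tolerance and real-time extensions that the research actually contributes are not shown, since the abstract does not detail them.

```python
# Minimal two-phase commit sketch (illustrative baseline only; the thesis
# extends 2PC with fault tolerance and deadlines, details not given in the
# abstract). All names here are hypothetical.

class Participant:
    """An autonomous site that votes on, then applies, a global decision."""
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "initial"

    def prepare(self):
        # Phase 1: vote yes/no based on local readiness.
        self.state = "ready" if self.will_commit else "aborted"
        return self.will_commit

    def finish(self, decision):
        # Phase 2: apply the coordinator's global decision.
        self.state = decision


class Coordinator:
    def __init__(self, participants):
        self.participants = participants

    def run(self):
        # Phase 1: collect votes; any "no" vote forces a global abort,
        # giving the all-or-nothing property of an atomic action.
        votes = [p.prepare() for p in self.participants]
        decision = "committed" if all(votes) else "aborted"
        # Phase 2: broadcast the decision so every site ends consistently.
        for p in self.participants:
            p.finish(decision)
        return decision


# All sites ready: global commit.
outcome_ok = Coordinator([Participant("valve"), Participant("pump")]).run()

# One site unable to commit: global abort.
outcome_fail = Coordinator(
    [Participant("valve"), Participant("pump", will_commit=False)]).run()
```

The thesis's concern is what happens when a vote or decision message is lost or delayed past a deadline, which this baseline does not handle.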
Abstract:
Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously, and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness.
The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of application are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation, in which cities produce and store batches of different mail types. Agents must collect and process the mail batches without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results.
Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
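For orientation, a minimal global-best PSO loop on the sphere function is sketched below. The dispersive variant itself is not specified in the abstract, so only the standard baseline is shown; the parameter values are common textbook choices, not the thesis's.

```python
# Minimal global-best particle swarm optimization (PSO) on the sphere
# function. Illustrative baseline only; parameter values are textbook
# defaults, not those of the dispersive PSO variant.
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere)
```

Premature convergence in this baseline occurs when the whole swarm collapses onto `gbest` before the search space is adequately explored; a dispersive mechanism counteracts that collapse.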
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task allocation in which cities produce and store batches of different mail types. Agents must collect and process the mail batches without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types. When an agent processes a mail batch of a different type from the previous one, it must undergo a change-over, with repeated change-overs rendering the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated both in static and dynamic environments and with respect to sudden changes. New rules for mail selection and specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation.
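The abstract does not give the new selection rules themselves. As background, the classic fixed response-threshold rule that such specialisation models commonly build on can be sketched as follows; the threshold and stimulus values are invented for illustration.

```python
# Classic fixed response-threshold rule for task specialisation
# (in the style of Bonabeau et al.). The thesis's new rules are not
# detailed in the abstract; this shows only the standard baseline,
# with hypothetical parameter values.

def engage_probability(stimulus, threshold, n=2):
    # An agent engages a mail type with probability s^n / (s^n + theta^n):
    # the lower its threshold theta for that type, the more strongly it
    # specialises in processing that type.
    return stimulus ** n / (stimulus ** n + threshold ** n)

stimulus = 0.5  # current demand for one mail type (hypothetical)
p_specialist = engage_probability(stimulus, threshold=0.1)  # low threshold
p_generalist = engage_probability(stimulus, threshold=1.0)  # high threshold
```

Under this rule a specialist (low threshold) almost always accepts its preferred mail type while a generalist rarely does, which is what makes change-over penalties avoidable without any communication between agents.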
Abstract:
While robots gradually become a part of our daily lives, they already play vital roles in many critical operations. Some of these critical tasks include surgeries, battlefield operations, and tasks that take place in hazardous environments or distant locations such as space missions. In most of these tasks, remotely controlled robots are used instead of autonomous robots. This special area of robotics is called teleoperation. Teleoperation systems must be reliable when used in critical tasks; hence, all of the subsystems must be dependable even under a subsystem or communication line failure. These systems are categorized as unilateral or bilateral teleoperation. A special type of bilateral teleoperation is described as force-reflecting teleoperation, which is further investigated as limited- and unlimited-workspace teleoperation. Teleoperation systems configured in this study are tested both in numerical simulations and experiments. A new method, Virtual Rapid Robot Prototyping, is introduced to create system models rapidly and accurately. This method is then extended to configure experimental setups with actual master systems working with system models of the slave robots accompanied by virtual reality screens, as well as the actual slaves. Fault-tolerant design and modeling of the master and slave systems are also addressed at different levels to prevent subsystem failure. Teleoperation controllers are designed to compensate for instabilities due to communication time delays. Modifications to the existing controllers are proposed to configure a controller that is reliable under communication line failures. Position/force controllers are also introduced for master and/or slave robots. Later, controller architecture changes are discussed in order to make these controllers dependable even in systems experiencing communication problems.
The customary and proposed controllers for teleoperation systems are tested in numerical simulations on single- and multi-DOF teleoperation systems. Experimental studies are then conducted on seven different systems that included limited- and unlimited-workspace teleoperation to verify and improve the simulation studies. Experiments with the proposed controllers were successful relative to the customary controllers. Overall, by employing the fault-tolerance features and the proposed controllers, a more reliable teleoperation system can be designed and configured, allowing these systems to be used in a wider range of critical missions.
Abstract:
The maned wolf (Chrysocyon brachyurus Illiger 1815) is the largest canid in South America and is considered a "near threatened" species by the IUCN. Because of its nocturnal, territorial and solitary habits, many aspects of its behavior in natural environments remain understudied, including acoustic communication. In its vocal repertoire, the wolf presents a long-distance call named the "roar-bark" which, according to the literature, functions for spacing maintenance between individuals and/or communication between members of the reproductive pair inside the territory. In this context, this study aimed: 1) to compare four methods for detecting maned wolf roar-barks in recordings made in a natural environment, in order to select the most efficient one for our project; 2) to understand the nightly emission pattern of these vocalizations, verifying possible influences of weather and moon phase on roar-bark emission rates; and 3) to test Passive Acoustic Monitoring as a tool to identify the presence of maned wolves in a natural environment. The study area was the Serra da Canastra National Park (Minas Gerais, Brazil), where autonomous recorders were used for sound acquisition, recording all night (from 6 p.m. to 6 a.m.) for five days in December 2013 and every day from April to July 2014. Roar-bark detection methods were tested and compared regarding the time needed to analyze files, the number of false positives and the number of correctly identified calls. The mixed method (XBAT + manual) was the most efficient, finding 100% of vocalizations in almost half the time of the manual method, and was chosen for our data analysis. By studying the temporal variation of roar-barks we verified that the wolves vocalize more in the early hours of the evening, suggesting an important social function for those calls at the beginning of their period of most intense activity.
Average wind speed negatively influenced vocalization rate, which may indicate lower sound reception of recorders or a change in behavioral patterns of wolves in high speed wind conditions. A better understanding of seasonal variation of maned wolves’ vocal activity is required, but our study already shows that it is possible to detect behavioral patterns of wild animals only by sound, validating PAM as a tool in this species’ conservation.
Abstract:
Home automation holds the potential of realizing cost savings for end users while reducing the carbon footprint of domestic energy consumption. Yet adoption is still very low. The high cost of vendor-supplied home automation systems is a major prohibiting factor. Open-source systems such as FHEM, Domoticz and OpenHAB are a cheaper alternative and can drive the adoption of home automation. Moreover, they have the advantage of not being limited to a single vendor or communication technology, which gives end users flexibility in the choice of devices to include in their installation. However, interaction with devices having diverse communication technologies can be inconvenient for users, thus limiting the utility they derive from it. For application developers, creating applications which interact with the several technologies in home automation systems is not a consistent process. Hence, there is a need for a common description mechanism that makes interaction smooth for end users and enables application developers to build home automation applications in a consistent and uniform way. This thesis proposes such a description mechanism within the context of an open-source home automation system, FHEM, together with a system concept for its application. A mobile application was developed as a proof of concept of the proposed description mechanism, and the results of the implementation are reflected upon.
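A common description mechanism of the kind described might look roughly like the following sketch, in which applications address abstract capabilities rather than protocol-specific commands. Every field and function name here is invented for illustration and does not reflect the thesis's actual FHEM schema.

```python
# Hypothetical uniform device description: one schema for devices behind
# different communication technologies (ZigBee, Z-Wave, ...). All field
# names are invented for illustration, not taken from FHEM.
lamp = {
    "id": "living_room_lamp",
    "technology": "zigbee",  # underlying protocol, hidden from applications
    "capabilities": {
        "on_off": {"type": "bool", "writable": True},
        "brightness": {"type": "int", "range": [0, 100], "writable": True},
    },
}

def set_state(device, capability, value):
    # An application talks to the abstract capability, not to the
    # protocol-specific command set of the device.
    cap = device["capabilities"][capability]
    if not cap.get("writable"):
        raise ValueError(capability + " is read-only")
    device.setdefault("state", {})[capability] = value
    return device["state"]

set_state(lamp, "brightness", 60)
```

The point of such a layer is that the same `set_state` call works unchanged whether the device speaks ZigBee or any other technology, which is the consistency the abstract says developers currently lack.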
Abstract:
This article explores different types of appropriation of media technologies at the margins and proposes a shift in the research approach at several levels: 1) rather than focusing on individual technologies, research on media at the margins should examine how local communicators operate within media ecologies that offer specific resources and challenges in each historical situation; 2) rather than trying to determine whether the media technologies used at the margins are new or obsolete, digital or not, it is urgent to understand how locally grounded communicators identify specific information and communication needs and use the available technologies to address them; 3) research on media at the margins should clarify how the protagonists of this kind of citizen and community communication reinvent, hybridize, recycle and build bridges between technological platforms. In short, to understand media technologies at the margins, research must embrace high levels of complexity, retain the notion of media ecologies, and understand how, at the local level, community communicators deeply immersed in everyday life and history adjust media technologies to the needs of their communities.
Abstract:
This study analyzes the impact on the organizational processes of Higher Education Institutions (HEIs) of voluntary adherence to the Global Compact, by verifying compliance with its principles at both the documentary and practical levels. The initiative places great value on the participation of academia for its contribution to critical aspects of its activities, to research, learning and educational resources, and to the formation of responsible leaders, among others. Accordingly, the objective is to identify the benefits of adopting the ten principles concerning human rights, labour standards, the environment and anti-corruption in HEIs, based on a review of the Communications on Progress (COPs) and/or Communications on Engagement (COEs), the fieldwork relevant to this study. In a first stage, a diagnosis was carried out, based on a documentary review of the reports of each HEI active in the Global Compact and on verification of compliance with the commitments described in those reports through interviews conducted at the selected HEIs. The second stage consisted of evaluating, in a matrix, each HEI's level of knowledge, compliance, motivation and effectiveness of results. The results obtained point to the unification of efforts within the organization, knowledge transfer and greater visibility in society, among others. In short, adherence to the Compact has enabled HEIs to become responsible organizations at the service of the educational and business community.
Abstract:
The optimization of the pilot overhead in wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depend on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.
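Schematically, and using assumed symbols not defined in the abstract ($\alpha^{\star}$ for the optimized pilot overhead fraction, $f_{\mathrm{D}}$ for the normalized Doppler frequency, $N_{\mathrm{t}}$ for the number of transmit antennas, $C_{\mathrm{CSI}}$ and $C_{\mathrm{pilot}}$ for the genie-aided and pilot-based spectral efficiencies), the scaling behaviours stated above read:

```latex
\alpha^{\star} \;\propto\; \sqrt{f_{\mathrm{D}}},
\qquad
\Delta C \;=\; C_{\mathrm{CSI}} - C_{\mathrm{pilot}} \;\propto\; \sqrt{f_{\mathrm{D}}},
\qquad
\alpha^{\star}_{\text{multi-antenna}}(f_{\mathrm{D}})
  \;=\; \alpha^{\star}_{\text{single-antenna}}\!\left(N_{\mathrm{t}}\, f_{\mathrm{D}}\right).
```

These are leading-order scalings only; per the abstract, the proportionality constants depend on the signal-to-noise ratio and other system parameters.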