829 results for Adaptive Equalization. Neural Networks. Optic Systems. Neural Equalizer


Relevância:

100.00% 100.00%

Publicador:

Resumo:

An Adaptive Optics (AO) system is a fundamental requirement for 8 m-class telescopes. To obtain the maximum resolution these telescopes allow, the atmospheric turbulence must be corrected. Thanks to adaptive optics systems we can exploit the full effective potential of these instruments, extracting as much information as possible from astronomical sources. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). To do this, the REC requires a "common language" between the two main AO components: a mapping between sensor space and mirror space, called the interaction matrix (IM). Therefore, to operate correctly, an AO system has one main requirement: the measurement of an IM to calibrate the whole system. The IM measurement is a milestone for any AO system and must be performed regardless of telescope size or class. Usually this calibration step is done by adding an auxiliary artificial light source (e.g., a fibre) to the telescope system that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For larger telescopes (more than 8 m, such as Extremely Large Telescopes, ELTs) the fibre-based IM measurement requires challenging optical setups that in some cases are impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source. Such a technique can be used to calibrate the AO system of a telescope of any size.
We test the new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators, and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), working on both the optical alignment and tests of some optical components. Thanks to the "solar tower" facility of the Astrophysical Observatory of Arcetri (Firenze), we were able to reproduce an environment very similar to the one at the telescope, testing the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This enabled the second phase of my PhD thesis: measuring the IM with the sinusoidal modulation technique. First we measured the IM using an auxiliary fibre source to calibrate the system, without any injected disturbance. We then applied the technique to measure the IM directly "on sky", i.e. with an atmospheric disturbance added to the AO system. The results obtained in this PhD work, measuring the IM directly in the Arcetri solar tower system, are crucial for future development: acquiring the IM directly on sky means we can calibrate an AO system even for the extremely large telescope class, where classic IM measurement techniques are problematic and sometimes impossible. Finally, we must not forget the reason we need this: the main aim is to observe the universe. Thanks to this new class of large telescopes, and only by using their full capabilities, we will be able to increase our knowledge of the objects we observe, because we will be able to resolve finer detail, discovering, analysing and understanding the behaviour of the components of the universe.
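The calibration loop described above can be sketched numerically: poke each actuator in turn, record the sensor response, assemble the interaction matrix column by column, and invert it to obtain the reconstructor. This is a toy sketch with a random linear "optical" response standing in for the real pyramid sensor and deformable mirror; all names and dimensions are invented.

```python
import numpy as np

# Toy interaction-matrix calibration (illustrative only; the real system
# uses a pyramid WFS and a 672-actuator deformable mirror).
rng = np.random.default_rng(0)
n_act, n_slopes = 4, 6
true_response = rng.normal(size=(n_slopes, n_act))  # stand-in for the optics

def measure_slopes(command):
    """Toy noiseless wavefront sensor: slopes for a given mirror command."""
    return true_response @ command

# Poke each actuator in turn to build the IM column by column.
IM = np.column_stack([measure_slopes(np.eye(n_act)[:, k]) for k in range(n_act)])

# The reconstructor maps sensor slopes back to mirror commands.
REC = np.linalg.pinv(IM)

cmd = rng.normal(size=n_act)
recovered = REC @ measure_slopes(cmd)
print(np.allclose(recovered, cmd))
```

On-sky calibration replaces the clean per-actuator pokes with sinusoidal modulation of each mode, so its response can be extracted from the noisy slope signal despite the atmospheric disturbance.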


The concept of elementary vector is generalised to the case where the steady-state space of the metabolic network is not a flux cone but a general polyhedron, owing to additional inhomogeneous constraints on the flows through some of the reactions. On one hand, this allows elementary modes satisfying certain optimality criteria to be enumerated selectively, which can yield a large computational gain compared with full enumeration. On the other hand, in contrast to the single optimum found by executing a linear program, it enables a comprehensive description of the set of alternate optima often encountered in flux balance analysis. The concepts are illustrated on a metabolic network model of human cardiac mitochondria.
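The contrast between a single LP optimum and the full optimal set can be seen in even a tiny flux balance problem. The sketch below, with a hypothetical three-reaction chain, uses SciPy's `linprog` to find one optimal flux vector; the elementary-vector machinery of the paper characterises the whole optimal face rather than a single point.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: a hypothetical linear chain of three
# reactions (v1 -> v2 -> v3) at steady state, S v = 0, with
# inhomogeneous capacity bounds on the fluxes.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])      # metabolites x reactions
bounds = [(0, 10), (0, 10), (0, 10)]  # flux capacity constraints
c = np.array([0.0, 0.0, -1.0])        # maximise v3 (linprog minimises)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # one vertex of the optimal set; alternate optima may exist
```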


To comply with the demand for ever-increasing data rates, particularly in wireless technologies, systems with multiple transmit and receive antennas, also called MIMO (multiple-input multiple-output) systems, have become indispensable for future generations of wireless systems. Driven by the strongly increasing demand for high-data-rate transmission, frequency non-selective MIMO links have reached a state of maturity, and frequency-selective MIMO links are now the focus of interest. In this field, the combination of MIMO transmission and OFDM (orthogonal frequency division multiplexing) can be considered an essential part of fulfilling the requirements of future generations of wireless systems. While single-user scenarios have reached maturity, multi-user scenarios require substantial further research. In contrast to ZF (zero-forcing) multiuser transmission techniques, the individual user's channel characteristics are taken into consideration in this contribution. The joint optimization of the number of activated MIMO layers and the number of transmitted bits per subcarrier, along with the appropriate allocation of transmit power, shows that not all user-specific MIMO layers per subcarrier necessarily have to be activated in order to minimize the overall BER under the constraint of a fixed data throughput.
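The idea that not every MIMO layer should carry bits can be illustrated with a greedy bit-loading toy (not the contribution's actual joint optimisation): bits go wherever the incremental transmit power is cheapest, and a badly conditioned layer may receive none. The gains and the power model below are invented for illustration.

```python
import numpy as np

# Greedy bit-loading toy (illustrative only). Bits are assigned one at
# a time to whichever MIMO layer needs the least extra transmit power
# for one more bit.
gains = np.array([2.0, 1.0, 0.25])  # hypothetical per-layer channel gains
total_bits = 6
bits = np.zeros(len(gains), dtype=int)

def inc_power(b, g):
    # Extra power to carry b+1 instead of b bits on a layer of gain g,
    # using the classic 2**b power scaling of QAM constellations.
    return (2 ** (b + 1) - 2 ** b) / g

for _ in range(total_bits):
    k = int(np.argmin([inc_power(b, g) for b, g in zip(bits, gains)]))
    bits[k] += 1

print(bits.tolist())  # the weakest layer stays deactivated
```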



The energy consumption of Wireless Sensor Networks (WSNs) is a long-standing problem that has been addressed from different levels and perspectives, since it affects not only the survival of the network itself: the growing use of smart devices and the new Internet of Things paradigm mean that WSNs have an ever larger influence on the energy footprint. The rising use of these networks adds a further problem: spectral saturation. WSNs usually operate in unlicensed bands such as the Industrial, Scientific and Medical (ISM) bands. These bands are shared with other kinds of networks, such as Wi-Fi or Bluetooth, whose use has grown exponentially in recent years. The Cognitive Radio (CR) paradigm, a technology that enables opportunistic spectrum access, has emerged to address this problem. Introducing cognitive capabilities into WSNs not only optimises their spectral efficiency but also has a positive impact on parameters such as quality of service, security and energy consumption. On the other hand, this new paradigm raises some challenges related to energy consumption. Specifically, spectrum sensing, collaboration among nodes (which requires additional communication) and changes to the transmission parameters increase consumption with respect to classic WSNs. Given that research on energy consumption has been addressed extensively, as it is one of the main limitations of these networks, we assume that new strategies must arise from the new capabilities added by cognitive networks. Moreover, when designing optimisation strategies for CWSNs, the resource limitations of these networks in terms of memory, computation and node energy consumption must be kept firmly in mind.
In this doctoral thesis we propose two strategies for reducing energy consumption in CWSNs, based on three fundamental pillars. The first is the cognitive capabilities added to WSNs, which provide the ability to adapt the transmission parameters according to the available spectrum. The second is collaboration, as an intrinsic characteristic of CWSNs. Finally, the third pillar of this work is game theory as a decision-support algorithm, widely used in WSNs because of its simplicity. The first contribution of the thesis is a complete analysis of the possibilities that cognitive radio introduces for reducing the consumption of WSNs. From the conclusions drawn from this analysis, we formulate the hypotheses of this thesis concerning the validity of using cognitive capabilities as a tool for reducing consumption in CWSNs. Having presented the hypotheses, we develop the main contributions of the thesis: the two consumption-reduction strategies based on game theory and CR. The first uses a non-cooperative game played between pairs of players. In the second strategy, although the game remains non-cooperative, the concept of collaboration is added. For each strategy we present the game model, the formal analysis of equilibria and optima, and the description of the complete strategy, including the interaction between nodes. In order to test the strategies through simulation and implementation on real devices, we have developed a test framework composed of a cognitive simulator and a testbed of cognitive nodes, developed in the B105 Lab, capable of communicating in three ISM bands. This test framework is another contribution of the thesis and will enable further research progress in the CWSN area.
Finally, we present and discuss the results derived from testing the developed strategies. The first strategy provides energy savings of more than 65% compared with a WSN without cognitive capabilities, and around 25% compared with a cognitive strategy based on periodic spectrum sensing and channel switching according to a fixed noise threshold. This algorithm behaves similarly regardless of the noise level, provided the noise is spatially uniform. Despite its simplicity, this strategy guarantees optimal behaviour in terms of energy consumption, owing to the use of game theory in the design of the nodes' behaviour. The collaborative strategy improves on the previous one in terms of noise protection in more complex noise scenarios, where it yields a 50% improvement over the previous strategy.

ABSTRACT

Energy consumption in Wireless Sensor Networks (WSNs) is a long-standing problem that has been addressed from different areas and on many levels. But this problem should not be approached only from the point of view of network survival. A major portion of communication traffic has migrated to mobile networks and systems. The increased use of smart devices and the introduction of the Internet of Things (IoT) give WSNs a great influence on the carbon footprint. Thus, optimizing the energy consumption of wireless networks could reduce their environmental impact considerably. In recent years, another problem has been added to the equation: spectrum saturation. Wireless Sensor Networks usually operate in unlicensed spectrum bands such as the Industrial, Scientific, and Medical (ISM) bands, shared with other networks (mainly Wi-Fi and Bluetooth). To address the efficient spectrum utilization problem, Cognitive Radio (CR) has emerged as the key technology that enables opportunistic access to the spectrum.
Therefore, the introduction of cognitive capabilities to WSNs allows their spectral occupation to be optimized. Cognitive Wireless Sensor Networks (CWSNs) not only increase the reliability of communications, but also have a positive impact on parameters such as Quality of Service (QoS), network security, and energy consumption. These new opportunities introduced by CWSNs unveil a wide field in the energy consumption research area. However, they also imply some challenges. Specifically, the spectrum sensing stage, collaboration among devices (which requires extra communication), and changes in the transmission parameters increase the total energy consumption of the network. When designing CWSN optimization strategies, the fact that WSN nodes are very limited in terms of memory, computational power, and energy has to be considered; thus, light strategies that require little computing capacity must be found. Since the field of energy conservation in WSNs has been widely explored, we assume that new strategies can emerge from the new opportunities presented by cognitive networks. In this PhD thesis, we present two strategies for energy consumption reduction in CWSNs supported by three main pillars. The first pillar is that the cognitive capabilities added to the WSN provide the ability to change the transmission parameters according to the spectrum. The second is that the ability to collaborate is a basic characteristic of CWSNs. The third pillar is game theory as a decision-making algorithm, widely used in WSNs because of the lightness and simplicity that make it suitable for CWSNs. For the development of these strategies, a complete analysis of the possibilities opened up by incorporating cognitive capabilities into the network is first carried out.
Once this analysis has been performed, we state the hypotheses of this thesis, related to the use of cognitive capabilities as a tool to reduce energy consumption in CWSNs. We then present the main contribution of this thesis: the two strategies for energy consumption reduction based on game theory and cognitive capabilities. The first is based on a non-cooperative game played between two players in a simple, selfish way. In the second strategy, the concept of collaboration is introduced: although the game is still non-cooperative, decisions are taken through collaboration. For each strategy, we present the game model, the formal analysis of equilibria and optima, and the complete strategy describing the interaction between nodes. In order to test the strategies through simulation and implementation on real devices, we have developed a CWSN framework composed of a CWSN simulator based on Castalia and a testbed of CWSN nodes able to communicate in three different ISM bands. We present and discuss the results derived from the energy optimization strategies. The first strategy brings energy improvement rates of over 65% compared with a WSN without cognitive techniques, and of over 25% compared with sensing strategies that change channel based on a decision threshold. We have also seen that the algorithm behaves similarly even with significant variations in the noise level, as long as the noise is spatially uniform. The collaborative strategy improves on the previous one in terms of noise protection when the noise scheme is more complex, where it shows improvement rates of over 50%.
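A minimal flavour of the kind of non-cooperative game used here can be sketched as a two-node channel-selection game: each node's energy cost depends on channel noise, on interference when both nodes pick the same channel, and on the cost of switching channel. All payoff numbers below are invented; the sketch simply enumerates pure-strategy Nash equilibria by checking best responses.

```python
import itertools

# Toy two-node channel-selection game (all numbers invented). Each
# node's energy cost combines channel noise, interference when both
# nodes share a channel, and the energy cost of switching channel.
NOISE = [0.1, 0.6]   # per-channel noise level
SWITCH_COST = 0.2    # both nodes start on channel 0

def cost(my_ch, other_ch, prev_ch=0):
    interference = 0.5 if my_ch == other_ch else 0.0
    switching = SWITCH_COST if my_ch != prev_ch else 0.0
    return NOISE[my_ch] + interference + switching

# Enumerate pure-strategy Nash equilibria by checking best responses.
equilibria = []
for a, b in itertools.product([0, 1], repeat=2):
    best_a = all(cost(a, b) <= cost(x, b) for x in (0, 1))
    best_b = all(cost(b, a) <= cost(x, a) for x in (0, 1))
    if best_a and best_b:
        equilibria.append((a, b))

# Here both nodes stay on the quiet channel: channel 1's noise plus the
# switching cost outweighs the interference penalty.
print(equilibria)
```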


IEC Technical Committee 57 (TC57) has published a series of standards and technical reports for “Communication networks and systems for power utility automation” as the IEC 61850 series. Sampled value (SV) process buses allow the removal of potentially lethal voltages and damaging currents from substation control rooms and marshalling kiosks, reduce the amount of cabling required in substations, and facilitate the adoption of non-conventional instrument transformers. IEC 61850-9-2 provides an inter-operable solution to support multi-vendor process bus solutions. A time synchronisation system is required for an SV process bus; however, the details are not defined in IEC 61850-9-2. IEEE Std 1588-2008, Precision Time Protocol version 2 (PTPv2), provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. PTPv2 is proposed by the IEC Smart Grid Strategy Group to synchronise IEC 61850 based substation automation systems. IEC 61850-9-2, PTPv2 and Ethernet are three complementary protocols that together define the future of sampled value digital process connections in substations. The suitability of PTPv2 for use with SV is evaluated, with preliminary results indicating that steady-state performance is acceptable (jitter < 300 ns), and that extremely stable grandmaster oscillators are required to ensure SV timing requirements are met when recovering from loss of external synchronisation (such as GPS).
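The SV timing evaluation relies on PTP's standard two-way time transfer. The offset and mean-path-delay formulas from IEEE 1588 can be shown with hypothetical timestamps:

```python
# IEEE 1588 two-way time transfer: offset and mean path delay from the
# four standard timestamps (values below are hypothetical, in ns).
t1 = 1_000  # master sends Sync
t2 = 1_150  # slave receives Sync
t3 = 2_000  # slave sends Delay_Req
t4 = 2_050  # master receives Delay_Req

# Assuming a symmetric network path:
offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
print(offset, delay)
```

Path asymmetry (for example from store-and-forward switches loaded in one direction) violates the symmetric-path assumption and appears directly as offset error, which is why network design matters for SV timing.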


Proposed transmission smart grids will use a digital platform for the automation of substations operating at voltage levels of 110 kV and above. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850-8-1 and IEC 61850-9-2 provide an inter-operable solution to support multi-vendor digital process bus solutions, allowing the removal of potentially lethal voltages and damaging currents from substation control rooms, a reduction in the amount of cabling required in substations, and facilitating the adoption of non-conventional instrument transformers (NCITs). IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. This paper describes a test and evaluation system, using real-time simulation, protection relays, PTPv2 time clocks and artificial network impairment, that is being used to investigate technical impediments to the adoption of SV process bus systems by transmission utilities. Knowing the limits of a digital process bus, especially when sampled values and NCITs are included, will enable utilities to make informed decisions regarding the adoption of this technology.


Transmission smart grids will use a digital platform for the automation of high voltage substations. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. A time synchronisation system is required for a sampled value process bus; however, the details are not defined in IEC 61850-9-2. PTPv2 provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. The suitability of PTPv2 to synchronise sampling in a digital process bus is evaluated, with preliminary results indicating that the steady-state performance of low-cost clocks is an acceptable ±300 ns, but that corrections issued by grandmaster clocks can introduce significant transients. Extremely stable grandmaster oscillators are required to ensure any corrections are sufficiently small that time synchronising performance is not degraded.


This paper addresses the problem of degradations in adaptive digital beam-forming (DBF) systems caused by mutual coupling between array elements. The focus is on compact arrays with reduced element spacing and, hence, strongly coupled elements. Deviations between the radiation patterns of coupled and (theoretically) uncoupled elements can be compensated for by weight adjustments in DBF, but the SNR degradation due to impedance mismatches cannot be compensated for by signal processing techniques. It is shown that this problem can be overcome by implementing an RF decoupling network. The SNR enhancement is achieved at the cost of a reduced frequency bandwidth and an increased sensitivity to dissipative losses in the antenna and matching network structure.
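The SNR loss that signal processing cannot recover follows from the textbook reflection-coefficient relation between a mismatched element and the system impedance. A small sketch with an assumed 100 Ω element impedance in a 50 Ω system:

```python
# Reflection coefficient and delivered-power fraction for a mismatched
# element (textbook relations; the 100-ohm load is an assumed example).
def mismatch(z_load, z0=50.0):
    gamma = (z_load - z0) / (z_load + z0)  # reflection coefficient
    return abs(gamma), 1.0 - abs(gamma) ** 2

gamma_mag, power_fraction = mismatch(100.0)
print(round(gamma_mag, 3), round(power_fraction, 3))
```

Roughly 11% of the available power is reflected in this example, and no DBF weight adjustment can restore it; only matching or a decoupling network at RF can.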


The overarching aim of this programme of work was to evaluate the effectiveness of the existing learning environment within the Australian Institute of Sport (AIS) elite springboard diving programme. Unique to the current research programme is the application of ideas from an established theory of motor learning, specifically ecological dynamics, to an applied high performance training environment. In this research programme springboard diving is examined as a complex system, where individual, task, and environmental constraints are continually interacting to shape performance. As a consequence, this thesis presents some necessary and unique insights into representative learning design and movement adaptations in a sample of elite athletes. The questions examined in this programme of work relate to how best to structure practice, which is central to developing an effective learning environment in a high performance setting. Specifically, the series of studies reported in the chapters of this doctoral thesis: (i) provide evidence for the importance of designing representative practice tasks in training; (ii) establish that completed and baulked (prematurely terminated) take-offs are not different enough to justify the abortion of a planned dive; and (iii) confirm that elite athletes performing complex skills are able to adapt their movement patterns to achieve consistent performance outcomes from variable dive take-off conditions. Chapters One and Two of the thesis provide an overview of the theoretical ideas framing the programme of work, and include a review of literature pertinent to the research aims and subsequent empirical chapters. Chapter Three examined the representativeness of take-off tasks completed in the two AIS diving training facilities routinely used in springboard diving.
Results highlighted differences in the preparatory phase of reverse dive take-offs completed by elite divers during normal training tasks in the dry-land and aquatic training environments. The most noticeable differences in dive take-off between environments began during the hurdle (step, jump, height and flight), where the diver generates the necessary momentum to complete the dive. Consequently, greater step lengths, jump heights and flight times resulted in greater board depression prior to take-off in the aquatic environment, where the dives required greater amounts of rotation. The differences observed between the preparatory phases of reverse dive take-offs completed in the dry-land and aquatic training environments are arguably a consequence of the constraints of the training environment. Specifically, differences in the environmental information available to the athletes, and the need to alter the landing (feet-first vs. wrist-first landing) from the take-off, resulted in a decoupling of important perception and action information and a decomposition of the dive take-off task. In attempting to practise only high quality dives, many athletes have followed a traditional motor learning approach (Schmidt, 1975) and tried to eliminate take-off variations during training. Chapter Four examined whether observable differences existed between the movement kinematics of elite divers in the preparation phases of baulked (prematurely terminated) and completed take-offs that might justify this approach to training. Qualitative and quantitative analyses of variability within conditions revealed greater consistency and less variability when dives were completed, and greater variability amongst baulked take-offs for all participants.
Based on these findings, it is probable that athletes choose to abort a planned take-off when they detect small variations from the movement patterns (e.g., step lengths, jump height, springboard depression) of highly practiced comfortable dives. However, with no major differences in coordination patterns (topology of the angle-angle plots), and the potential for negative performance outcomes in competition, there appears to be no training advantage in baulking on unsatisfactory take-offs during training, except when a threat of injury is perceived by the athlete. Instead, it was considered that enhancing the athletes' movement adaptability would be a more functional motor learning strategy. In Chapter Five, a twelve-week training programme was conducted to determine whether a sample of elite divers were able to adapt their movement patterns and complete dives successfully, regardless of the perceived quality of their preparatory movements on the springboard. The data indeed suggested that elite divers were able to adapt their movements during the preparatory phase of the take-off and complete good quality dives under more varied take-off conditions; displaying greater consistency and stability in the key performance outcome (dive entry). These findings are in line with previous research findings from other sports (e.g., shooting, triple jump and basketball) and demonstrate how functional or compensatory movement variability can afford greater flexibility in task execution. By previously only practising dives with good quality take-offs, it can be argued that divers only developed strong couplings between information and movement under very specific performance circumstances. As a result, this sample was sometimes characterised by poor performance in competition when the athletes experienced a suboptimal take-off. 
Throughout this training programme, in which divers were encouraged to minimise baulking and attempt to complete every dive, they demonstrated that it was possible to strengthen the information and movement coupling in a variety of performance circumstances, widening the basin of performance solutions and providing alternative couplings to solve a performance problem even when the take-off was not ideal. The results of this programme of research provide theoretical and experimental implications for understanding representative learning design and movement pattern variability in applied sports science research. Theoretically, this PhD programme contributes empirical evidence to demonstrate the importance of representative design in the training environments of high performance sports programmes. Specifically, this thesis advocates for the design of learning environments that effectively capture and enhance functional and flexible movement responses representative of performance contexts. Further, data from this thesis showed that elite athletes performing complex tasks were able to adapt their movements in the preparatory phase and complete good quality dives under more varied take-off conditions. This finding signals some significant practical implications for athletes, coaches and sports scientists. As such, it is recommended that care should be taken by coaches when designing practice tasks, since the clear implication is that athletes need to practise adapting movement patterns during ongoing regulation of multi-articular coordination tasks. For example, volleyball servers can adapt to small variations in the ball toss phase, long jumpers can visually regulate gait as they prepare for the take-off, and springboard divers need to continue to practise adapting their take-off from the hurdle step.
In summary, the studies of this programme of work have confirmed that the task constraints of training environments in elite sport performance programmes need to provide a faithful simulation of a competitive performance environment in order that performance outcomes may be stabilised with practice. Further, it is apparent that training environments can be enhanced by ensuring the representative design of task constraints, which have high action fidelity with the performance context. Ultimately, this study recommends that the traditional coaching adage "perfect practice makes perfect" be reconsidered, instead advocating that practice should be, as Bernstein (1967) suggested, "repetition without repetition".


Today’s economy is a knowledge-based economy in which knowledge is a crucial facilitator for individuals as well as an instigator of success. Due to the impact of globalisation, universities face new challenges and opportunities. Accordingly, they ought to be more innovative and develop their own competitive advantages. One of the most important goals of universities is developing students into professional knowledge workers. Therefore, knowledge sharing and transfer at the tertiary level between students and supervisors is vital in universities, as it decreases costs and provides an affordable way to do research. Knowledge-sharing impact factors can be categorised into three groups: organisational, individual, and technical factors. Individual barriers to knowledge sharing include a lack of time and trust, and a lack of communication skills and social networks. IT systems such as e-learning, blogs and portals can increase knowledge-sharing capability. However, IT systems are only tools, not solutions; individuals remain responsible for sharing information and knowledge. This paper proposes a new research model to examine the effect of individual factors, organisational factors (learning strategy, trust culture, supervisory support) and technological factors on knowledge sharing in the research supervision process.

Relevância:

100.00%

Publicador:

Resumo:

This article analyses co-movements in a wide group of commodity prices during the period 1992–2010. Our methodological approach is based on the correlation matrix and the networks embedded within it. Through this approach we are able to summarise global interaction and interdependence, capturing the existing heterogeneity in the degrees of synchronisation between commodity prices. Our results yield two main findings: (a) we do not observe a persistent increase in the degree of co-movement of commodity prices in our time sample; however, from mid-2008 to the end of 2009, co-movements almost doubled compared with the average correlation; (b) we observe three groups of commodities which have exhibited similar price dynamics (metals, oil and grains, and oilseeds) and which increased their degree of co-movement during the sampled period.
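The co-movement measure described above can be illustrated with a minimal sketch: compute the correlation matrix of a panel of return series and average its off-diagonal entries. The function name and the synthetic common-factor data below are assumptions for illustration only, not the article's actual dataset or estimator.

```python
import numpy as np

def avg_pairwise_correlation(returns: np.ndarray) -> float:
    """Mean of the off-diagonal entries of the correlation matrix.

    `returns` has shape (T, N): T observations of N commodity return series.
    """
    corr = np.corrcoef(returns, rowvar=False)   # N x N correlation matrix
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]     # drop the diagonal of ones
    return float(off_diag.mean())

# Synthetic example: three series sharing a common factor co-move strongly,
# while three independent series do not.
rng = np.random.default_rng(0)
common = rng.normal(size=500)
coupled = np.column_stack([common + 0.5 * rng.normal(size=500)
                           for _ in range(3)])
independent = rng.normal(size=(500, 3))

print(avg_pairwise_correlation(coupled))      # high (common factor)
print(avg_pairwise_correlation(independent))  # near zero
```

A rolling version of the same statistic over, say, monthly windows would reproduce the kind of time profile the article examines, with a spike corresponding to the mid-2008 to late-2009 episode.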

Relevância:

100.00%

Publicador:

Resumo:

The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems, as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows subscribers to express their interests very accurately. In order to implement a content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding the event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network; it is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes at a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually partially ordered set (poset) based data structures. In this work, we present an algorithm that aims to improve scalability in content-based networks by reducing the workload of content-based routers, offloading some of their content routing cost to clients. We also provide experimental results of the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking, and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
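The covering relation that makes routing tables poset-based can be sketched in a few lines. This toy model, restricted to equality constraints, is an illustration of the general idea only, not the thesis's actual algorithm or data structures: subscription `a` covers `b` (i.e. `a >= b` in the poset) when every event matching `b` also matches `a`, so a router only needs to forward the poset's least specific subscriptions upstream.

```python
# Toy model of content-based matching and the covering (poset) relation.
# Subscriptions map attribute names to required values (equality only).
Subscription = dict

def matches(event: dict, sub: Subscription) -> bool:
    """An event matches a subscription if it satisfies every constraint."""
    return all(event.get(attr) == val for attr, val in sub.items())

def covers(a: Subscription, b: Subscription) -> bool:
    """a covers b if every event matching b also matches a; for equality
    constraints this means a's constraints are a subset of b's."""
    return all(b.get(attr) == val for attr, val in a.items())

broad = {"type": "quote"}                      # less specific subscription
narrow = {"type": "quote", "symbol": "NOK"}    # more specific subscription

assert covers(broad, narrow) and not covers(narrow, broad)
event = {"type": "quote", "symbol": "NOK", "price": 3.2}
assert matches(event, broad) and matches(event, narrow)
```

Maintaining this partial order is what becomes expensive at scale: inserting or removing a subscription requires comparing it against existing table entries, which motivates offloading part of that cost to clients.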

Relevância:

100.00%

Publicador:

Resumo:

Africa is threatened by climate change. The adaptive capacity of local communities continues to be weakened by ineffective and inefficient livelihood strategies and inappropriate development interventions. One of the greatest challenges for climate change adaptation in Africa relates to the governance of natural resources used by vulnerable poor groups as assets for adaptation. Practical good-governance activities for adaptation in Africa are urgently needed to support adaptation actions, interventions and planning. The adaptation role of forests has not been as prominent in international discourse and action as their mitigation role. This study therefore focused on the forest as one of the natural resources used for adaptation. The general objective of this research was to assess the extent to which cases of current forest governance practices in four African countries (Burkina Faso, the Democratic Republic of the Congo (DRC), Ghana and Sudan) support the adaptation of vulnerable societies and ecosystems to the impacts of climate change. Qualitative and quantitative analyses from surveys, expert consultations and group discussions were used in analysing the case studies. The entire research was guided by three conceptual sets of thinking: forest governance, climate change vulnerability and ecosystem services. Data for the research were collected from selected ongoing forestry activities and programmes. The study mainly dealt with forest management policies and practices that can improve the adaptation of forest ecosystems (Study I) and the adaptive capacity achieved through the management of forest resources by vulnerable farmers (Studies II, III, IV and V). It was found that adaptation is not part of current forest policies; instead, policies contain elements of risk management practices, which are also relevant to the adaptation of forest ecosystems.
These practices include, among others, the management of forest fires, forest genetic resources and non-timber resources, and silvicultural practices. Better livelihood opportunities emerged as the priority for the farmers. These vulnerable farmers practised different forms of forest management, and they have a wide range of experience and practical knowledge relevant to ensuring livelihood improvement alongside sustainable management and good governance of natural resources. The contribution of traded non-timber forest products to climate change adaptation appears limited for local communities, based on how returns are distributed among the stakeholders in the market chain. Plantation (agro)forestry, if well implemented and managed by communities, has high potential to reduce socio-ecological vulnerability by increasing food production and restocking degraded forest lands. Integration of legal arrangements with continuous monitoring, evaluation and improvement may enable this activity to support short-, medium- and long-term expectations related to adaptation processes. The study concludes that effective forest governance initiatives led by vulnerable poor groups represent one practical way to improve the adaptive capacity of socio-ecological systems against the impacts of climate change in Africa.

Relevância:

100.00%

Publicador:

Resumo:

In the present paper, the ultrasonic strain sensing performance of a large-area piezoceramic coating with Interdigital Transducer (IDT) electrodes is studied. The piezoceramic coating is prepared using a slurry coating technique and the piezoelectric phase is achieved by poling under a DC field. To study the sensing performance of the piezoceramic coating with IDT electrodes for strain induced by guided waves, the piezoceramic coating is fabricated on the surface of a beam specimen at one end and the ultrasonic guided waves are launched with a piezoelectric wafer bonded at the other end. A wide frequency band of operation is often needed for the effective implementation of sensors in the Structural Health Monitoring (SHM) of various structures, for different types of damage. A wider frequency band of operation is achieved in the present study by varying the number of IDT electrodes contributing to the voltage output for the induced dynamic strain. In the present work, the fabricated piezoceramic coatings with IDT electrodes have been characterized for dynamic strain sensing applications using the guided wave technique at various frequencies. Strain levels of the launched guided wave are varied by varying the magnitude of the input voltage sent to the actuator. The variation of sensitivity with the strain level of the guided wave is studied for combinations of different numbers of IDT electrodes. The piezoelectric coefficient e(11) is determined at different frequencies and at different strain levels using the guided wave technique.
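The sensitivity studied above is, in essence, the slope of the sensor's output voltage versus the applied dynamic strain at a given frequency. A minimal sketch of that calibration step follows; the function name and the numerical values are hypothetical illustrations, not data from the study.

```python
import numpy as np

def sensitivity(strain: np.ndarray, voltage: np.ndarray) -> float:
    """Least-squares slope of sensor output voltage versus applied strain,
    i.e. volts of output per unit strain at a fixed excitation frequency."""
    slope, _intercept = np.polyfit(strain, voltage, 1)
    return float(slope)

# Hypothetical calibration points: strain amplitude of the launched guided
# wave (set via the actuator input voltage) against measured sensor output.
strain = np.array([10e-6, 20e-6, 30e-6, 40e-6])       # applied strain levels
voltage = np.array([0.9e-3, 2.1e-3, 3.0e-3, 4.0e-3])  # sensor output, volts

print(f"sensitivity = {sensitivity(strain, voltage):.1f} V per unit strain")
```

Repeating this fit for each excitation frequency and each combination of active IDT electrodes yields the kind of sensitivity-versus-frequency characterization the paper describes.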