979 results for Multi-standard receiver
Abstract:
In a field experiment, the action of mancozeb, a multi-site fungicide, was tested for the control of soybean rust caused by Phakopsora pachyrhizi. Its performance was compared with that of the mixture cyproconazole (DMI) + azoxystrobin (QoI). The soybean cultivar NA 7337 RR was used, with a population of 400,000 plants/ha cultivated in 20 m2 plots. Treatments consisted of mancozeb at two rates (1.5 and 2.0 kg/ha) applied four, six and eight times. The DMI + QoI mixture was applied three times at 0.3 L/ha + Nimbus. Rust severity was assessed six times in the plots and the data were integrated as the area under the disease progress curve (AUDPC). The plots were harvested and grain yield was expressed in kg/ha. AUDPC and yield data were subjected to analysis of variance, with means compared by Tukey's test (p = 0.05). Treatments with mancozeb were superior to the DMI + QoI mixture for both rust control and grain yield. Four applications of 2.0 kg/ha mancozeb were more efficient than three applications of the mixture used as the standard. Mancozeb has the potential to be added to fungicide mixtures as part of an anti-resistance strategy for soybean rust.
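As a concrete illustration of the AUDPC summary used above: the standard calculation integrates disease severity over the assessment dates by the trapezoidal rule. The sketch below is a minimal version of that computation; the assessment days and severity values are hypothetical, not data from this experiment.

```python
# Minimal sketch of the standard AUDPC calculation (trapezoidal rule).
# The assessment dates and severity values are hypothetical.

def audpc(days, severity):
    """Area under the disease progress curve for one plot."""
    area = 0.0
    for i in range(len(days) - 1):
        # Mean severity of two consecutive assessments times the interval.
        area += (severity[i] + severity[i + 1]) / 2 * (days[i + 1] - days[i])
    return area

# Six assessments (days after first rating) for a hypothetical plot.
days = [0, 7, 14, 21, 28, 35]
severity = [0.5, 2.0, 5.5, 12.0, 20.0, 31.0]   # % leaf area affected

print(audpc(days, severity))  # in %-days; this value is compared across treatments
```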
Abstract:
Multi-country models have not been very successful at replicating important features of the international transmission of business cycles. Standard models predict cross-country correlations of output and consumption that are, respectively, too low and too high. In this paper, we build a multi-country business cycle model with multiple sectors in order to analyze the role of sectoral shocks in the international transmission of the business cycle. We find that a model with multiple sectors generates a higher cross-country correlation of output than standard one-sector models, and a lower cross-country correlation of consumption. In addition, it predicts cross-country correlations of employment and investment that are closer to the data than the standard model. We also analyze the relative effects of multiple sectors, trade in intermediate goods, imperfect substitution between domestic and foreign goods, home preference, capital adjustment costs, and capital depreciation on the international transmission of the business cycle.
Abstract:
This paper constructs and estimates a sticky-price, Dynamic Stochastic General Equilibrium model with heterogeneous production sectors. Sectors differ in price stickiness, capital-adjustment costs and production technology, and use output from each other as material and investment inputs following an Input-Output Matrix and Capital Flow Table that represent the U.S. economy. By relaxing the standard assumption of symmetry, this model allows different sectoral dynamics in response to monetary policy shocks. The model is estimated by Simulated Method of Moments using sectoral and aggregate U.S. time series. Results indicate (1) substantial heterogeneity in price stickiness across sectors, with quantitatively larger differences between services and goods than previously found in micro studies that focus on final goods alone; (2) a strong sensitivity to monetary policy shocks on the part of construction and durable manufacturing; and (3) similar quantitative predictions at the aggregate level by the multi-sector model and a standard model that assumes symmetry across sectors.
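The Simulated Method of Moments used above chooses parameters so that moments simulated from the model match moments computed from the data, minimizing a weighted quadratic form in the moment gaps. The sketch below illustrates that criterion on a toy AR(1) "model"; the data, moments, weighting matrix and optimizer are illustrative assumptions, not the paper's multi-sector DSGE setup.

```python
# Hedged sketch of the SMM criterion on a toy AR(1) model.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.standard_normal(500)          # stand-in for an observed series

def moments(x):
    """Variance and first-order autocorrelation."""
    return np.array([np.var(x), np.corrcoef(x[:-1], x[1:])[0, 1]])

# Common random numbers: fixing the shocks keeps the objective deterministic.
eps = rng.standard_normal(5000)

def simulate(rho, sigma):
    """Toy model: y_t = rho * y_{t-1} + sigma * eps_t."""
    y = np.zeros(len(eps))
    for t in range(1, len(eps)):
        y[t] = rho * y[t - 1] + sigma * eps[t]
    return y

m_data = moments(data)
W = np.eye(2)                            # identity weighting matrix

def smm_objective(theta):
    g = moments(simulate(*theta)) - m_data   # moment gaps
    return g @ W @ g                         # quadratic-form SMM criterion

print(minimize(smm_objective, x0=[0.5, 1.0], method="Nelder-Mead").x)
```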
Abstract:
Towed-array electronics is essentially a multichannel real-time data acquisition system. The major challenges involved are the simultaneous acquisition of data from multiple channels, telemetry of the data over the tow cable (several kilometres in some systems) and synchronization with the onboard receiver for accurate reconstruction. A serial protocol is best suited to transmit the data to the onboard electronics, since the number of wires inside the tow cable is limited, and the best transmission medium for data over large distances is optical fibre. This paper describes a two-step approach towards the realization of a reliable telemetry scheme for the sensor data using standard protocols, and discusses the two resulting schemes. The first scheme converts parallel, time-multiplexed multi-sensor data to Ethernet, so that existing towed arrays can be upgraded to Ethernet; here the last leg of the transmission is Ethernet over fibre. For the next generation of towed arrays, the data must be digitized and converted to Ethernet close to the sensor; this is the second scheme, at the heart of which is the Analog-to-Ethernet node. In addition to a more reliable interface, this enables easier fault detection and in-field firmware updates for the towed arrays. The design challenges and considerations for incorporating a network of embedded devices within the array are highlighted.
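To make the telemetry idea concrete, the sketch below shows the kind of framing an Analog-to-Ethernet node might perform: a block of synchronously sampled multi-channel data is stamped with a sequence number and timestamp (for receiver-side reconstruction and loss detection), packed, and sent as a UDP datagram. The frame layout, port and channel count are illustrative assumptions, not the scheme from the paper.

```python
# Hypothetical framing of multi-channel sensor samples over Ethernet/UDP.

import socket
import struct
import time

N_CHANNELS = 32
DEST = ("127.0.0.1", 5005)           # onboard receiver address (hypothetical)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(seq, samples):
    """samples: one 16-bit value per channel for this sampling instant."""
    header = struct.pack("!IQ", seq, time.time_ns())   # sequence no. + timestamp
    payload = struct.pack(f"!{N_CHANNELS}h", *samples)
    sock.sendto(header + payload, DEST)

# One dummy frame: channel index used as a stand-in sample value.
send_frame(0, list(range(N_CHANNELS)))
```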
Abstract:
In this paper, different recovery methods applied at different network layers and time scales are used to enhance network reliability. Each layer deploys its own fault management methods; however, current recovery methods are applied to only a specific layer. New protection schemes, based on the proposed partial disjoint path algorithm, are defined in order to avoid protection duplication in a multi-layer scenario. The new protection schemes also encompass shared segment backup computation and shared risk link group identification. A complete set of experiments demonstrates the efficiency of the proposed methods relative to previous ones, in terms of the resources used to protect the network, the failure recovery time and the request rejection ratio.
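For contrast with the partial disjoint path algorithm proposed above, the sketch below shows the classic fully link-disjoint baseline: compute a primary path, remove its links, and route a backup over what remains. This is the textbook scheme the paper improves on, not the authors' algorithm, and the topology is a made-up example.

```python
# Baseline link-disjoint primary/backup computation (not the paper's algorithm).

import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 1), ("a", "d", 2),
    ("d", "c", 2), ("b", "d", 1),
])

primary = nx.shortest_path(g, "a", "c", weight="weight")

# Remove primary links and reroute to obtain a link-disjoint backup.
h = g.copy()
h.remove_edges_from(zip(primary, primary[1:]))
backup = nx.shortest_path(h, "a", "c", weight="weight")

print(primary, backup)   # ['a', 'b', 'c'] and ['a', 'd', 'c']
```

A partial disjoint approach would relax the full-disjointness requirement on segments that are already protected at another layer, which is what saves resources in the multi-layer scenario.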
Abstract:
This paper focuses on QoS routing with protection in an MPLS network over an optical layer. In this multi-layer scenario, each layer deploys its own fault management methods. A partially protected optical layer is proposed, with the rest of the network protected at the MPLS layer. New protection schemes that avoid protection duplication are proposed. Moreover, this paper also introduces a new traffic classification based on the level of reliability. The failure impact is evaluated in terms of recovery time, depending on the traffic class. The proposed schemes also include a novel variation of minimum interference routing and shared segment backup computation. A complete set of experiments shows that the proposed schemes are more efficient than previous ones in terms of the resources used to protect the network, the failure impact and the request rejection ratio.
Abstract:
In IP/MPLS over WDM networks, which carry large amounts of information, the ability to guarantee that traffic reaches its destination node has become an important problem, since the failure of a single network element can result in a large amount of lost information. To guarantee that traffic affected by a failure reaches its destination node, new routing algorithms have been defined that incorporate knowledge of the protection available at both layers: the optical layer (WDM) and the packet-based layer (IP/MPLS). In this way, reserving resources to protect the traffic at both layers is avoided. The new algorithms result in better use of network resources, offer fast recovery times, avoid resource duplication and reduce the number of optical-to-electrical signal conversions.
Abstract:
This paper discusses the issues that arise in multi-organisational collaborative groups (MOCGs) in the public sector and how a technology-based group support system (GSS) could assist individuals within these groups. MOCGs are commonly used in the public sector to find solutions to multifaceted social problems. Finding solutions to such problems is difficult because their scope lies outside the boundary of a single government agency. The standard approach to solving them is collaborative, involving a diverse range of stakeholders. Collaborative working can be advantageous, but it also introduces its own pressures: conflicts can arise from the multiple contexts and goals of group members and the organisations they represent. Trust, communication and a shared interface are crucial to making any significant progress, and a GSS could support these elements.
Abstract:
This study investigates the response of the wintertime North Atlantic Oscillation (NAO) to increasing concentrations of atmospheric carbon dioxide (CO2), as simulated by 18 global coupled general circulation models that participated in phase 2 of the Coupled Model Intercomparison Project (CMIP2). The NAO has been assessed in control and transient 80-year simulations produced by each model under constant forcing and under CO2 concentrations increasing at 1% per year, respectively. Although generally able to simulate the main features of the NAO, the majority of models overestimate the observed mean wintertime NAO index of 8 hPa by 5-10 hPa. Furthermore, none of the models, in either the control or perturbed simulations, is able to reproduce decadal trends as strong as that seen in the observed NAO index from 1970-1995. Of the 15 models able to simulate the NAO pressure dipole, 13 predict a positive increase in the NAO with increasing CO2 concentrations. The magnitude of the response is generally small and highly model-dependent, which leads to large uncertainty in multi-model estimates such as the median estimate of 0.0061 +/- 0.0036 hPa per %CO2. Although an increase of 0.61 hPa in the NAO for a doubling of CO2 represents only a relatively small shift of 0.18 standard deviations in the probability distribution of the winter mean NAO, it can cause large relative increases in the probabilities of extreme values of the NAO associated with damaging impacts. Despite the large differences in NAO responses, the models robustly predict similar statistically significant changes in winter mean temperature (warmer over most of Europe) and precipitation (an increase over Northern Europe). Although these changes present a pattern similar to that expected from an increase in the NAO index, linear regression is used to show that the response is much greater than can be attributed to small increases in the NAO. NAO trends are therefore not the key contributor to model-predicted climate change in wintertime mean temperature and precipitation over Europe and the Mediterranean region. However, the models' inability to capture the observed decadal variability in the NAO might also signify a major deficiency in their ability to simulate NAO-related responses to climate change.
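A short worked illustration of the tail-probability point above: the 0.18 standard deviation shift is taken from the abstract, the normal distribution and the 2-sigma "extreme" threshold are illustrative choices, and yet the relative increase in exceedance probability is about 50%.

```python
# Small mean shift, large relative change in tail probability.

from scipy.stats import norm

shift = 0.18          # mean NAO shift for doubled CO2, in standard deviations
threshold = 2.0       # an "extreme" winter NAO, in standard deviations

p_before = norm.sf(threshold)             # P(NAO > 2 sd), unshifted
p_after = norm.sf(threshold - shift)      # same threshold after the shift

print(p_before, p_after, p_after / p_before)   # ~0.023, ~0.034, ~1.5x
```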
Abstract:
Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials, involving 43 surgeons, were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted by maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including treating all effects as fixed for the initial model building, modelling the variance of a parameter on a logarithmic scale, and centring continuous covariates. The initial model building indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons: the statistical test may have lacked sufficient power, and the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
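A hedged sketch of the model structure described above: a logistic regression for a binary complication outcome with a random surgeon intercept. Here statsmodels' variational Bayes mixed GLM stands in for the SAS maximum likelihood fit, and the simulated data, column names and effect sizes are all made up.

```python
# Illustrative mixed logistic model with a random surgeon effect.

import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 1380
df = pd.DataFrame({
    "surgeon": rng.integers(0, 43, n),        # 43 surgeons, as in the trials
    "laparoscopic": rng.integers(0, 2, n),    # type of operation (dummy)
})
surgeon_effect = rng.normal(0, 0.3, 43)       # random intercept per surgeon
logit = -2.2 + 0.3 * df["laparoscopic"] + surgeon_effect[df["surgeon"]]
df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = BinomialBayesMixedGLM.from_formula(
    "complication ~ laparoscopic",
    {"surgeon": "0 + C(surgeon)"},            # variance component for surgeon
    df,
)
print(model.fit_vb().summary())
```

With only ~10% events and 43 groups, the surgeon variance component is weakly identified, which mirrors the convergence and precision problems reported above.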
Abstract:
In this paper, an improved stochastic discrimination (SD) is introduced to reduce the error rate of the standard SD in the context of the multi-class classification problem. The learning procedure of the improved SD consists of two stages. In the first stage, a standard SD, but with a shorter learning period, is carried out to identify an important space where all the misclassified samples are located. In the second stage, the standard SD is modified by (i) restricting sampling to the important space and (ii) introducing a new discriminant function for samples in the important space. It is shown by mathematical derivation that the new discriminant function has the same mean but smaller variance than that of the standard SD for samples in the important space. It is also shown that the smaller the variance of the discriminant function, the lower the error rate of the classifier. Consequently, the proposed improved SD improves on the standard SD through its capability of achieving higher classification accuracy. Illustrative examples are provided to demonstrate the effectiveness of the proposed improved SD.
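A hedged sketch of the standard argument linking the variance of the discriminant to the error rate (assuming, as is conventional in stochastic discrimination, that a sample is assigned to a class when its discriminant value $Y$ exceeds $1/2$; the paper's exact decision rule may differ): for a class-1 sample with $\mathbb{E}[Y] = \mu > 1/2$ and $\operatorname{Var}(Y) = \sigma^2$, Chebyshev's inequality gives

$$P\!\left(Y \le \tfrac{1}{2}\right) \;\le\; P\!\left(|Y - \mu| \ge \mu - \tfrac{1}{2}\right) \;\le\; \frac{\sigma^2}{\left(\mu - \tfrac{1}{2}\right)^2},$$

so with the mean unchanged, reducing the variance tightens the bound on the misclassification probability, which is the sense in which the new discriminant lowers the error rate.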
Abstract:
Since its introduction in 1993, the Message Passing Interface (MPI) has become a de facto standard for writing High Performance Computing (HPC) applications on clusters and Massively Parallel Processors (MPPs). The recent emergence of multi-core processor systems presents a new challenge for established parallel programming paradigms, including those based on MPI. This paper presents a new Java messaging system called MPJ Express. Using this system, we exploit multiple levels of parallelism - messaging and threading - to improve application performance on multi-core processors. We refer to our approach as nested parallelism. This MPI-like Java library can support nested parallelism by using Java or Java OpenMP (JOMP) threads within an MPJ Express process. The practicality of this approach is assessed by porting to Java a massively parallel structure formation code from cosmology called Gadget-2. We introduce nested parallelism in the Java version of the simulation code and report good speed-ups. To the best of our knowledge, this is the first time this kind of hybrid parallelism has been demonstrated in a high performance Java application.
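The nested-parallelism idea translates directly to other MPI bindings. The sketch below is an analogous Python illustration (mpi4py standing in for MPJ Express, a thread pool standing in for JOMP threads), not the paper's Java code; the workload is a dummy.

```python
# Two levels of parallelism: MPI processes outside, threads inside.
# Run with e.g. `mpiexec -n 4 python nested.py`.

from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def work(chunk):
    # Placeholder for per-thread computation on this rank's share of data.
    return sum(x * x for x in chunk)

# Outer level: each MPI process owns one slice of the problem.
data = list(range(rank * 1000, (rank + 1) * 1000))
chunks = [data[i::4] for i in range(4)]

# Inner level: threads work on sub-chunks within the process.
with ThreadPoolExecutor(max_workers=4) as pool:
    local = sum(pool.map(work, chunks))

total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum of squares:", total)
```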
Abstract:
Multi-rate multicarrier DS/CDMA is a potentially attractive multiple access method for future wireless communications networks that must support multimedia, and thus multi-rate, traffic. Several receiver structures exist for single-rate multicarrier systems, but little has been reported on multi-rate multicarrier systems. Since high-performance detection such as coherent demodulation requires explicit knowledge of the channel, this paper proposes a subspace-based scheme for timing and channel estimation in multi-rate multicarrier DS/CDMA systems, based on finite-length chip waveform truncation, which is applicable to both multicode and variable spreading factor systems. The performance of the proposed scheme for these two multi-rate systems is validated via numerical simulations. The effects of the finite-length chip waveform truncation on the performance of the proposed scheme are also analyzed theoretically.
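A minimal sketch of the subspace idea behind such estimators: eigendecompose the sample covariance of the received vectors and split the eigenvectors into signal and noise subspaces; timing and channel parameters are then found where candidate signature vectors are (closest to) orthogonal to the noise subspace. The dimensions and synthetic data below are illustrative only, not the paper's signal model.

```python
# Signal/noise subspace split from a sample covariance (MUSIC-style idea).

import numpy as np

rng = np.random.default_rng(0)
M, K, N = 16, 3, 2000        # observation length, no. of signals, snapshots

# Synthetic received data: K random signatures plus white noise.
S = rng.standard_normal((M, K))                  # unknown signature vectors
X = S @ rng.standard_normal((K, N)) + 0.1 * rng.standard_normal((M, N))

R = X @ X.T / N                                  # sample covariance
eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
En = eigvec[:, : M - K]                          # noise subspace (small eigenvalues)

def music_cost(v):
    """Small when v lies in the signal subspace (orthogonal to En)."""
    return np.linalg.norm(En.T @ v) / np.linalg.norm(v)

print(music_cost(S[:, 0]), music_cost(rng.standard_normal(M)))  # small vs large
```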
Abstract:
In 1997, the UK implemented the world's first commercial digital terrestrial television system. Under the ETS 300 744 standard, the chosen modulation method, COFDM, is assumed to be multipath resilient; previous work has shown that this is not necessarily the case. It has also been shown that the local oscillator required for demodulation from intermediate frequency to baseband must be very accurate. This paper shows that under multipath conditions, standard methods for obtaining local oscillator phase lock may not be adequate, and demonstrates a set of algorithms, designed for use with a simple local oscillator circuit, that correct for local oscillator phase offset and so maintain a low bit error rate in the presence of multipath.
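As background for the kind of correction the paper's algorithms refine, the sketch below shows a basic pilot-based phase offset estimate: correlate received pilot symbols against their known values and take the angle of the sum. The pilot values and the applied offset are made up, and real DVB-T pilot handling (scattered/continual pilots, equalization) is considerably more involved.

```python
# Basic pilot-correlation estimate of a common phase offset.

import numpy as np

rng = np.random.default_rng(0)
known_pilots = np.exp(1j * rng.choice([0, np.pi], size=64))   # BPSK pilots

phase_offset = 0.3                                  # radians, unknown to the receiver
received = known_pilots * np.exp(1j * phase_offset)
received += 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

# Angle of the pilot correlation sum estimates the common phase rotation.
estimate = np.angle(np.sum(received * np.conj(known_pilots)))
print(estimate)   # close to 0.3; feed back to de-rotate the constellation
```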
Abstract:
Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, depends primarily on the network propagation delay and on the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years; similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments, and to compare and contrast them with those described by the HLA definition documents.
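An illustrative sketch of causal ordering with vector clocks, the classic mechanism whose per-participant state underlies the point above that causal-ordering cost grows with the number of participants. This is a textbook construction, not the Reading scheme or the HLA services.

```python
# Vector-clock causal delivery: each timestamp has one entry per participant.

N = 3                                # number of participants

def local_event(clock, i):
    clock[i] += 1

def send(clock, i):
    local_event(clock, i)
    return list(clock)               # the timestamp travels with the message

def deliverable(msg_clock, clock, sender):
    """Deliver only when all causally prior messages have been seen."""
    return msg_clock[sender] == clock[sender] + 1 and all(
        msg_clock[k] <= clock[k] for k in range(N) if k != sender
    )

def deliver(msg_clock, clock, sender):
    clock[sender] = msg_clock[sender]

a, b = [0] * N, [0] * N
ts = send(a, 0)                      # participant 0 sends an update
print(deliverable(ts, b, 0))         # True: participant 1 may deliver it
deliver(ts, b, 0)
```

Because every message carries N entries and every process stores and compares them, the overhead scales with the participant count, which motivates the scalable alternative the paper proposes.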