867 results for Conflict (resolution)


Relevance: 20.00%

Publisher:

Abstract:

Well-known tariff reform rules that are guaranteed to increase welfare will not necessarily increase market access, while rules that are guaranteed to increase market access will not necessarily increase welfare. The present paper proposes a new set of tariff reforms that can achieve both objectives at the same time.

Relevance: 20.00%

Publisher:

Abstract:

The subspace intersection method (SIM) provides unbiased bearing estimates of multiple acoustic sources in a range-independent shallow ocean using a one-dimensional search, without prior knowledge of source ranges and depths. The original formulation of this method is based on the deployment of a horizontal linear array of hydrophones, which measure acoustic pressure. In this paper, we extend SIM to an array of acoustic vector sensors, which measure pressure as well as all components of particle velocity. The use of vector sensors reduces the minimum number of sensors required by a factor of 4, and also eliminates the constraint that the intersensor spacing should not exceed half a wavelength. The additional information provided by the vector sensors leads to performance enhancement in the form of lower estimation error and higher resolution.
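
Since the abstract does not spell out the SIM estimator itself, the following minimal sketch only illustrates the vector-sensor measurement model it builds on: each sensor contributes pressure plus two in-plane particle-velocity components, i.e. the usual array phase multiplied by [1, cos θ, sin θ], and bearings are found by a one-dimensional search. All parameters (wavelength, sensor count, bearings) are illustrative, a plain conventional beamformer stands in for SIM, and the sensor spacing deliberately exceeds half a wavelength.

```python
# Illustrative vector-sensor bearing search (not the SIM estimator).
import numpy as np
from scipy.signal import find_peaks

LAM = 10.0                                    # wavelength, m (illustrative)
POS = np.arange(4) * LAM                      # 4 sensors spaced one wavelength:
                                              # beyond lambda/2, as the abstract allows

def steering(theta):
    """Vector-sensor steering: per-sensor phase times [p, vx, vy] response."""
    phase = np.exp(2j * np.pi * POS * np.cos(theta) / LAM)
    return np.kron(phase, np.array([1.0, np.cos(theta), np.sin(theta)]))

true_bearings = np.deg2rad([60.0, 100.0])
R = sum(np.outer(steering(t), steering(t).conj()) for t in true_bearings)
R = R + 0.01 * np.eye(R.shape[0])             # additive sensor noise

grid = np.deg2rad(np.linspace(0.0, 180.0, 721))
power = np.array([(steering(t).conj() @ R @ steering(t)).real
                  / np.linalg.norm(steering(t)) ** 2 for t in grid])
peaks, _ = find_peaks(power)
top2 = peaks[np.argsort(power[peaks])[-2:]]
print(np.rad2deg(np.sort(grid[top2])))        # approximately [60. 100.]
```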

Relevance: 20.00%

Publisher:

Abstract:

Background: The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation and functional properties of the assembly. Hence, detailed analysis of such structures is highly rewarding. However, the common problem in such analyses is the low resolution of these structures. In recent times, a number of attempts that combine low-resolution cryo-EM data with higher-resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling, have been reported. Even in such attempts, the best result one arrives at is a very coarse idea of the assembly structure in terms of a trace of the C-alpha atoms, which are modeled with modest accuracy. Methodology/Principal Findings: In this paper, we first present an objective approach to identify potentially solvent-exposed and buried residues solely from the positions of C-alpha atoms and the amino acid sequence, using residue-type-dependent thresholds for the accessible surface areas of C-alpha atoms. We extend the method further to recognize potential protein-protein interface residues. Conclusion/Significance: Our approach to identify buried and exposed residues solely from the positions of C-alpha atoms resulted in an accuracy of 84%, sensitivity of 83-89% and specificity of 67-94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70-96% and specificity of 58-94%. Interestingly, detailed analysis of cases of mismatch between recognition of interface residues from C-alpha positions and from all-atom models suggested that recognition of interfacial residues using C-alpha atoms alone corresponds better with the intuitive notion of what an interfacial residue is. Our method should be useful in the objective analysis of structures of protein assemblies when only C-alpha positions are available, as, for example, in cases of integration of cryo-EM data with high-resolution structures of the components of the assembly.
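
As a rough illustration of the classification idea (not the paper's calibrated procedure), the sketch below decides buried versus exposed from C-alpha positions alone by comparing a simple exposure proxy against a residue-type-dependent threshold; a C-alpha neighbour count stands in for the paper's C-alpha accessible surface areas, and the threshold values are invented for illustration.

```python
# Buried/exposed classification from C-alpha positions alone: a sketch with
# a neighbour-count proxy and illustrative residue-type-dependent thresholds.
import numpy as np

NEIGHBOUR_THRESHOLD = {"GLY": 14, "ALA": 15, "LEU": 17, "TRP": 18}  # illustrative

def classify_exposure(ca_coords, residue_types, cutoff=10.0):
    """ca_coords: (n, 3) C-alpha positions in Angstrom; returns labels."""
    d = np.linalg.norm(ca_coords[:, None] - ca_coords[None, :], axis=-1)
    counts = (d < cutoff).sum(axis=1) - 1          # neighbours, excluding self
    labels = []
    for count, res in zip(counts, residue_types):
        t = NEIGHBOUR_THRESHOLD.get(res, 16)       # residue-type-dependent cut
        labels.append("buried" if count > t else "exposed")
    return labels
```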

Relevance: 20.00%

Publisher:

Abstract:

In this paper we discuss a new technique to image the surfaces of metallic substrates using field emission from a pointed array of carbon nanotubes (CNTs). We consider a pointed height distribution of the CNT array under a diode configuration, with two side gates maintained at a negative potential, to obtain a highly intense beam of electrons localized at the center of the array. The CNT array on a metallic substrate serves as the cathode and the test substrate as the anode. Scanning the test substrate with the cathode reveals that the field emission current is highly sensitive to the surface features, with nanometer resolution. Surface features of semi-circular, triangular and rectangular geometries (projections and grooves) are considered for simulation. This surface scanning/mapping technique can be applied to surface roughness measurements with nanoscale accuracy, micro/nano damage detection, high-precision displacement sensors, vibrometers and accelerometers, among other applications.
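
The sensitivity claim can be illustrated with a back-of-the-envelope Fowler-Nordheim calculation: because the emission current depends exponentially on the local field, and the field scales roughly as V/d for a tip-to-substrate gap d, a nanometre-scale change in gap produces a measurable change in current. The bias, gap and field-enhancement factor below are illustrative placeholders, not values from the paper's simulations.

```python
# Fowler-Nordheim sensitivity of emission current to a nanometre gap change.
import numpy as np

A_FN, B_FN = 1.54e-6, 6.83e9      # standard Fowler-Nordheim constants
PHI = 5.0                          # work function, eV (illustrative)
BETA = 100.0                       # tip field-enhancement factor (illustrative)

def emission_current_density(V, d):
    """Fowler-Nordheim current density for bias V (volts) and gap d (m)."""
    E = BETA * V / d               # enhanced local field, V/m
    return (A_FN * E**2 / PHI) * np.exp(-B_FN * PHI**1.5 / E)

# A 10 nm surface bump (gap 1.00 um -> 0.99 um) changes the current by a
# measurable factor, which is what lets the scan resolve nanometre features.
I_flat = emission_current_density(100.0, 1.00e-6)
I_bump = emission_current_density(100.0, 0.99e-6)
print(f"current ratio over a 10 nm bump: {I_bump / I_flat:.2f}")  # ~1.10
```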

Relevance: 20.00%

Publisher:

Abstract:

We investigate the ability of a global atmospheric general circulation model (AGCM) to reproduce observed 20-year return values of the annual maximum daily precipitation totals over the continental United States as a function of horizontal resolution. We find that at the high resolutions enabled by contemporary supercomputers, the AGCM can produce values of comparable magnitude to high-quality observations. However, at the resolutions typical of the coupled general circulation models used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, the precipitation return values are severely underestimated.
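
For reference, a 20-year return value of annual maximum daily precipitation is conventionally obtained by fitting a generalized extreme value (GEV) distribution to the annual maxima and reading off the level exceeded with probability 1/20 per year; the sketch below shows this with synthetic data (the paper's exact estimation procedure may differ).

```python
# Estimating a 20-year return value from annual maxima via a GEV fit.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_maxima = genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=50,
                               random_state=rng)   # mm/day, synthetic

# Fit a GEV distribution to the sample of annual maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# The T-year return value is the level exceeded with probability 1/T per year.
T = 20
return_value = genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
print(f"20-year return value: {return_value:.1f} mm/day")
```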

Relevance: 20.00%

Publisher:

Abstract:

Carrier-phase ambiguity resolution over long baselines is challenging in BDS data processing. This is partially due to variations of the hardware biases in BDS code signals and their dependence on elevation angle. We present an assessment of satellite-induced code bias variations in BDS triple-frequency signals and ambiguity resolution procedures involving both geometry-free and geometry-based models. First, since the elevation of a GEO satellite remains unchanged, we propose to model the single-differenced fractional cycle bias with widespread ground stations. Second, the effects of code bias variations induced by GEO, IGSO and MEO satellites on ambiguity resolution of extra-wide-lane, wide-lane and narrow-lane combinations are analyzed. Third, together with the IGSO and MEO code bias variation models, the effects of code bias variations on ambiguity resolution are examined using 30 days of data collected in 2014 over baselines ranging from 500 to 2600 km. The results suggest that although the effect of code bias variations on the extra-wide-lane integer solution is almost negligible due to its long wavelength, the wide-lane integer solutions are rather sensitive to the code bias variations. Wide-lane ambiguity resolution success rates are evidently improved when code bias variations are corrected. However, the improvement in narrow-lane ambiguity resolution is not obvious, since it is based on the geometry-based model and there is only an indirect impact on the narrow-lane ambiguity solutions.
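
As background to the wide-lane results, the sketch below shows the standard geometry-free route to a float wide-lane ambiguity via the Melbourne-Wubbena combination, using BDS B1I/B2I frequencies; because the combination mixes code and phase, any uncorrected code bias variation leaks directly into the float estimate. The observation values are placeholders, and the paper's elevation-dependent bias model is not reproduced.

```python
# Float wide-lane ambiguity from the Melbourne-Wubbena combination.
C = 299_792_458.0          # speed of light, m/s
F1 = 1561.098e6            # BDS B1I frequency, Hz
F2 = 1207.140e6            # BDS B2I frequency, Hz

def widelane_ambiguity(L1, L2, P1, P2):
    """Float wide-lane ambiguity (cycles); L*/P* are phase/code in meters."""
    mw = (F1 * L1 - F2 * L2) / (F1 - F2) - (F1 * P1 + F2 * P2) / (F1 + F2)
    lam_wl = C / (F1 - F2)                 # wide-lane wavelength, ~0.85 m
    return mw / lam_wl

# In practice the float estimate is averaged over many epochs before rounding;
# uncorrected code bias variations shift this average and degrade the integer
# fixing success rate, which is the effect the paper quantifies.
n_float = widelane_ambiguity(L1=2.1e7, L2=2.1e7 + 0.35, P1=2.1e7, P2=2.1e7)
n_fixed = round(n_float)
```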

Relevance: 20.00%

Publisher:

Abstract:

We consider systems composed of a base system with multiple "features" or "controllers", each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a way that guarantees the "maximal" use of each feature. The methodology is based on the notion of "conflict-tolerant" features, which are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based composition scheme for such features, which ensures that each feature is maximally utilized. We also provide a formal framework for specifying, verifying, and synthesizing such features. In particular, we obtain a compositional technique for verifying systems developed in this framework.
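
A minimal sketch of the priority-based composition idea, with names and interfaces invented for illustration: every feature is consulted on each event, advice is narrowed feature by feature in priority order until it would become empty, and crucially every feature then observes the action actually taken, so an overridden (conflict-tolerant) feature can continue to offer meaningful advice later.

```python
# Priority-based composition of conflict-tolerant features (illustrative).
class Feature:
    def __init__(self, name, advise):
        self.name = name
        self.advise = advise          # event -> set of allowed actions

    def observe(self, event, action):
        pass                          # conflict-tolerant: track the actual run

def compose(features, event, all_actions):
    """Pick an action allowed by the longest prefix of the priority list."""
    allowed = set(all_actions)
    for f in features:                # features listed in priority order
        narrowed = allowed & f.advise(event)
        if not narrowed:
            break                     # lower-priority advice conflicts; stop
        allowed = narrowed
    action = min(allowed)             # any allowed action; deterministic pick
    for f in features:
        f.observe(event, action)      # every feature sees the real action
    return action

# Example: a "speed limit" feature overrides a "cruise" feature.
limit = Feature("limit", lambda e: {"brake"} if e == "too_fast"
                else {"hold", "accel", "brake"})
cruise = Feature("cruise", lambda e: {"accel", "hold"})
print(compose([limit, cruise], "too_fast", ["accel", "hold", "brake"]))  # brake
```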

Relevance: 20.00%

Publisher:

Abstract:

This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner, based on the notion of conflict-tolerant features that are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features, which guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.

Relevance: 20.00%

Publisher:

Abstract:

We have compared the spectral aerosol optical depth (AOD, τλ) and aerosol fine mode fraction (AFMF) of Collection 004 (C004), derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the National Aeronautics and Space Administration's (NASA) Terra and Aqua platforms, with that obtained from the Aerosol Robotic Network (AERONET) at Kanpur (26.45°N, 80.35°E), India, for the period 2001-2005. The spatially averaged (0.5° x 0.5°, centered at the AERONET sunphotometer) MODIS Level-2 aerosol parameters (10 km at nadir) were compared with the temporally averaged AERONET-measured AOD (within ±30 minutes of the MODIS overpass). We found that MODIS systematically overestimated AOD during the pre-monsoon season (March to June, known to be influenced by dust aerosols). The errors in AOD at 0.66 μm were correlated with the apparent reflectance at 2.1 μm (ρ*(2.1)), which MODIS C004 uses to estimate the surface reflectance in the visible channels (ρ(0.47) = ρ*(2.1)/4, ρ(0.66) = ρ*(2.1)/2). The large errors in AOD (Δτ(0.66) > 0.3) are found to be associated with the higher values of ρ*(2.1) (0.18 to 0.25), where the uncertainty in the ratios of reflectance is large (Δρ(0.66) ± 0.04, Δρ(0.47) ± 0.02). This could have resulted in lower surface reflectance and higher aerosol path radiance, and thus led to overestimation of AOD. While the MODIS-derived AFMF has a binary distribution, being too low (AFMF < 0.2) during dust-loading periods and ~1 for the rest of the retrievals, AERONET showed a range of values (0.4 to 0.9). The errors in τ(0.66) were also high in the scattering angle range 110°-140°, where the optical effects of nonspherical dust particles differ from those of spherical particles.
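
The quoted surface-reflectance assumption makes the error mechanism easy to see with plain arithmetic: at the bright end of the flagged ρ*(2.1) range, the stated ratio uncertainties are a large fraction of the assumed visible surface reflectance, and an underestimated surface term is compensated by extra aerosol path radiance, i.e. an overestimated AOD. A worked instance:

```python
# C004 visible surface reflectance from 2.1 um apparent reflectance:
# rho(0.47) = rho*(2.1)/4 and rho(0.66) = rho*(2.1)/2, with the ratio
# uncertainties quoted in the abstract.
for rho_21 in (0.18, 0.25):            # apparent reflectance at 2.1 um
    rho_066 = rho_21 / 2.0             # assumed surface reflectance, 0.66 um
    rho_047 = rho_21 / 4.0             # assumed surface reflectance, 0.47 um
    err_066, err_047 = 0.04, 0.02      # ratio uncertainties from the abstract
    print(f"rho*(2.1)={rho_21:.2f}: "
          f"rho(0.66)={rho_066:.3f}+/-{err_066} "
          f"({100 * err_066 / rho_066:.0f}% of the surface signal), "
          f"rho(0.47)={rho_047:.3f}+/-{err_047}")
```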

Relevance: 20.00%

Publisher:

Abstract:

An Ocean General Circulation Model of the Indian Ocean with high horizontal (0.25° x 0.25°) and vertical (40 levels) resolution is used to study the dynamics and thermodynamics of the Arabian Sea mini warm pool (ASMWP), the warmest region in the northern Indian Ocean during January-April. The model simulates the seasonal cycle of temperature, salinity and currents, as well as the wintertime temperature inversions in the southeastern Arabian Sea (SEAS), quite realistically under climatological forcing. An experiment that maintained a uniform salinity of 35 psu over the entire model domain reproduces the ASMWP similarly to the control run with realistic salinity; this is contrary to the existing theories that stratification caused by the intrusion of low-salinity water from the Bay of Bengal into the SEAS is crucial for the formation of the ASMWP. The contribution from temperature inversions to the warming of the SEAS is found to be negligible. Experiments with modified atmospheric forcing over the SEAS show that the low latent heat loss over the SEAS compared to the surroundings, resulting from the low winds due to the orographic effect of the Western Ghats, plays an important role in setting up the sea surface temperature (SST) distribution over the SEAS during November-March. During March-May, the SEAS responds quickly to the air-sea fluxes, and the peak SST during April-May is independent of the SST evolution during the previous months. The SEAS behaves as a low-wind, heat-dominated regime during November-May, and therefore the formation and maintenance of the ASMWP does not depend on near-surface stratification.

Relevance: 20.00%

Publisher:

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power.

This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models.

The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs.

For the clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both longwave and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal.

Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.

Relevance: 20.00%

Publisher:

Abstract:

The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, for which comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of the EKF for ORT in single-resolution and multiresolution formulations, together with adaptive estimation of the EKF's noise covariances. (C) 2010 Optical Society of America
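
The adaptive ingredient can be illustrated with a minimal innovation-based update, in which the measurement-noise covariance is re-estimated from a sliding window of innovations; the scalar-measurement linear model below is a generic stand-in, not the paper's ORT state-variable model.

```python
# One scalar-measurement EKF update with innovation-based adaptation of R.
import numpy as np

def adaptive_ekf_update(x, P, z, H, innovations, window=50, r_floor=1e-8):
    """x: state (n,); P: covariance (n,n); z: scalar measurement;
    H: (n,) measurement row (for a nonlinear model, the Jacobian at x)."""
    y = z - H @ x                          # scalar innovation
    innovations.append(y)
    del innovations[:-window]              # keep a sliding window
    # Innovation variance minus the state-predicted part estimates R.
    c_hat = np.var(innovations) if len(innovations) > 1 else r_floor
    R = max(c_hat - H @ P @ H, r_floor)    # keep R positive
    S = H @ P @ H + R                      # innovation covariance
    K = P @ H / S                          # Kalman gain (n,)
    x_new = x + K * y
    P_new = P - np.outer(K, H) @ P         # (I - K H) P
    return x_new, P_new, R
```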

Relevance: 20.00%

Publisher:

Abstract:

In an estuary, mixing and dispersion resulting from turbulence and small-scale fluctuations have strong spatio-temporal variability which cannot be resolved in conventional hydrodynamic models, and some models employ parameterizations developed for large water bodies. This paper presents small-scale diffusivity estimates from high-resolution drifters sampled at 10 Hz for periods of about 4 hours, resolving turbulence and shear diffusivity within a tidal shallow estuary (depth < 3 m). Taylor's diffusion theorem forms the basis of a first-order estimate for the diffusivity scale. Diffusivity varied between 0.001 and 0.02 m²/s during the flood tide experiment, and showed strong dependence (R² > 0.9) on the horizontal mean velocity within the channel. Enhanced diffusivity caused by shear dispersion, resulting from the interaction of the large-scale flow with the boundary geometries, was observed. Turbulence within the shallow channel showed some similarities with boundary layer flow, including consistency with the 5/3 slope predicted by Kolmogorov's similarity hypothesis within the inertial subrange. The diffusivities scale locally with a 4/3 power law, following Okubo's scaling, and the length scale grows as the 3/2 power of the time scale. The diffusivity scaling herein suggests that the modelling of small-scale mixing within tidal shallow estuaries can be approached from classical turbulence scaling, upon identifying the pertinent parameters.
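
A minimal sketch of the Taylor-type first-order estimate mentioned above: the absolute diffusivity follows from the growth rate of ensemble dispersion, K = (1/2) d⟨x′²⟩/dt. The synthetic random-walk tracks below stand in for the 10 Hz drifter data.

```python
# Taylor-style diffusivity estimate from an ensemble of drifter tracks.
import numpy as np

def taylor_diffusivity(x, dt):
    """x: (n_drifters, n_samples) positions in m; returns K(t) in m^2/s."""
    xp = x - x.mean(axis=0)                 # displacement about ensemble mean
    dispersion = (xp ** 2).mean(axis=0)     # <x'^2>(t)
    return 0.5 * np.gradient(dispersion, dt)

rng = np.random.default_rng(1)
dt = 0.1                                    # 10 Hz sampling
steps = rng.normal(0.0, 0.02, size=(20, 36000))  # synthetic position steps
x = np.cumsum(steps, axis=1)                # random-walk drifter tracks
K = taylor_diffusivity(x, dt)
print(f"late-time diffusivity ~ {K[-6000:].mean():.4f} m^2/s")
```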

Relevance: 20.00%

Publisher:

Abstract:

In this paper, an approach for automatic road extraction in an urban region using structural, spectral and geometric characteristics of roads is presented. Roads are extracted in two stages: pre-processing and road extraction. Initially, the image is pre-processed to improve the tolerance of the method by reducing clutter (which mostly represents buildings, parking lots, vegetation regions and other open spaces). The road segments are then extracted using Texture Progressive Analysis (TPA) and the normalized-cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the normalized-cut method for road extraction is a graph-based method that generates an optimal partition of road segments. The performance (quality measures) of road extraction using the TPA and normalized-cut methods is compared. The experimental results show that the normalized-cut method is more effective at extracting road segments in an urban region from high-resolution satellite imagery.
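
As an illustration of the graph-partitioning step (not the paper's implementation, and without the TPA texture statistics), the sketch below runs scikit-learn's spectral clustering, which optimizes a normalized-cut-style objective, on a tiny synthetic image containing a bright road-like stripe.

```python
# Graph-based segmentation in the spirit of normalized cuts, via spectral
# clustering on a pixel-adjacency graph with intensity-gradient weights.
import numpy as np
from sklearn.feature_extraction.image import img_to_graph
from sklearn.cluster import spectral_clustering

img = np.zeros((40, 40))
img[18:22, :] = 1.0                          # bright horizontal "road"
img += 0.05 * np.random.default_rng(0).normal(size=img.shape)

graph = img_to_graph(img)                    # pixel graph, gradient weights
graph.data = np.exp(-graph.data / graph.data.std())  # similarity weights
labels = spectral_clustering(graph, n_clusters=2, eigen_solver="arpack",
                             random_state=0)
segmentation = labels.reshape(img.shape)     # road vs. background partition
```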

Relevance: 20.00%

Publisher:

Abstract:

This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time and hybrid systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner, based on the notion of conflict-tolerant features that are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features, which guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.