863 results for semantic conflict resolution
Abstract:
We investigate the ability of a global atmospheric general circulation model (AGCM) to reproduce observed 20-year return values of the annual maximum daily precipitation totals over the continental United States as a function of horizontal resolution. We find that at the high resolutions enabled by contemporary supercomputers, the AGCM can produce values of comparable magnitude to high-quality observations. However, at the resolutions typical of the coupled general circulation models used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, the precipitation return values are severely underestimated.
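As background for how such return values are usually computed, here is a minimal sketch of the standard approach: fit a generalized extreme value (GEV) distribution to a series of annual maxima and read off the 1 − 1/T quantile as the T-year return value. The synthetic data and parameter values below are illustrative only, not the model or observational data used in the study.

```python
# Minimal sketch (not the study's pipeline): fit a GEV distribution to annual
# maxima and take the (1 - 1/T) quantile as the T-year return value.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic stand-in for 60 years of annual maximum daily precipitation (mm/day).
annual_maxima = genextreme.rvs(c=-0.1, loc=40.0, scale=12.0, size=60, random_state=rng)

# Fit the GEV to the annual maxima (maximum likelihood).
shape, loc, scale = genextreme.fit(annual_maxima)

T = 20
return_value = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
print(f"Estimated {T}-year return value: {return_value:.1f} mm/day")
```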
Abstract:
Carrier-phase ambiguity resolution over long baselines is challenging in BDS data processing. This is partially due to variations of the hardware biases in BDS code signals and their dependence on elevation angle. We present an assessment of satellite-induced code bias variations in BDS triple-frequency signals and of the ambiguity resolution procedures involving both geometry-free and geometry-based models. First, since the elevation of a GEO satellite remains unchanged, we propose to model the single-differenced fractional cycle bias with widespread ground stations. Second, the effects of code bias variations induced by GEO, IGSO and MEO satellites on ambiguity resolution of extra-wide-lane, wide-lane and narrow-lane combinations are analyzed. Third, using the IGSO and MEO code bias variation models, the effects of code bias variations on ambiguity resolution are examined with 30 days of data collected in 2014 over baselines ranging from 500 to 2600 km. The results suggest that although the effect of code bias variations on the extra-wide-lane integer solution is almost negligible owing to its long wavelength, the wide-lane integer solutions are rather sensitive to code bias variations. Wide-lane ambiguity resolution success rates are evidently improved when code bias variations are corrected. However, the improvement in narrow-lane ambiguity resolution is less pronounced, since it relies on a geometry-based model in which code bias variations have only an indirect impact on the narrow-lane ambiguity solutions.
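To illustrate why the extra-wide-lane ambiguity is far less sensitive to code biases than the wide-lane, the sketch below computes the wavelengths of the usual BDS triple-frequency combinations from the standard BDS-2 B1I/B2I/B3I frequencies; these are the conventional combinations, not necessarily the exact ones used in the paper.

```python
# Illustrative wavelengths of BDS triple-frequency linear combinations: the
# longer the combination wavelength, the more code-bias error it can absorb
# before integer rounding fails.
C = 299_792_458.0  # speed of light, m/s

f1 = 1561.098e6  # B1I, Hz
f2 = 1207.140e6  # B2I, Hz
f3 = 1268.520e6  # B3I, Hz

def combo_wavelength(fa, fb):
    """Wavelength of the (fa - fb) carrier-phase combination."""
    return C / abs(fa - fb)

print(f"extra-wide-lane (B2-B3): {combo_wavelength(f2, f3):.2f} m")  # ~4.88 m
print(f"wide-lane       (B1-B2): {combo_wavelength(f1, f2):.2f} m")  # ~0.85 m
print(f"narrow-lane     (B1+B2): {C / (f1 + f2):.3f} m")             # ~0.108 m
```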
Abstract:
We consider systems composed of a base system with multiple “features” or “controllers”, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a way that guarantees the “maximal” use of each feature. The methodology is based on the notion of “conflict-tolerant” features that are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based composition scheme for such features, which ensures that each feature is maximally utilized. We also provide a formal framework for specifying, verifying, and synthesizing such features. In particular, we obtain a compositional technique for verifying systems developed in this framework.
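A minimal, hypothetical sketch of priority-based composition of conflict-tolerant features, in the spirit of (but not reproducing) the paper's formal framework: every feature keeps advising on every event, and the composer simply takes the highest-priority applicable advice, so lower-priority features remain available whenever the higher-priority ones abstain.

```python
# Illustrative only: conflict-tolerant features keep offering advice even after
# being overridden; the composer consults them in priority order on each event.
from typing import Callable, Optional

Advice = Optional[str]  # an advised action, or None if the feature abstains

class Feature:
    def __init__(self, name: str, advise: Callable[[str], Advice]):
        self.name = name
        self.advise = advise  # keeps advising regardless of past overrides

def compose(features: list[Feature]):
    """Features are listed in decreasing priority order."""
    def controller(event: str) -> Advice:
        for f in features:              # highest priority first
            advice = f.advise(event)
            if advice is not None:      # first applicable advice wins
                return advice
        return None                     # fall back to the base system
    return controller

# Hypothetical usage: a safety feature outranks a comfort feature.
safety = Feature("safety", lambda e: "brake" if e == "obstacle" else None)
comfort = Feature("comfort", lambda e: "cruise" if e == "clear_road" else None)
ctrl = compose([safety, comfort])
print(ctrl("obstacle"), ctrl("clear_road"))  # brake cruise
```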
Abstract:
This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner based on the notion of conflict-tolerant features that are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features. This guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.
Abstract:
We have compared the spectral aerosol optical depth (AOD, τ(λ)) and aerosol fine mode fraction (AFMF) of Collection 004 (C004), derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the National Aeronautics and Space Administration's (NASA) Terra and Aqua platforms, with those obtained from the Aerosol Robotic Network (AERONET) at Kanpur (26.45°N, 80.35°E), India, for the period 2001-2005. The spatially averaged (0.5° x 0.5°, centered at the AERONET sunphotometer) MODIS Level-2 aerosol parameters (10 km at nadir) were compared with the temporally averaged AERONET-measured AOD (within ±30 minutes of the MODIS overpass). We found that MODIS systematically overestimated AOD during the pre-monsoon season (March to June, known to be influenced by dust aerosols). The errors in AOD at 0.66 μm were correlated with the apparent reflectance at 2.1 μm (ρ*(2.1)), which MODIS C004 uses to estimate the surface reflectance in the visible channels (ρ(0.47) = ρ*(2.1)/4, ρ(0.66) = ρ*(2.1)/2). The large errors in AOD (Δτ(0.66) > 0.3) are found to be associated with higher values of ρ*(2.1) (0.18 to 0.25), where the uncertainty in the reflectance ratios is large (Δρ(0.66) ±0.04, Δρ(0.47) ±0.02). This could have resulted in lower assumed surface reflectance and higher aerosol path radiance, and thus led to the overestimation of AOD. While the MODIS-derived AFMF showed a nearly binary distribution, being too low (AFMF < 0.2) during the dust-loading period and ~1 for the rest of the retrievals, AERONET showed a range of values (0.4 to 0.9). The errors in τ(0.66) were also high in the scattering-angle range 110°-140°, where the optical effects of nonspherical dust particles differ from those of spherical particles.
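A worked example of the C004 surface-reflectance assumption quoted above; the numbers are illustrative, not taken from the study.

```python
# The MODIS C004 assumption cited in the abstract: visible surface reflectance
# is estimated from the 2.1 um apparent reflectance as
#   rho(0.47) = rho*(2.1) / 4   and   rho(0.66) = rho*(2.1) / 2.
# Values below are illustrative only.
rho_2p1 = 0.20                 # apparent reflectance at 2.1 um (bright surface)

rho_066_surface = rho_2p1 / 2  # assumed surface reflectance at 0.66 um
rho_047_surface = rho_2p1 / 4  # assumed surface reflectance at 0.47 um

# If the true ratio deviates by the quoted uncertainty (+/-0.04 at 0.66 um),
# the surface is assumed darker than it is, the excess radiance is attributed
# to aerosol path radiance, and the retrieved AOD is overestimated.
print(rho_066_surface, rho_047_surface)
```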
Abstract:
Mobile applications are being increasingly deployed on a massive scale in various mobile sensor grid database systems. With the limited resources of mobile devices, how to process the huge number of queries from mobile users against distributed sensor grid databases becomes a critical problem for such mobile systems. While the fundamental semantic cache technique has been investigated for query optimization in sensor grid database systems, the problem remains difficult because more realistic multi-dimensional constraints have not been considered in existing methods. To solve this problem, a new semantic cache scheme is presented in this paper for location-dependent data queries in distributed sensor grid database systems. It considers multi-dimensional constraints or factors in a unified cost model architecture, determines the parameters of the cost model using the concept of Nash equilibrium from game theory, and makes semantic cache decisions from the established cost model. Scenarios involving the three factors of semantics, time, and location are investigated as special cases, which improve on existing methods. Experiments are conducted to demonstrate the semantic cache scheme presented in this paper for distributed sensor grid database systems.
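A hypothetical sketch of what a unified cost model over the three factors named in the abstract (semantics, time, location) could look like; the weights, decay constant and scoring functions below are invented for illustration, and the paper's Nash-equilibrium determination of the parameters is not reproduced.

```python
# Illustrative only: score a cached entry for a location-dependent query by
# combining semantic overlap, freshness and location proximity.
from dataclasses import dataclass
import math, time

@dataclass
class CacheEntry:
    semantic_overlap: float   # fraction of the query answerable from this entry, in [0, 1]
    timestamp: float          # when the entry was cached (seconds since epoch)
    location: tuple           # (x, y) position where the entry's data is valid

def score(entry: CacheEntry, query_location: tuple,
          w_sem=0.5, w_time=0.2, w_loc=0.3, now=None) -> float:
    """Higher score = more useful cached entry for answering the query locally."""
    now = time.time() if now is None else now
    freshness = math.exp(-(now - entry.timestamp) / 600.0)   # decay over ~10 min (assumed)
    proximity = 1.0 / (1.0 + math.dist(entry.location, query_location))
    return w_sem * entry.semantic_overlap + w_time * freshness + w_loc * proximity

entry = CacheEntry(semantic_overlap=0.8, timestamp=time.time() - 120, location=(0.0, 0.0))
print(score(entry, query_location=(1.0, 2.0)))
```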
Abstract:
An Ocean General Circulation Model of the Indian Ocean with high horizontal (0.25° x 0.25°) and vertical (40 levels) resolution is used to study the dynamics and thermodynamics of the Arabian Sea mini warm pool (ASMWP), the warmest region in the northern Indian Ocean during January-April. The model simulates the seasonal cycle of temperature, salinity and currents, as well as the wintertime temperature inversions in the southeastern Arabian Sea (SEAS), quite realistically with climatological forcing. An experiment that maintained a uniform salinity of 35 psu over the entire model domain reproduces the ASMWP in a manner similar to the control run with realistic salinity; this is contrary to the existing theories that stratification caused by the intrusion of low-salinity water from the Bay of Bengal into the SEAS is crucial for the formation of the ASMWP. The contribution from temperature inversions to the warming of the SEAS is found to be negligible. Experiments with modified atmospheric forcing over the SEAS show that the low latent heat loss over the SEAS compared to the surroundings, resulting from the low winds due to the orographic effect of the Western Ghats, plays an important role in setting up the sea surface temperature (SST) distribution over the SEAS during November-March. During March-May, the SEAS responds quickly to the air-sea fluxes, and the peak SST during April-May is independent of the SST evolution during previous months. The SEAS behaves as a low-wind, heat-dominated regime during November-May and, therefore, the formation and maintenance of the ASMWP is not dependent on the near-surface stratification.
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach, operationally, scales of 1-4 km. This requires fewer approximations in the model equations, a more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
Abstract:
The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed, and these biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, for which comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of the EKF for ORT in single-resolution and multiresolution formulations, as well as the adaptive estimation of the EKF's noise covariances. (C) 2010 Optical Society of America
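For reference, a textbook extended-Kalman-filter predict/update step is sketched below; the paper's tomographic state-variable model, wavelet transformation and adaptive noise-covariance estimation are not reproduced here.

```python
# Generic EKF predict/update step (a standard sketch, not the paper's model).
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """x, P: state estimate and covariance; z: measurement;
    f, h: process and measurement functions; F_jac, H_jac: their Jacobians;
    Q, R: process and measurement noise covariances."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```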
Abstract:
In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by the ongoing success of convolutional networks in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot’s behaviour during navigation tasks. The system is made available to the community as a ROS module.
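A minimal sketch of the kind of Bayesian filtering that enforces temporal coherence over per-frame class scores; the class set, transition model and scores are hypothetical, and the filter actually used in the paper may differ.

```python
# Illustrative Bayes filter over place-class beliefs: predict with a "sticky"
# transition model, then correct with the current frame's classifier scores.
import numpy as np

def bayes_filter_step(belief, likelihood, transition):
    """belief: prior over classes; likelihood: current frame's class scores;
    transition[i, j] = p(class j at time t | class i at time t-1)."""
    predicted = transition.T @ belief     # prediction: places change slowly
    posterior = predicted * likelihood    # correction with the new observation
    return posterior / posterior.sum()    # normalise

classes = ["office", "corridor", "kitchen"]
# Sticky transitions: the robot most likely stays in the same place category.
T = np.full((3, 3), 0.05) + np.eye(3) * 0.85
belief = np.full(3, 1.0 / 3.0)
for frame_scores in ([0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.2, 0.7, 0.1]):
    belief = bayes_filter_step(belief, np.array(frame_scores), T)
print(dict(zip(classes, belief.round(3))))
```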
Abstract:
In an estuary, mixing and dispersion resulting from turbulence and small-scale fluctuations have strong spatio-temporal variability that cannot be resolved in conventional hydrodynamic models, while some models employ parameterizations developed for large water bodies. This paper presents small-scale diffusivity estimates from high-resolution drifters sampled at 10 Hz for periods of about 4 hours to resolve turbulence and shear diffusivity within a tidal shallow estuary (depth < 3 m). Taylor's diffusion theorem forms the basis of a first-order estimate for the diffusivity scale. Diffusivity varied between 0.001 and 0.02 m²/s during the flood tide experiment and showed a strong dependence (R² > 0.9) on the horizontal mean velocity within the channel. Enhanced diffusivity caused by shear dispersion, resulting from the interaction of the large-scale flow with the boundary geometries, was observed. Turbulence within the shallow channel showed some similarities with boundary-layer flow, including consistency with the −5/3 slope predicted by Kolmogorov's similarity hypothesis within the inertial subrange. The diffusivities scale locally with a 4/3 power law, following Okubo's scaling, and the length scale grows as the 3/2 power of the time scale. This scaling suggests that the modelling of small-scale mixing within tidal shallow estuaries can be approached from classical turbulence scaling once the pertinent parameters are identified.
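A first-order illustration of estimating diffusivity from drifter tracks in the spirit of Taylor's theorem, K ≈ (1/2) d⟨x′²⟩/dt; the 10 Hz tracks below are synthetic random walks, purely for illustration, not the field data.

```python
# Illustrative first-order diffusivity estimate: the displacement variance of a
# diffusing drifter cluster grows as <x'^2> = 2*K*t, so K ~ 0.5 * d<x'^2>/dt.
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1                      # s (10 Hz sampling)
n_drifters, n_steps = 8, 3000
true_K = 0.01                 # m^2/s, within the range reported in the abstract

# 1-D random-walk tracks whose step variance (2*K*dt) implies the chosen K.
steps = rng.normal(0.0, np.sqrt(2 * true_K * dt), size=(n_drifters, n_steps))
x = np.cumsum(steps, axis=1)

# Displacement variance across the cluster at each time, then a linear fit in t.
var = x.var(axis=0, ddof=1)
t = np.arange(1, n_steps + 1) * dt
slope = np.polyfit(t, var, 1)[0]
print(f"Estimated K ~ {0.5 * slope:.4f} m^2/s (true value {true_K})")
```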
Abstract:
This paper presents an approach for automatic road extraction in an urban region using structural, spectral and geometric characteristics of roads. Roads are extracted at two levels: pre-processing and road extraction. Initially, the image is pre-processed to improve tolerance by reducing clutter (which mostly represents buildings, parking lots, vegetation regions and other open spaces). The road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized cut method is a graph-based method that generates an optimal partition of road segments. The performance (quality measures) of road extraction using TPA and the Normalized cut method is compared. The experimental results show that the Normalized cut method is efficient in extracting road segments in an urban region from high-resolution satellite imagery.
Abstract:
This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time and hybrid systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner based on the notion of conflict-tolerant features that are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features. This guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.
Abstract:
Background: Family law reforms in Australia require separated parents in dispute to attempt mandatory family dispute resolution (FDR) in community-based family services before court attendance. However, there are concerns about such services when clients present with a history of high conflict and family violence. This study protocol describes a longitudinal study of couples presenting for family mediation services. The study aims to describe the profile of family mediation clients, including the type of family violence, and to determine the impact of violence profiles on FDR processes and outcomes, such as the type and durability of shared parenting arrangements and clients’ satisfaction with mediated agreements. Methods: A mixed-method, naturalistic longitudinal design is used. The sampling frame is clients presenting at nine family mediation centres across metropolitan, outer-suburban, and regional/rural sites in Victoria, Australia. Data are collected at pre-test, at completion of mediation, and six months later. Self-administered surveys are used at the three time points, with a telephone interview at the final post-test. The key study variable is family violence. Key outcome measures are changes in the type and level of acrimony and violent behaviours, the relationship between violence and mediated agreements, the durability of agreements over six months, and client satisfaction with mediation. Discussion: Family violence is a major risk to the physical and mental health of women and children. This study will inform debates about the role of family violence and how to manage it in the family mediation context. It will also inform decision-making about mediation practices through a better understanding of how mediation impacts on parenting agreements, and the implications for children, especially in the context of family violence.
Abstract:
Family mediation is mandated in Australia for couples in dispute over separation and parenting as a first step in dispute resolution, except where there is a history of intimate partner violence. However, validation of effective well-differentiated partner violence screening instruments suitable for mediation settings is at an early phase of development. This study contributes to calls for better violence screening instruments in the mediation context to detect a differentiated range of abusive behaviors by examining the reliability and validity of both established scales, and newly developed scales that measured intimate partner violence by partner and by self. The study also aimed to examine relationships between types of abuse, and between gender and types of abuse. A third aim was to examine associations between types of abuse and other relationship indicators such as acrimony and parenting alliance. The data reported here are part of a larger mixed method, naturalistic longitudinal study of clients attending nine family mediation centers in Victoria, Australia. The current analyses on baseline cross-sectional screening data confirmed the reliability of three subscales of the Conflict Tactics Scale (CTS2), and the reliability and validity of three new scales measuring intimidation, controlling and jealous behavior, and financial control. Most clients disclosed a history of at least one type of violence by partner: 95% reported psychological aggression, 72% controlling and jealous behavior, 50% financial control, and 35% physical assault. Higher rates of abuse perpetration were reported by partner versus by self, and gender differences were identified. There were strong associations between certain patterns of psychologically abusive behavior and both acrimony and parenting alliance. The implications for family mediation services and future research are discussed.