890 results for "problem of mediation"
Abstract:
Numerical climate models constitute the best available tools to tackle the problem of climate prediction. Two assumptions lie at the heart of their suitability: (1) a climate attractor exists, and (2) the numerical climate model's attractor lies on the actual climate attractor, or at least on the projection of the climate attractor on the model's phase space. In this contribution, the Lorenz '63 system is used both as a prototype system and as an imperfect model to investigate the implications of the second assumption. By comparing results drawn from the Lorenz '63 system and from numerical weather and climate models, the implications of using imperfect models for the prediction of weather and climate are discussed. It is shown that the imperfect model's orbit and the system's orbit are essentially different, purely due to model error and not to sensitivity to initial conditions. Furthermore, if a model is a perfect model, then the attractor, reconstructed by sampling a collection of initialised model orbits (forecast orbits), will be invariant to forecast lead time. This conclusion provides an alternative method for the assessment of climate models.
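As a purely illustrative sketch of the model-error point made above (not the paper's actual experiment), the following integrates the standard Lorenz '63 equations alongside an "imperfect model" that differs only by a small perturbation of the parameter rho; the parameter values, perturbation size and integration settings are assumptions made for illustration. Starting both from the same initial condition, the orbits separate purely because of model error, not because of differing initial conditions.

```python
# Minimal sketch (not the paper's experiment): integrate the Lorenz '63 system
# and an "imperfect model" that differs only by a small parameter error, from
# identical initial conditions, to illustrate divergence due to model error alone.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, state, sigma, rho, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

truth_params = (10.0, 28.0, 8.0 / 3.0)         # standard Lorenz '63 parameters
model_params = (10.0, 28.0 * 1.01, 8.0 / 3.0)  # 1% error in rho: the "imperfect model"

x0 = [1.0, 1.0, 1.0]                           # same initial condition for both
t_span = (0.0, 20.0)
t_eval = np.linspace(*t_span, 2000)

truth = solve_ivp(lorenz63, t_span, x0, args=truth_params, t_eval=t_eval, rtol=1e-9)
model = solve_ivp(lorenz63, t_span, x0, args=model_params, t_eval=t_eval, rtol=1e-9)

# RMS separation of the two orbits: nonzero purely because of model error,
# since the initial conditions are identical.
sep = np.sqrt(np.mean((truth.y - model.y) ** 2, axis=0))
print(f"orbit separation after {t_span[1]} time units: {sep[-1]:.3f}")
```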
Abstract:
Long Term Evolution (LTE) based networks lack native support for Circuit Switched (CS) services. The Evolved Packet System (EPS), which includes the Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) and the Evolved Packet Core (EPC), is a purely all-IP packet system. This introduces the problem of how to provide voice call support when a user is within an LTE network and how to ensure voice service continuity when the user moves out of LTE coverage. Different technologies have been proposed to provide voice service to LTE users and to ensure that the service continues outside LTE networks. The aim of this paper is to analyze and evaluate the overall performance of these technologies, along with Single Radio Voice Call Continuity (SRVCC) Inter-RAT handover to Universal Terrestrial Radio Access Network / GSM-EDGE Radio Access Network (UTRAN/GERAN). The possible solutions for providing voice calls and service continuity over LTE-based networks are Circuit Switched Fallback (CSFB), Voice over LTE via Generic Access (VoLGA), Voice over LTE (VoLTE) based on IMS/MMTel with SRVCC, and Over The Top (OTT) services such as Skype. This paper focuses mainly on the 3GPP standard solutions for implementing voice over LTE. The paper compares various aspects of these solutions and suggests a possible roadmap that mobile operators can adopt to provide seamless voice over LTE.
Abstract:
In this paper we propose and analyse a hybrid numerical-asymptotic boundary element method for the solution of problems of high frequency acoustic scattering by a class of sound-soft nonconvex polygons. The approximation space is enriched with carefully chosen oscillatory basis functions; these are selected via a study of the high frequency asymptotic behaviour of the solution. We demonstrate via a rigorous error analysis, supported by numerical examples, that to achieve any desired accuracy it is sufficient for the number of degrees of freedom to grow only in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods. This appears to be the first such numerical analysis result for any problem of scattering by a nonconvex obstacle. Our analysis is based on new frequency-explicit bounds on the normal derivative of the solution on the boundary and on its analytic continuation into the complex plane.
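For orientation, the enrichment described above is usually written in the general hybrid numerical-asymptotic form sketched below; this schematic ansatz is a standard one in the HNA boundary element literature and is included as an assumed sketch, since the paper's specific contribution, the choice of phase functions for nonconvex polygons and the frequency-explicit error analysis, is not reproduced here.

```latex
% Schematic hybrid numerical-asymptotic ansatz: the unknown boundary data is
% approximated by slowly varying amplitudes V_m, represented by piecewise
% polynomials, times known oscillatory factors e^{ik psi_m}. The specific
% phases psi_m for nonconvex polygons are the paper's contribution (not shown).
\[
  \frac{\partial u}{\partial n}(\mathbf{x}) \;\approx\; \Psi(\mathbf{x})
  \;+\; \sum_{m} V_m(\mathbf{x})\, e^{\,\mathrm{i} k \psi_m(\mathbf{x})},
  \qquad \mathbf{x} \in \Gamma,
\]
% so that only the slowly varying V_m need be resolved numerically, allowing the
% number of degrees of freedom to grow like \log k rather than linearly in k.
```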
Abstract:
The problem of technology obsolescence in information-intensive businesses (software and hardware no longer being supported and being replaced by improved and different solutions), combined with a cost-constrained market, can severely increase costs and operational and, ultimately, reputational risk. Although many businesses recognise technological obsolescence, the pervasive nature of technology often means they have little information with which to identify the risk and location of pending obsolescence, and little money to apply to the solution. This paper presents a low-cost structured method to identify obsolete software and the risk of its obsolescence, in which the structure of a business and its supporting IT resources are captured, modelled and analysed, and the risk to the business of technology obsolescence is identified so that remedial action can be taken using qualified obsolescence information. The technique is based on a structured modelling approach using enterprise architecture models and a heatmap algorithm to highlight high-risk obsolescent elements. The method has been tested and applied in practice in two consulting studies carried out by Capgemini involving three UK police forces. The generic technique could, however, be applied to any industry, and there are plans to improve it using ontology framework methods. This paper contains details of the enterprise architecture meta-models and related modelling.
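The following is a hypothetical, heavily simplified illustration of a heatmap-style obsolescence scoring of software elements drawn from an enterprise architecture model; the element names, scoring rule and band thresholds are invented for the example and are not the method used in the Capgemini studies.

```python
# Hypothetical illustration only, not the method described in the paper:
# score software elements of an enterprise-architecture model by obsolescence
# risk and bin them into heatmap bands for remedial prioritisation.
from dataclasses import dataclass

@dataclass
class SoftwareElement:
    name: str
    months_past_end_of_support: int   # 0 if still supported
    business_criticality: int         # 1 (low) .. 5 (high), from the EA model

def risk_score(e: SoftwareElement) -> float:
    # Simple illustrative rule: risk grows with time out of support,
    # weighted by how critical the supported business function is.
    obsolescence = min(e.months_past_end_of_support / 36.0, 1.0)  # saturate at 3 years
    return obsolescence * e.business_criticality

def heatmap_band(score: float) -> str:
    return "red" if score >= 3.0 else "amber" if score >= 1.5 else "green"

estate = [
    SoftwareElement("case-management app", 48, 5),
    SoftwareElement("intranet CMS", 12, 2),
    SoftwareElement("payroll system", 0, 4),
]
for e in sorted(estate, key=risk_score, reverse=True):
    print(f"{e.name:25s} {risk_score(e):4.1f} {heatmap_band(risk_score(e))}")
```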
Abstract:
There is a strong drive towards hyperresolution earth system models in order to resolve finer scales of motion in the atmosphere. The problem of obtaining a more realistic representation of terrestrial fluxes of heat and water, however, is not just a problem of moving to hyperresolution grid scales. It is much more a question of a lack of knowledge about the parameterisation of processes at whatever grid scale is being used for a wider modelling problem. Hyperresolution grid scales cannot alone solve the problem of this hyperresolution ignorance. This paper discusses these issues in more detail with specific reference to land surface parameterisations and flood inundation models. The importance of making local hyperresolution model predictions available for evaluation by local stakeholders is stressed. It is expected that this will be a major driving force for improving model performance in the future.
Abstract:
The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches to classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare state-of-the-art features in a multi-modality setup (intensity, depth and flow) across the far-infrared and visible domains. The experiments show that the feature families, namely intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains are highly complementary, but their relative performance varies across the different modalities. In our experiments, the FIR domain proved superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.
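As a sketch of what a multi-domain, multi-feature setup of this kind might look like in code (not the RIFIR benchmark protocol), the following concatenates HOG and LBP descriptors computed on paired far-infrared and visible patches and trains a linear SVM; the patch size, feature settings and classifier are illustrative assumptions.

```python
# Sketch of a multi-domain, multi-feature setup in the spirit of the paper:
# HOG and LBP features computed on paired far-infrared and visible patches are
# concatenated (early fusion) and classified with a linear SVM.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import LinearSVC

def patch_features(patch):
    h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

def fused_features(fir_patch, vis_patch):
    # Early fusion across domains: FIR and visible descriptors side by side.
    return np.concatenate([patch_features(fir_patch), patch_features(vis_patch)])

# X_fir, X_vis: sequences of aligned grayscale pedestrian patches (e.g. 128x64);
# y: pedestrian / non-pedestrian labels.
def train(X_fir, X_vis, y):
    X = np.stack([fused_features(f, v) for f, v in zip(X_fir, X_vis)])
    return LinearSVC(C=1.0).fit(X, y)
```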
Abstract:
We propose a topological approach to the problem of determining a curve from its iterated integrals. In particular, we prove that a family of terms in the signature series of a two-dimensional closed curve with finite p-variation, 1 ≤ p < 2, are in fact moments of its winding number. This relation allows us to prove that the signature series of a class of simple non-smooth curves uniquely determine the curves. This implies that, outside a Chordal SLE_κ null set, where 0 < κ ≤ 4, the signature series of curves uniquely determine the curves. Our calculations also enable us to express the Fourier transform of the n-point functions of SLE curves in terms of the expected signature of SLE curves. Although the techniques used in this article are deterministic, the results provide a platform for studying SLE curves through the signatures of their sample paths.
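For readers unfamiliar with the terminology, the signature series referred to above is the standard sequence of iterated integrals of the path, sketched below; the winding-number moment identity and the SLE results are the paper's contributions and are not reproduced.

```latex
% Standard definition of the signature of a path X : [0,T] -> R^2, included for
% context; the iterated integrals are understood in the Young sense for paths of
% finite p-variation with 1 <= p < 2.
\[
  S(X)_{0,T} \;=\; \Bigl( 1,\; \int_{0<t_1<T} \mathrm{d}X_{t_1},\;
  \int_{0<t_1<t_2<T} \mathrm{d}X_{t_1} \otimes \mathrm{d}X_{t_2},\;
  \int_{0<t_1<t_2<t_3<T} \mathrm{d}X_{t_1} \otimes \mathrm{d}X_{t_2} \otimes \mathrm{d}X_{t_3},\;
  \dots \Bigr).
\]
```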
Abstract:
In this article we assess the abilities of a new electromagnetic (EM) system, the CMD Mini-Explorer, for prospecting of archaeological features in Ireland and the UK. The Mini-Explorer is an EM probe which is primarily aimed at the environmental/geological prospecting market for the detection of pipes and geology. It has long been evident from the use of other EM devices that such an instrument might be suitable for shallow soil studies and applicable for archaeological prospecting. Of particular interest for the archaeological surveyor is the fact that the Mini-Explorer simultaneously obtains both quadrature (‘conductivity’) and in-phase (relative to ‘magnetic susceptibility’) data from three depth levels. As the maximum depth range is probably about 1.5 m, a comprehensive analysis of the subsoil within that range is possible. As with all EM devices the measurements require no contact with the ground, thereby negating the problem of high contact resistance that often besets earth resistance data during dry spells. The use of the CMD Mini-Explorer at a number of sites has demonstrated that it has the potential to detect a range of archaeological features and produces high-quality data that are comparable in quality to those obtained from standard earth resistance and magnetometer techniques. In theory the ability to measure two phenomena at three depths suggests that this type of instrument could reduce the number of poor outcomes that are the result of single measurement surveys. The high success rate reported here in the identification of buried archaeology using a multi-depth device that responds to the two most commonly mapped geophysical phenomena has implications for evaluation style surveys.
Abstract:
This paper uses the last few decades’ developments in the area of shared parenting to explore power within the framework of autopoietic theory. It traces how, prompted by turbulence from the political subsystem, family law has made several unsuccessful attempts to solve the perceived problem of post-separation dual-household parenting. It agrees with Luhmann and Teubner that closed autopoietic systems’ developments are limited by their normative and cognitive frameworks, and also argues that changes, which have occurred in family law, show that closed social systems do not function in total isolation. It considers power as ego’s ability to limit alter’s choices. In our functionally differentiated society, with its recent proliferation of communication, power appears more diffuse and impossible to plot into causal one-way relationships.
Abstract:
Anti-spoofing is attracting growing interest in biometrics, considering the variety of fake materials and new means to attack biometric recognition systems. New, unseen materials continuously challenge state-of-the-art spoofing detectors, suggesting the need for additional, systematic approaches to anti-spoofing. By incorporating liveness scores into the biometric fusion process, recognition accuracy can be enhanced, but traditional sum-rule based fusion algorithms are known to be highly sensitive to single spoofed instances. This paper investigates 1-median filtering as a spoofing-resistant generalised alternative to the sum-rule, targeting the problem of partial multibiometric spoofing where m out of n biometric sources to be combined are attacked. Augmenting previous work, this paper investigates the dynamic detection and rejection of liveness-recognition pair outliers for spoofed samples in a true multi-modal configuration, with its inherent challenge of normalisation. As a further contribution, bootstrap aggregating (bagging) of classifiers for fingerprint spoof detection is presented. Experiments on the latest face video databases (Idiap Replay-Attack Database and CASIA Face Anti-Spoofing Database) and a fingerprint spoofing database (Fingerprint Liveness Detection Competition 2013) illustrate the efficiency of the proposed techniques.
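A minimal sketch of the contrast between sum-rule and median-based (1-median) fusion on scalar match scores is given below; the scores are invented and the sketch omits the paper's liveness-recognition pairing, outlier rejection and normalisation steps.

```python
# Illustrative sketch: sum-rule fusion versus a 1-median style fusion of match
# scores from n biometric sources, where one source has been spoofed (outlier).
# For scalar scores the 1-median reduces to the ordinary median.
import numpy as np

def sum_rule(scores):
    return float(np.mean(scores))     # sensitive to a single spoofed high score

def median_rule(scores):
    return float(np.median(scores))   # 1-median of scalar scores

genuine_scores = np.array([0.82, 0.78, 0.85])   # three matched sources, genuine user
spoofed_scores = np.array([0.10, 0.12, 0.95])   # impostor with one spoofed source

for name, fuse in [("sum-rule", sum_rule), ("median", median_rule)]:
    print(name, "genuine:", round(fuse(genuine_scores), 2),
          "impostor with 1 spoof:", round(fuse(spoofed_scores), 2))
# The single spoofed source inflates the sum-rule score (0.39) far more than the
# median-based score (0.12).
```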
Abstract:
Regional information on climate change is urgently needed but often deemed unreliable. To achieve credible regional climate projections, it is essential to understand underlying physical processes, reduce model biases and evaluate their impact on projections, and adequately account for internal variability. In the tropics, where atmospheric internal variability is small compared with the forced change, advancing our understanding of the coupling between long-term changes in upper-ocean temperature and the atmospheric circulation will help most to narrow the uncertainty. In the extratropics, relatively large internal variability introduces substantial uncertainty, while exacerbating risks associated with extreme events. Large ensemble simulations are essential to estimate the probabilistic distribution of climate change on regional scales. Regional models inherit atmospheric circulation uncertainty from global models and do not automatically solve the problem of regional climate change. We conclude that the current priority is to understand and reduce uncertainties on scales greater than 100 km to aid assessments at finer scales.
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (and biological) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters for solving the nonlinear data-assimilation problem in high-dimensional geophysical problems will be discussed. Several existing and new schemes will be presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward for solving the problem of nonlinear data assimilation in high-dimensional systems.
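To fix the basic structure that such schemes build on, here is a minimal bootstrap (SIR) particle filter on a one-dimensional toy problem; it is an assumed illustration of the generic weight-and-resample cycle, not the Equivalent-Weights Particle Filter, and it is not representative of high-dimensional geophysical systems.

```python
# Minimal bootstrap (SIR) particle filter on a 1-D toy problem, shown only to fix
# the basic weight-and-resample structure; the Equivalent-Weights Particle Filter
# modifies the proposal step to avoid weight collapse in high dimensions and is
# not implemented here.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 500, 50
obs_std, model_std = 0.5, 0.3

def model_step(x):                 # toy nonlinear dynamics plus model noise
    return 0.9 * x + np.sin(x) + rng.normal(0.0, model_std, size=x.shape)

truth = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
for _ in range(n_steps):
    truth = 0.9 * truth + np.sin(truth)
    obs = truth + rng.normal(0.0, obs_std)

    particles = model_step(particles)                       # forecast each particle
    log_w = -0.5 * ((obs - particles) / obs_std) ** 2       # Gaussian likelihood weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)    # multinomial resampling
    particles = particles[idx]

print("truth:", round(float(truth), 3), " posterior mean:", round(float(particles.mean()), 3))
```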
Abstract:
In this paper we study the problem of maximizing a quadratic form 〈Ax, x〉 subject to ‖x‖_q = 1, where the matrix entries of A are defined in terms of an arithmetic function f, with i, j | k, and q ≥ 1. We investigate when the optimum is achieved at a 'multiplicative' point, i.e. where x_1 x_{mn} = x_m x_n. This turns out to depend on both f and q, with a marked difference appearing as q varies between 1 and 2. We prove some partial results and conjecture that, for suitable multiplicative f, the optimum is achieved at a multiplicative point.
Abstract:
This work investigates the problem of feature selection in neuroimaging features from structural MRI brain images for the classification of subjects as healthy controls, subjects with Mild Cognitive Impairment, or subjects with Alzheimer’s Disease. A Genetic Algorithm wrapper method for feature selection is adopted in conjunction with a Support Vector Machine classifier. In very large feature sets, feature selection is found to be redundant, as the accuracy is often worsened compared to a Support Vector Machine with no feature selection. However, when just the hippocampal subfields are used, feature selection shows a significant improvement in classification accuracy. Three-class Support Vector Machines, and two-class Support Vector Machines combined with weighted voting, are also compared with the former approach and found to be more useful. The highest accuracy achieved in classifying the test data was 65.5%, using a genetic algorithm for feature selection with a three-class Support Vector Machine classifier.
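A minimal sketch of a genetic-algorithm wrapper for feature selection with an SVM fitness function is given below; the population size, operators and rates are illustrative assumptions rather than the configuration used in this work.

```python
# Minimal genetic-algorithm wrapper for feature selection with an SVM fitness
# function. Individuals are binary feature masks; fitness is cross-validated
# accuracy of a linear SVM on the selected features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if not mask.any():
        return 0.0
    clf = SVC(kernel="linear", C=1.0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=15, p_mut=0.05):
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5               # random binary masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the best half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cross = rng.random(n_feat) < 0.5                 # uniform crossover
            child = np.where(cross, a, b)
            child ^= rng.random(n_feat) < p_mut              # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]                              # best feature mask
```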
Abstract:
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, the available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while also considering the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a Pseudo-Genetic Algorithm (PGA), Particle Swarm Optimization (PSO), Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency ratio provides a neutral strategy for comparing optimization algorithms and may be useful in the future for selecting the most appropriate algorithm for different types of optimization problems.
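As a purely hypothetical illustration of the idea of trading off solution quality against computational effort (this is not the efficiency rate E defined in the paper, whose formulation is not given in the abstract), one might compare algorithms along the following lines.

```python
# Purely hypothetical quality-per-effort comparison; NOT the paper's E definition.
def efficiency(best_cost_found, best_known_cost, evaluations, evaluation_budget):
    quality = best_known_cost / best_cost_found   # 1.0 means the best-known cost was reached
    effort = evaluations / evaluation_budget      # fraction of the evaluation budget spent
    return quality / effort

# Example: two algorithms on the same benchmark network (all numbers invented).
runs = {
    "algorithm A": dict(best_cost_found=6.20e6, evaluations=40_000),
    "algorithm B": dict(best_cost_found=6.08e6, evaluations=95_000),
}
for name, r in runs.items():
    score = efficiency(r["best_cost_found"], 6.05e6, r["evaluations"], 100_000)
    print(name, round(score, 2))
```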