75 results for Deterministic walkers


Relevance: 10.00%

Abstract:

This paper investigates the uplink achievable rates of massive multiple-input multiple-output (MIMO) antenna systems in Ricean fading channels, using maximal-ratio combining (MRC) and zero-forcing (ZF) receivers, assuming perfect and imperfect channel state information (CSI). In contrast to previous relevant works, the fast fading MIMO channel matrix is assumed to have an arbitrary-rank deterministic component as well as a Rayleigh-distributed random component. We derive tractable expressions for the achievable uplink rate in the large-antenna limit, along with approximate results that hold for any finite number of antennas. Based on these analytical results, we obtain the scaling law that the users' transmit power should satisfy to maintain a desirable quality of service. In particular, it is found that, regardless of the Ricean K-factor, in the case of perfect CSI the approximations converge to the same constant value as the exact results as the number of base station antennas, M, grows large, while the transmit power of each user can be scaled down proportionally to 1/M. If CSI is estimated with uncertainty, the same result holds true, but only when the Ricean K-factor is non-zero. Otherwise, if the channel experiences Rayleigh fading, we can only cut the transmit power of each user proportionally to 1/√M. In addition, we show that with an increasing Ricean K-factor, the uplink rates converge to fixed values for both MRC and ZF receivers.
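A minimal Monte-Carlo sketch of the 1/M power-scaling behaviour under MRC with perfect CSI (all parameter values illustrative; the deterministic component is taken rank-one here purely for brevity):

```python
import numpy as np

# Sketch: per-user transmit power cut as Eu/M; the MRC rate should approach
# a constant as M grows, consistent with the scaling law described above.
rng = np.random.default_rng(0)
K_factor, N_users, Eu = 2.0, 4, 10.0      # Ricean K, number of users, power budget

def mrc_rate(M, n_trials=200):
    rates = []
    for _ in range(n_trials):
        Hbar = np.ones((M, N_users)) / np.sqrt(2)            # deterministic component
        Hw = (rng.standard_normal((M, N_users))
              + 1j * rng.standard_normal((M, N_users))) / np.sqrt(2)
        H = (np.sqrt(K_factor / (K_factor + 1)) * Hbar
             + np.sqrt(1 / (K_factor + 1)) * Hw)
        p = Eu / M                                           # 1/M power scaling
        h1 = H[:, 0]
        signal = p * np.abs(h1 @ h1.conj()) ** 2
        interf = p * sum(np.abs(h1.conj() @ H[:, k]) ** 2 for k in range(1, N_users))
        noise = np.linalg.norm(h1) ** 2                      # unit-variance AWGN after MRC
        rates.append(np.log2(1 + signal / (interf + noise)))
    return np.mean(rates)

for M in (32, 128, 512):
    print(M, round(mrc_rate(M), 3))       # rate levels off as M grows
```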

Relevance: 10.00%

Abstract:

In this paper, a 3D human pose tracking framework is presented. A new dimensionality reduction method (Hierarchical Temporal Laplacian Eigenmaps) is introduced to represent activities in hierarchies of low-dimensional spaces. Such a hierarchy provides increasing independence between limbs, allowing higher flexibility and adaptability that result in improved accuracy. Moreover, a novel deterministic optimisation method (Hierarchical Manifold Search) is applied to efficiently estimate the positions of the corresponding body parts. Finally, evaluation on public datasets such as HumanEva demonstrates that our approach achieves an average joint error of 62.5-65 mm for the walking activity and outperforms state-of-the-art methods in terms of accuracy and computational cost.
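As a rough illustration of the underlying idea (plain Laplacian eigenmaps plus a deterministic nearest-neighbour search, not the paper's hierarchical variant), assuming scikit-learn is available:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding  # Laplacian eigenmaps

# Embed high-dimensional pose vectors into a low-dimensional manifold,
# then search that space deterministically for the best-matching pose.
rng = np.random.default_rng(1)
poses = rng.standard_normal((500, 60))        # toy stand-ins for 20 joints x 3D

embedder = SpectralEmbedding(n_components=3, n_neighbors=10)
latent = embedder.fit_transform(poses)        # low-dimensional activity manifold

def nearest_pose(query_latent):
    """Deterministic search: return the training pose closest in latent space."""
    d = np.linalg.norm(latent - query_latent, axis=1)
    return poses[np.argmin(d)]

print(nearest_pose(latent[0]).shape)          # (60,)
```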

Relevance: 10.00%

Abstract:

Reducible diffusions (RDs) are nonlinear transformations of analytically solvable Basic Diffusions (BDs). Hence, by construction, RDs are analytically tractable and flexible diffusion processes. The existing literature on RDs has mostly focused on time-homogeneous transformations, which to a significant extent fail to explore the full potential of RDs from both theoretical and practical points of view. In this paper, we propose flexible and economically justifiable time variations to the transformations of RDs. Concentrating on the Constant Elasticity of Variance (CEV) RDs, we consider nonlinear dynamics for our time-varying transformations with both deterministic and stochastic designs. Such time variations can greatly enhance the flexibility of RDs while maintaining sufficient tractability of the resulting models. At the same time, our modeling approach retains the benefits of classical inferential techniques such as Maximum Likelihood (ML). Our application to UK and US short-term interest rates suggests that, from an empirical point of view, time-varying transformations are highly relevant and statistically significant. We expect that the proposed models can describe more faithfully the dynamic time-varying behavior of economic and financial variables and potentially improve out-of-sample forecasts significantly.
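An illustrative sketch of what a deterministic time-varying transformation of this kind might look like (our notation, not the paper's exact specification):

```latex
% A tractable basic diffusion Y_t is mapped through a transformation whose
% parameter is allowed to vary deterministically in time:
\[
  dY_t = \kappa(\mu - Y_t)\,dt + \sigma Y_t^{\beta}\,dW_t
  \qquad \text{(CEV-type basic diffusion)}
\]
\[
  X_t = U(Y_t, t) = Y_t^{\gamma(t)}, \qquad
  \gamma(t) = \gamma_0 + \gamma_1 \sin(\omega t),
\]
% so the ML likelihood of the observed X follows from that of Y by a
% change of variables:
\[
  \ell_X(\theta) = \ell_Y\bigl(\theta; U^{-1}(x, t)\bigr)
  + \sum_i \log \left| \frac{\partial U^{-1}}{\partial x}(x_{t_i}, t_i) \right|.
\]
```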

Relevance: 10.00%

Abstract:

We present a fully-distributed self-healing algorithm, DEX, that maintains a constant-degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders - whose expansion properties hold deterministically - that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion, which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only O(log n) rounds and O(log n) messages are needed with high probability (n is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table (DHT) on top of DEX, with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes.
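A toy sketch of how expansion can be certified deterministically, via the spectral gap of the normalized adjacency matrix (the graph and threshold here are illustrative and not part of DEX):

```python
import numpy as np

# For a d-regular graph, a spectral gap 1 - lambda_2 bounded away from zero
# certifies expansion, with no probabilistic caveat.
def spectral_gap(adj):
    """adj: symmetric 0/1 adjacency matrix of a d-regular graph."""
    d = adj.sum(axis=1)[0]
    lam = np.sort(np.linalg.eigvalsh(adj / d))[::-1]   # eigenvalues of A/d
    return 1.0 - lam[1]                                # gap = 1 - lambda_2

# Toy 4-regular graph: a cycle plus chords (a stand-in, not an optimal expander).
n = 64
adj = np.zeros((n, n))
for i in range(n):
    for j in (i + 1, i + 5):                           # cycle edge + chord
        adj[i, j % n] = adj[j % n, i] = 1
print(round(spectral_gap(adj), 3))                     # > 0 means expansion holds
```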

Relevance: 10.00%

Abstract:

BACKGROUND: Age-related macular degeneration is the most common cause of sight impairment in the UK. In neovascular age-related macular degeneration (nAMD), vision worsens rapidly (over weeks) as abnormal blood vessels develop that leak fluid and blood at the macula.

OBJECTIVES: To determine the optimal role of optical coherence tomography (OCT) in diagnosing people newly presenting with suspected nAMD and monitoring those previously diagnosed with the disease.

DATA SOURCES: Databases searched: MEDLINE (1946 to March 2013), MEDLINE In-Process & Other Non-Indexed Citations (March 2013), EMBASE (1988 to March 2013), Biosciences Information Service (1995 to March 2013), Science Citation Index (1995 to March 2013), The Cochrane Library (Issue 2 2013), Database of Abstracts of Reviews of Effects (inception to March 2013), Medion (inception to March 2013), Health Technology Assessment database (inception to March 2013).

REVIEW METHODS: Types of studies: direct/indirect studies reporting diagnostic outcomes.

INDEX TEST: time domain optical coherence tomography (TD-OCT) or spectral domain optical coherence tomography (SD-OCT).

COMPARATORS: clinical evaluation, visual acuity, Amsler grid, colour fundus photographs, infrared reflectance, red-free images/blue reflectance, fundus autofluorescence imaging, indocyanine green angiography, preferential hyperacuity perimetry, microperimetry. Reference standard: fundus fluorescein angiography (FFA). Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool, version 2 (QUADAS-2). Meta-analysis models were fitted using hierarchical summary receiver operating characteristic curves. A Markov model was developed (65-year-old cohort, nAMD prevalence 70%), with nine strategies for diagnosis and/or monitoring, and a cost-utility analysis was conducted. An NHS and Personal Social Services perspective was adopted. Costs (2011/12 prices) and quality-adjusted life-years (QALYs) were discounted (3.5%). Deterministic and probabilistic sensitivity analyses were performed.
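For orientation, a minimal two-state Markov cohort model of the kind described, with invented transition probabilities, costs and utilities (none of these numbers come from the study), might look like:

```python
import numpy as np

# Two-state cohort model accumulating discounted costs and QALYs for one
# strategy. All parameters below are illustrative placeholders.
P = np.array([[0.90, 0.10],          # active nAMD  -> active / inactive
              [0.15, 0.85]])         # inactive     -> active / inactive
cost = np.array([800.0, 150.0])      # per-cycle cost by state (invented)
utility = np.array([0.70, 0.80])     # per-cycle QALY weight by state (invented)
discount = 0.035                     # 3.5% discount rate, as in the study

state = np.array([0.70, 0.30])       # cohort starts at 70% nAMD prevalence
total_cost = total_qaly = 0.0
for cycle in range(25):              # yearly cycles for a 65-year-old cohort
    df = 1.0 / (1.0 + discount) ** cycle
    total_cost += df * state @ cost
    total_qaly += df * state @ utility
    state = state @ P                # advance the cohort one cycle

print(round(total_cost), round(total_qaly, 3))
```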

RESULTS: In pooled estimates of diagnostic studies (all TD-OCT), sensitivity and specificity [95% confidence interval (CI)] were 88% (46% to 98%) and 78% (64% to 88%), respectively. For monitoring, the pooled sensitivity and specificity (95% CI) were 85% (72% to 93%) and 48% (30% to 67%), respectively. The FFA for diagnosis and nurse-technician-led monitoring strategy had the lowest cost (£39,769; QALYs 10.473) and dominated all others except FFA for diagnosis and ophthalmologist-led monitoring (£44,649; QALYs 10.575; incremental cost-effectiveness ratio £47,768). The least costly strategy had a 46.4% probability of being cost-effective at a £30,000 willingness-to-pay threshold.

LIMITATIONS: Very few studies provided sufficient information for inclusion in meta-analyses. Only a few studies reported other tests; for some tests no studies were identified. The modelling was hampered by a lack of data on the diagnostic accuracy of strategies involving several tests.

CONCLUSIONS: Based on a small body of evidence of variable quality, OCT had high sensitivity and moderate specificity for diagnosis, and relatively high sensitivity but low specificity for monitoring. Strategies involving OCT alone for diagnosis and/or monitoring were unlikely to be cost-effective. Further research is required on (i) the performance of SD-OCT compared with FFA, especially for monitoring but also for diagnosis; (ii) the performance of strategies involving combinations/sequences of tests, for diagnosis and monitoring; (iii) the likelihood of active and inactive nAMD becoming inactive or active respectively; and (iv) assessment of treatment-associated utility weights (e.g. decrements), through a preference-based study.

STUDY REGISTRATION: This study is registered as PROSPERO CRD42012001930.

FUNDING: The National Institute for Health Research Health Technology Assessment programme.

Relevance: 10.00%

Abstract:

Question: How parallel is adaptive evolution when it occurs from different genetic backgrounds?
Background: Divergent evolutionary lineages of several post-glacial fish species, including the threespine stickleback, are found together in Ireland.
Goals: To investigate the morphological diversity of stickleback populations in Ireland and assess whether morphology evolved in parallel between evolutionary lineages.
Methods: We sampled stickleback from lake, river, and coastal habitats across Ireland. Microsatellite and mitochondrial DNA data revealed evolutionary history. Geometric morphometrics and linear trait measurements characterized morphology. We used a multivariate approach to quantify parallel and non-parallel divergence within and between lineages.
Results: Repeated evolution of similar morphologies in similar habitats occurred across Ireland, concordant with patterns observed elsewhere in the stickleback distribution. A strong pattern of habitat-specific morphology existed even among divergent lineages. Furthermore, a strong signal of shared morphological divergence occurred along a marine–freshwater axis. Evidently, deterministic natural selection played a more important role in driving freshwater adaptation than independent evolutionary history.
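One common multivariate way to quantify such parallelism, sketched here with toy data, is to measure the angle between the marine-to-freshwater divergence vectors of two lineages (an illustrative sketch; the paper's exact analysis may differ):

```python
import numpy as np

# Small angle between lineage-specific divergence vectors = parallel evolution.
rng = np.random.default_rng(2)
# Toy trait means (rows: marine, freshwater) for two lineages, 5 traits each.
lineage_a = rng.standard_normal((2, 5))
lineage_b = lineage_a + 0.1 * rng.standard_normal((2, 5))  # nearly parallel

def divergence_angle(pop1, pop2):
    v1, v2 = pop1[1] - pop1[0], pop2[1] - pop2[0]          # freshwater - marine
    cosine = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosine, -1, 1)))

print(round(divergence_angle(lineage_a, lineage_b), 1))    # small angle = parallel
```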

Relevance: 10.00%

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know that it is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We partially answer this question by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
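For intuition, a toy synchronous flooding-based implicit election (not the paper's algorithm; this naive version spends O(mD) messages, which is precisely why matching the Ω(m) bound with an O(m) algorithm is interesting):

```python
import random

# Each node draws a random rank; the maximum rank is flooded round by round.
# Implicitly, only the winner needs to learn that it leads.
def elect(adj):
    """adj: dict node -> list of neighbours (connected undirected graph)."""
    rank = {v: (random.random(), v) for v in adj}     # random rank, id tiebreak
    best = dict(rank)                                 # each node's best-seen rank
    messages = 0
    for _ in range(len(adj)):                         # >= diameter rounds
        updates = {v: max(best[u] for u in adj[v]) for v in adj}
        messages += sum(len(adj[v]) for v in adj)     # every edge used twice/round
        for v in adj:
            best[v] = max(best[v], updates[v])
    return [v for v in adj if best[v] == rank[v]], messages

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(elect(ring))                                    # ([leader], message count)
```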

Relevance: 10.00%

Abstract:

The demand for sustainable development has resulted in rapid growth in wind power worldwide. Although various approaches have been proposed to improve accuracy and to overcome the uncertainties associated with traditional methods, the stochastic and variable nature of wind remains the most challenging issue in accurately forecasting wind power. This paper presents a hybrid deterministic-probabilistic method in which a temporally local ‘moving window’ technique is used within a Gaussian Process to examine estimated forecasting errors. This temporally local Gaussian Process requires less measurement data while predicting wind power faster and more accurately at two wind farms, one in the USA and the other in Ireland. Statistical analysis of the results shows that the method can substantially reduce the forecasting error while being more likely to generate Gaussian-distributed residuals, particularly for short-term forecast horizons, owing to its capability to handle the time-varying characteristics of wind power.
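A minimal sketch of a temporally local ('moving window') Gaussian Process forecast, with toy data and an illustrative kernel (assuming scikit-learn; not the authors' implementation):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# At each forecast origin, fit a GP only to the last `window` observations
# and predict one step ahead, so the model tracks time-varying behaviour.
rng = np.random.default_rng(3)
t = np.arange(300.0)
power = np.sin(t / 20) + 0.1 * rng.standard_normal(t.size)   # toy wind power

window, horizon = 30, 1
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
errors = []
for origin in range(window, t.size - horizon):
    X = t[origin - window:origin, None]                      # local window only
    y = power[origin - window:origin]
    gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
    pred = gp.predict(t[origin + horizon:origin + horizon + 1, None])
    errors.append(power[origin + horizon] - pred[0])

print(round(float(np.mean(np.square(errors))), 4))           # forecast MSE
```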

Relevance: 10.00%

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands: gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
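A schematic of the band-wise clustering step, using simulated stand-ins for the model-comparison statistics (the statistic and cluster-orientation rule here are illustrative, not the paper's exact choices):

```python
import numpy as np
from sklearn.cluster import KMeans

# Per source and per band: difference between the corrected AIC of the best
# burst-like (BL) model and of the Ornstein-Uhlenbeck model. Negative favours BL.
rng = np.random.default_rng(4)
n_bands = 4
delta_aicc = np.concatenate([rng.normal(-5, 2, (100, n_bands)),    # SN-like
                             rng.normal(+5, 2, (100, n_bands))])   # AGN-like

labels = np.empty_like(delta_aicc)
for band in range(n_bands):
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    lab = km.fit_predict(delta_aicc[:, [band]])
    # Orient clusters so that label 1 = stochastic-variable-like (higher delta).
    if km.cluster_centers_[1, 0] < km.cluster_centers_[0, 0]:
        lab = 1 - lab
    labels[:, band] = lab

sv_score = labels.mean(axis=1)          # fraction of bands voting "SV"
print((sv_score > 0.5).sum(), "sources classified as stochastic variables")
```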

Relevance: 10.00%

Abstract:

The assimilation of discrete higher-fidelity data points with model predictions can be used to achieve a reduction in the uncertainty of the model input parameters which generate accurate predictions. The problem investigated here involves the prediction of limit-cycle oscillations using a High-Dimensional Harmonic Balance (HDHB) method. The efficiency of the HDHB method is exploited to enable calibration of structural input parameters using a Bayesian inference technique. Markov-chain Monte Carlo is employed to sample the posterior distributions. Parameter estimation is carried out on both a pitch/plunge aerofoil and a Goland wing configuration. In both cases, significant refinement was achieved in the distributions of possible structural parameters, allowing better predictions of their true deterministic values.
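A minimal sketch of the calibration idea, with a cheap analytic stand-in replacing the HDHB solver (all names and numbers illustrative): random-walk Metropolis sampling of the posterior over a structural parameter.

```python
import numpy as np

rng = np.random.default_rng(5)

def lco_amplitude(theta, speed):
    """Stand-in for the forward model's limit-cycle amplitude prediction."""
    return theta * np.sqrt(np.maximum(speed - 1.0, 0.0))

speeds = np.linspace(1.1, 2.0, 8)
theta_true, noise = 0.8, 0.02
data = lco_amplitude(theta_true, speeds) + noise * rng.standard_normal(8)

def log_post(theta):
    if not 0.0 < theta < 2.0:                        # uniform prior on (0, 2)
        return -np.inf
    r = data - lco_amplitude(theta, speeds)
    return -0.5 * np.sum((r / noise) ** 2)           # Gaussian likelihood

theta, samples = 1.0, []
for _ in range(20000):                               # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

print(round(float(np.mean(samples[5000:])), 3))      # posterior mean near 0.8
```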

Relevance: 10.00%

Abstract:

Purpose: The aim of this work was to examine, for amorphous solid dispersions, how the thermal analysis method selected impacts on the construction of thermodynamic phase diagrams, and to assess the predictive value of such phase diagrams in the selection of optimal, physically stable API-polymer compositions.

Methods: Thermodynamic phase diagrams for two API/polymer systems (naproxen/HPMC AS LF and naproxen/Kollidon 17 PF) were constructed from data collected using two different thermal analysis methods. The “dynamic” method involved heating the physical mixture at a rate of 1 °C/minute. In the “static” approach, samples were held at a temperature above the polymer Tg for prolonged periods, prior to scanning at 10 °C/minute. Subsequent to construction of the phase diagrams, solid dispersions consisting of API-polymer compositions representative of different zones in the phase diagrams were spray dried and characterised using DSC, pXRD, TGA, FTIR, DVS and SEM. The stability of these systems was investigated under the following conditions: 25 °C, desiccated; 25 °C, 60% RH; 40 °C, desiccated; 40 °C, 60% RH.

Results: Endset depression occurred with increasing polymer volume fraction (Figure 1a). In conjunction with these data, Flory-Huggins and Gordon-Taylor theory were applied to construct thermodynamic phase diagrams (Figure 1b). The Flory-Huggins interaction parameter (χ) for naproxen and HPMC AS LF was +0.80 and +0.72 for the dynamic and static methods, respectively. For naproxen and Kollidon 17 PF, the dynamic data resulted in an interaction parameter of -1.1 and the isothermal data produced a value of -2.2. For both systems, the API appeared to be less soluble in the polymer when the dynamic approach was used. Stability studies of spray-dried solid dispersions could be used as a means of validating the thermodynamic phase diagrams.

Conclusion: The thermal analysis method used to collate data has a deterministic effect on the phase diagram produced. This effect should be considered when constructing thermodynamic phase diagrams, as they can be a useful tool in predicting the stability of amorphous solid dispersions.
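For reference, the Flory-Huggins melting-point depression relation commonly used to construct such API-polymer phase diagrams from endset data (a standard form; the paper's exact formulation may differ):

```latex
% T_m = mixture endset/melting temperature, T_m^0 = pure-API melting point,
% \Delta H_f = API heat of fusion, \phi = volume fractions, m = ratio of
% polymer to API molar volumes, \chi = Flory-Huggins interaction parameter.
\[
  \frac{1}{T_m} - \frac{1}{T_m^{0}}
  = -\frac{R}{\Delta H_f}
    \left[ \ln \phi_{\mathrm{API}}
         + \left(1 - \tfrac{1}{m}\right)\phi_{\mathrm{poly}}
         + \chi\,\phi_{\mathrm{poly}}^{2} \right]
\]
% Fitting \chi to endset-depression data at several compositions yields the
% solubility curve of the phase diagram; the sign of \chi indicates whether
% API-polymer mixing is unfavourable (+) or favourable (-).
```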

Relevance: 10.00%

Abstract:

Radio-frequency (RF) impairments, which are inherent in wireless communication systems, can severely limit the performance of multiple-input multiple-output (MIMO) systems. Although we can resort to compensation schemes to mitigate some of these impairments, a certain amount of residual impairment always persists. In this paper, we consider a training-based point-to-point MIMO system with residual transmit RF impairments (RTRI) using spatial multiplexing transmission. Specifically, we derive a new linear channel estimator for the proposed model, and show that RTRI create an estimation error floor in the high signal-to-noise ratio (SNR) regime. Moreover, we derive closed-form expressions for the signal-to-interference-plus-noise ratio (SINR) distributions, along with analytical expressions for the ergodic achievable rates of zero-forcing, maximum ratio combining, and minimum mean-squared error receivers, respectively. In addition, we optimize the ergodic achievable rates with respect to the training sequence length and demonstrate that finite-dimensional systems with RTRI generally require more training at high SNRs than those with ideal hardware. Finally, we extend our analysis to large-scale MIMO configurations, and derive deterministic equivalents of the ergodic achievable rates. It is shown that, by deploying large receive antenna arrays, the extra training requirements due to RTRI can be eliminated. In fact, with a sufficiently large number of receive antennas, systems with RTRI may even need less training than systems with ideal hardware.
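A common way to model residual impairments that is consistent with this description (an illustrative form, not necessarily the paper's exact model) is to add signal-power-dependent distortion noise at the transmitter:

```latex
% Transmit distortion \eta_t enters before the channel and scales with the
% signal power, so it cannot be averaged out by raising the SNR:
\[
  \mathbf{y} = \mathbf{H}\left(\mathbf{x} + \boldsymbol{\eta}_t\right) + \mathbf{n},
  \qquad
  \boldsymbol{\eta}_t \sim \mathcal{CN}\!\left(0,\;
    \delta^2 \operatorname{diag}\!\left(\mathbb{E}\!\left[|x_1|^2\right], \ldots,
    \mathbb{E}\!\left[|x_{N_t}|^2\right]\right)\right).
\]
% At high SNR the distortion term dominates the thermal noise n, which is why
% the channel-estimation error saturates to a floor instead of vanishing.
```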

Relevance: 10.00%

Abstract:

This paper investigated the problem of confined flow under dams and water-retaining structures using stochastic modelling. The approach advocated in the study combined a finite element method, based on the equation governing the dynamics of incompressible fluid flow through a porous medium, with a random field generator that generates random hydraulic conductivity based on a lognormal probability distribution. The resulting model was then used to analyse confined flow under a hydraulic structure. Cases where the structure was provided with a cutoff wall and where the wall did not exist were both tested. Various statistical parameters that reflected different degrees of heterogeneity were examined, and the changes in the mean seepage flow, the mean uplift force and the mean exit gradient observed under the structure were analysed. Results reveal that under heterogeneous conditions, the reduction made by the sheetpile in the uplift force and exit hydraulic gradient may be underestimated when deterministic solutions are used.
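A minimal sketch of the stochastic ingredient: one Monte-Carlo realization of a spatially correlated lognormal hydraulic-conductivity field (grid size, mean and correlation length are illustrative; the finite-element seepage solve is not shown):

```python
import numpy as np

rng = np.random.default_rng(6)
nx, ny = 40, 20                       # grid under the structure (illustrative)
mean_lnK, std_lnK, corr_len = np.log(1e-5), 1.0, 5.0

# Exponential covariance between all pairs of grid points.
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = std_lnK**2 * np.exp(-d / corr_len)

# Sample ln K via Cholesky factorization, then exponentiate -> lognormal K.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(pts)))
lnK = mean_lnK + L @ rng.standard_normal(len(pts))
K = np.exp(lnK).reshape(nx, ny)       # one conductivity-field realization

print(K.mean(), K.min(), K.max())     # feed K into the FE solver per realization
```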

Relevance: 10.00%

Abstract:

Numerous experimental studies of damage in composite laminates have shown that intralaminar (in-plane) matrix cracks lead to interlaminar delamination (out-of-plane) at ply interfaces. The smearing of in-plane cracks over a volume, as a consequence of the use of continuum damage mechanics, does not always effectively capture the full extent of the interaction between the two failure mechanisms. A more accurate representation is obtained by adopting a discrete crack approach via the use of cohesive elements, for both in-plane and out-of-plane damage. The difficulty with cohesive elements is that their location must be determined a priori in order to generate the model, whereas ideally the position of the crack migration, and more generally the propagation path, should be obtained as part of the problem’s solution. With the aim of endowing current models with truly predictive capabilities, a concept of automatic insertion of interface elements is utilized. The consideration of a simple traction criterion in relation to material strength, evaluated at each node of the model (or of the regions of the model where it is estimated cracks might form), allows for the determination of the initial crack location and subsequent propagation by the insertion of cohesive elements during the course of the analysis. Several experimental results are modelled using the commercial package ABAQUS/Standard with an automatic insertion subroutine developed in this work, and the results are presented to demonstrate the capabilities of this technique.
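A hedged pseudo-implementation of the nodal insertion test (names, criterion form and numbers are illustrative, not the authors' subroutine):

```python
import numpy as np

# At each increment, evaluate a simple traction criterion at candidate nodes
# and flag where a cohesive element should be inserted.
def needs_cohesive(traction_normal, traction_shear, N_strength, S_strength):
    """Quadratic nominal-stress style criterion at a node (illustrative)."""
    ratio = (max(traction_normal, 0.0) / N_strength) ** 2 \
          + (traction_shear / S_strength) ** 2
    return ratio >= 1.0

# Toy nodal tractions (MPa) for a few candidate interface nodes.
tractions = [(40.0, 10.0), (61.0, 5.0), (20.0, 75.0)]
for node, (tn, ts) in enumerate(tractions):
    if needs_cohesive(tn, ts, N_strength=60.0, S_strength=80.0):
        print(f"insert cohesive element at node {node}")
```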

Relevance: 10.00%

Abstract:

Possibilistic answer set programming (PASP) unites answer set programming (ASP) and possibilistic logic (PL) by associating certainty values with rules. The resulting framework makes it possible to combine non-monotonic reasoning and reasoning under uncertainty in a single formalism. While PASP has been well studied for possibilistic definite and possibilistic normal programs, we argue that the current semantics of possibilistic disjunctive programs are not entirely satisfactory. The problem is twofold. First, the treatment of negation-as-failure in existing approaches follows an all-or-nothing scheme that is hard to match with the graded notion of proof underlying PASP. Second, we advocate that the notion of disjunction can be interpreted in several ways. In particular, in addition to the view of ordinary ASP, where disjunctions are used to induce a non-deterministic choice, the possibilistic setting naturally leads to a more epistemic view of disjunction. In this paper, we propose a semantics for possibilistic disjunctive programs, discussing both views on disjunction. Extending our earlier work, we interpret such programs as sets of constraints on possibility distributions, whose least specific solutions correspond to answer sets.
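Schematically, using our own notation rather than the paper's, the two readings of a certainty-annotated disjunctive rule can be contrasted as follows:

```latex
% An illustrative possibilistic disjunctive rule:
%     0.8 : a \vee b \leftarrow \operatorname{not} c.
% Under the non-deterministic reading, the certainty 0.8 supports committing
% to one of a or b. Under the epistemic reading, it only constrains the
% possibility distribution \pi over interpretations so that the disjunction
% itself is sufficiently certain (necessity measure N):
\[
  N(a \vee b) \;=\; 1 - \max_{\omega \,\not\models\, a \vee b} \pi(\omega)
  \;\geq\; 0.8 ,
\]
% without forcing a choice between the disjuncts; the least specific
% distributions satisfying all such constraints play the role of answer sets.
```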