15 results for Approximate Model Checking
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
When implementing autonomic management of multiple non-functional concerns, a trade-off must be found between the ability to independently develop management of the individual concerns (following the separation-of-concerns principle) and the detection and resolution of conflicts that may arise when the independently developed management code is combined. Here we discuss strategies to establish this trade-off and introduce a model-checking-based methodology aimed at simplifying the discovery and handling of conflicts arising from the deployment, within the same parallel application, of independently developed management policies. Preliminary results are shown demonstrating the feasibility of the approach.
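As a rough illustration of the kind of conflict this abstract is concerned with (and only that: the names, guards, and actions below are invented for this example, and the check is a crude stand-in for the state-space exploration a model checker would perform), independently written management policies can be reduced to guarded actions and screened for pairs that are simultaneously enabled but pull a shared knob in incompatible directions:

```python
from itertools import combinations

# Hypothetical, simplified encoding of autonomic management policies: each
# policy fires when its guard holds on the observed metrics and then requests
# an action on a shared "knob" of the parallel application.
POLICIES = {
    "performance": {"guard": lambda m: m["throughput"] < 100, "knob": "workers", "action": "increase"},
    "power":       {"guard": lambda m: m["watts"] > 250,      "knob": "workers", "action": "decrease"},
    "security":    {"guard": lambda m: m["untrusted_nodes"] > 0, "knob": "encryption", "action": "enable"},
}

def conflicts(metrics):
    """Return pairs of policies that are simultaneously enabled but request
    incompatible actions on the same knob (a crude stand-in for the state-space
    exploration a model checker would perform)."""
    enabled = {name: p for name, p in POLICIES.items() if p["guard"](metrics)}
    clashes = []
    for (n1, p1), (n2, p2) in combinations(enabled.items(), 2):
        if p1["knob"] == p2["knob"] and p1["action"] != p2["action"]:
            clashes.append((n1, n2))
    return clashes

# Example state in which the performance and power managers disagree.
print(conflicts({"throughput": 80, "watts": 300, "untrusted_nodes": 0}))
```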
Abstract:
Brown's model for the relaxation of the magnetization of a single domain ferromagnetic particle is considered. This model results in the Fokker-Planck equation of the process. The solution of this equation in the cases of most interest is non-trivial. The probability density of orientations of the magnetization in the Fokker-Planck equation can be expanded in terms of an infinite set of eigenfunctions and their corresponding eigenvalues, where these obey a Sturm-Liouville type equation. A variational principle is applied to the solution of this equation in the case of an axially symmetric potential. The first (non-zero) eigenvalue, corresponding to the largest time constant, is considered. From this we obtain two new results. Firstly, an approximate minimising trial function is obtained which allows calculation of a rigorous upper bound. Secondly, a new upper bound formula is derived based on the Euler-Lagrange condition. This leads to very accurate calculation of the eigenvalue; interestingly, it also shows that use of the simplest trial function yields a result equivalent to the correlation time of Coffey et al. and the integral relaxation time of Garanin. © 2004 Elsevier B.V. All rights reserved.
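For readers unfamiliar with the variational machinery, the following is a generic sketch of the Rayleigh-quotient bound used for Sturm-Liouville problems of this kind; the weight functions shown correspond to the standard form of Brown's axially symmetric problem and are stated here as an assumption, not a transcription of the paper:

```latex
% Generic Rayleigh-quotient upper bound on the first non-zero eigenvalue;
% x = cos(theta), w is the Boltzmann weight, details are assumptions.
\[
  \lambda_1 \;\le\;
  \frac{\displaystyle \int_{-1}^{1} (1-x^{2})\, w(x)\,\bigl[\phi'(x)\bigr]^{2}\,\mathrm{d}x}
       {\displaystyle \int_{-1}^{1} w(x)\,\bigl[\phi(x)\bigr]^{2}\,\mathrm{d}x},
  \qquad
  \int_{-1}^{1} w(x)\,\phi(x)\,\mathrm{d}x = 0,
  \qquad
  w(x) \propto e^{-\beta V(x)}.
\]
```

Up to the characteristic free-diffusion time constant, the longest relaxation time is then estimated as τ₁ ≈ 1/λ₁, so any admissible trial function orthogonal to the equilibrium distribution yields a rigorous upper bound on λ₁ and hence a lower bound on the relaxation time.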
Abstract:
Wideband far infrared (FIR) spectra of the complex permittivity ε(ν) of ice are calculated in terms of a simple analytical theory based on the method of dipolar autocorrelation functions. The molecular model represents a revision of the model recently presented for liquid water in Adv. Chem. Phys. 127 (2003) 65. A composite two-fractional model is proposed. The model is characterised by three phenomenological potential wells corresponding to the three FIR bands observed in ice. The first fraction comprises dipoles reorienting in a rather narrow and deep hat-like well; these dipoles generate the librational band centred at the frequency ≈ 880 cm⁻¹. The second fraction comprises elastically interacting particles; they generate two nearby bands placed around the frequency 200 cm⁻¹. For the description of one of these bands the harmonic oscillator (HO) model is used, in which translational oscillations of two charged molecules along the H-bond are considered. The other band is produced by the H-bond stretch, which governs hindered rotation of a rigid dipole. Such a motion and its dielectric response are described in terms of a new cut parabolic (CP) model applicable for any vibration amplitude. The composite hat-HO-CP model results in a smooth ε(ν) ice spectrum, which does not resemble the noise-like spectra of ice found in the literature. The proposed theory satisfactorily agrees with the experimental ice spectrum measured at −7 °C. The calculated longitudinal optic-transverse optic (LO-TO) splitting occurring at ≈ 250 cm⁻¹ qualitatively agrees with the measured data. © 2004 Elsevier B.V. All rights reserved.
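As a rough illustration of what a "composite" permittivity model means, the spectrum can be thought of as a sum of the susceptibility contributions of the individual fractions; the additive form and the damped-harmonic-oscillator (Lorentzian) shape written for the HO term below are generic textbook assumptions, not the paper's specific hat and CP expressions:

```latex
% Generic additive composition (assumption); HO term in standard Lorentzian form.
\[
  \varepsilon(\nu) \;=\; \varepsilon_{\infty}
    + \Delta\chi_{\mathrm{hat}}(\nu)
    + \Delta\chi_{\mathrm{HO}}(\nu)
    + \Delta\chi_{\mathrm{CP}}(\nu),
  \qquad
  \Delta\chi_{\mathrm{HO}}(\nu) \;=\;
  \frac{A\,\nu_{0}^{2}}{\nu_{0}^{2} - \nu^{2} - i\,\gamma\,\nu},
\]
```

with ν₀ near the translational band around 200 cm⁻¹ and γ a phenomenological damping constant in this illustrative form.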
Abstract:
Simple analytical formulas are introduced for the grid impedance of electrically dense arrays of square patches and for the surface impedance of high-impedance surfaces based on the dense arrays of metal strips or square patches over ground planes. Emphasis is on the oblique-incidence excitation. The approach is based on the known analytical models for strip grids combined with the approximate Babinet principle for planar grids located at a dielectric interface. Analytical expressions for the surface impedance and reflection coefficient resulting from our analysis are thoroughly verified by full-wave simulations and compared with available data in open literature for particular cases. The results can be used in the design of various antennas and microwave or millimeter wave devices which use artificial impedance surfaces and artificial magnetic conductors (reflect-array antennas, tunable phase shifters, etc.), as well as for the derivation of accurate higher-order impedance boundary conditions for artificial (high-) impedance surfaces. As an example, the propagation properties of surface waves along the high-impedance surfaces are studied.
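To make the equivalent-circuit picture concrete, here is a minimal sketch of the standard parallel-LC description of a patch-array-over-ground high-impedance surface; the thin-slab inductance L = μ0·h and the treatment of the grid capacitance as a given number are assumptions of this sketch, not the paper's full oblique-incidence formulas:

```python
import numpy as np

# Minimal sketch (not the paper's full model): the patch grid acts as a sheet
# capacitance C in parallel with the sheet inductance L of the thin grounded
# substrate, producing a parallel LC resonance in the surface impedance.
# L = mu0 * h is the usual thin-slab approximation; the grid capacitance is
# taken as a given parameter instead of being computed from the geometry.
MU0 = 4e-7 * np.pi

def surface_impedance(freq_hz, substrate_height_m, grid_capacitance_f):
    """Z_s = j*w*L / (1 - w^2*L*C) for the parallel LC equivalent circuit."""
    w = 2 * np.pi * freq_hz
    L = MU0 * substrate_height_m        # grounded-slab sheet inductance
    C = grid_capacitance_f              # patch-grid sheet capacitance
    return 1j * w * L / (1 - w ** 2 * L * C)

# Example: 2 mm substrate and 0.5 pF grid capacitance; |Z_s| peaks at resonance.
freqs = np.linspace(1e9, 20e9, 2000)
zs = surface_impedance(freqs, 2e-3, 0.5e-12)
print(f"Resonance near {freqs[np.argmax(np.abs(zs))] / 1e9:.2f} GHz")
```

The analytical model in the paper refines this picture with angle- and polarization-dependent grid impedances; the sketch only reproduces the basic resonance behaviour.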
Abstract:
The temporal fluctuation of the average slope of a ricepile model is investigated. It is found that the power spectrum S(f) scales as 1/f^α with α ≈ 1.3 when grains of rice are added only to one end of the pile. If grains are randomly added to the pile, the power spectrum exhibits 1/f² behavior. The profile fluctuations of the pile under different driving mechanisms are also discussed.
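A small simulation can make the measurement concrete: the boundary-driven, Oslo-type ricepile below is only an illustrative stand-in for the model variant studied in the paper, and the system size, thresholds, and run length are arbitrary choices:

```python
import numpy as np

# Toy boundary-driven ricepile: grains are added only at the left end, local
# slopes relax whenever they exceed a randomly re-drawn critical slope, and
# grains leave the pile at the right boundary.  The time series of the
# average slope is then Fourier analysed.
rng = np.random.default_rng(0)
L, steps = 32, 10000
h = np.zeros(L, dtype=int)                      # column heights
zc = rng.integers(1, 3, size=L)                 # critical slopes in {1, 2}

def slope(i):
    return h[i] - (h[i + 1] if i + 1 < L else 0)

avg_slope = np.empty(steps)
for t in range(steps):
    h[0] += 1                                   # drive: one grain at the left end
    active = True
    while active:                               # relax until all slopes are subcritical
        active = False
        for i in range(L):
            if slope(i) > zc[i]:
                h[i] -= 1
                if i + 1 < L:
                    h[i + 1] += 1               # grain moves right (or falls off the edge)
                zc[i] = rng.integers(1, 3)      # re-draw the critical slope
                active = True
    avg_slope[t] = h[0] / L                     # mean slope of the pile

# Power spectrum of the average-slope fluctuations (transient discarded);
# fitting log S against log f over the scaling region estimates the exponent alpha.
x = avg_slope[steps // 2:] - avg_slope[steps // 2:].mean()
S = np.abs(np.fft.rfft(x)) ** 2
f = np.fft.rfftfreq(x.size)
```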
Abstract:
Shape corrections to the standard approximate Kohn-Sham exchange-correlation (xc) potentials are considered with the aim to improve the excitation energies (especially for higher excitations) calculated with time-dependent density functional perturbation theory. A scheme of gradient-regulated connection (GRAC) of inner to outer parts of a model potential is developed. Asymptotic corrections based either on the potential of Fermi and Amaldi or van Leeuwen and Baerends (LB) are seamlessly connected to the (shifted) xc potential of Becke and Perdew (BP) with the GRAC procedure, and are employed to calculate the vertical excitation energies of the prototype molecules N₂, CO, CH₂O, C₂H₄, C₅NH₅, C₆H₆, Li₂, Na₂, K₂. The results are compared with those of the alternative interpolation scheme of Tozer and Handy as well as with the results of the potential obtained with the statistical averaging of (model) orbital potentials. Various asymptotically corrected potentials produce high-quality excitation energies, which in quite a few cases approach the benchmark accuracy of 0.1 eV for the electronic spectra. Based on these results, the potential BP-GRAC-LB is proposed for molecular response calculations, which is a smooth potential and a genuine "local" density functional with an analytical representation. © 2001 American Institute of Physics.
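The general shape of such a gradient-regulated connection can be sketched as an interpolation between an inner and an outer potential, controlled by a switching function of the reduced density gradient; the specific switching function, its parameters, and the constant shift Δ written below are assumptions of this sketch rather than the published GRAC parametrisation:

```latex
% Sketch of a gradient-regulated interpolation between an inner (bulk) and an
% outer (asymptotically correct) xc potential; f, alpha, x0, Delta are assumed.
\[
  v_{xc}^{\mathrm{GRAC}}(\mathbf{r}) \;=\;
  \bigl[1 - f\!\bigl(x(\mathbf{r})\bigr)\bigr]
  \bigl[v_{xc}^{\mathrm{inner}}(\mathbf{r}) + \Delta\bigr]
  \;+\; f\!\bigl(x(\mathbf{r})\bigr)\, v_{xc}^{\mathrm{outer}}(\mathbf{r}),
  \qquad
  x = \frac{|\nabla\rho|}{\rho^{4/3}},
  \qquad
  f(x) = \frac{1}{1 + e^{-\alpha (x - x_{0})}}.
\]
```

In the molecular interior x stays small, so f ≈ 0 and the (shifted) inner potential applies; in the asymptotic region x grows without bound, f → 1, and the asymptotically correct outer potential (Fermi-Amaldi or LB) takes over smoothly.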
Abstract:
Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meaning in terms of undifferentiated bags of words or topical dimensions. This has led some to question their psychological plausibility (Murphy, 2002; Schunn, 1999). We present here a fully automatic method for extracting a structured and comprehensive set of concept descriptions directly from an English part-of-speech-tagged corpus. Concepts are characterized by weighted properties, enriched with concept-property types that approximate classical relations such as hypernymy and function. Our model outperforms comparable algorithms in cognitive tasks pertaining not only to concept-internal structures (discovering properties of concepts, grouping properties by property type) but also to inter-concept relations (clustering into superordinates), suggesting the empirical validity of the property-based approach. Copyright © 2009 Cognitive Science Society, Inc. All rights reserved.
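An illustrative sketch (not the paper's extraction pipeline) of the basic idea: pair each noun in a part-of-speech-tagged corpus with nearby adjectives and verbs as candidate concept-property pairs, then weight each pair by pointwise mutual information. The tag set, window size, and weighting scheme are assumptions made for this example:

```python
import math
from collections import Counter

tagged = [  # toy "corpus" of (token, POS) pairs
    ("the", "DT"), ("ripe", "JJ"), ("banana", "NN"), ("is", "VBZ"), ("yellow", "JJ"),
    ("a", "DT"), ("dog", "NN"), ("barks", "VBZ"), ("loudly", "RB"),
]
WINDOW = 3

pair_counts = Counter()
for i, (tok, pos) in enumerate(tagged):
    if not pos.startswith("NN"):
        continue
    lo, hi = max(0, i - WINDOW), min(len(tagged), i + WINDOW + 1)
    for j in range(lo, hi):
        w, p = tagged[j]
        if j != i and (p.startswith("JJ") or p.startswith("VB")):
            pair_counts[(tok, w)] += 1        # noun paired with a nearby property word

total = sum(pair_counts.values())
concept_marg, prop_marg = Counter(), Counter()
for (c, p), n in pair_counts.items():
    concept_marg[c] += n
    prop_marg[p] += n

def pmi(concept, prop):
    """Pointwise mutual information of a concept-property pair."""
    joint = pair_counts[(concept, prop)] / total
    return math.log2(joint / ((concept_marg[concept] / total) * (prop_marg[prop] / total)))

ranked = sorted(pair_counts, key=lambda cp: pmi(*cp), reverse=True)
print(ranked)   # highest-PMI pairs first, e.g. ('banana', 'ripe'), ('dog', 'barks')
```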
Abstract:
We study the long-range quantum correlations in the anisotropic XY model. By first examining the thermodynamic limit, we show that employing the quantum discord as a figure of merit allows one to capture the main features of the model at zero temperature. Furthermore, by considering suitably large site separations we find that these correlations obey a simple scaling behavior for finite temperatures, allowing for efficient estimation of the critical point. We also address ground-state factorization of this model by explicitly considering finite-size systems, showing its relation to the energy spectrum and explaining the persistence of the phenomenon at finite temperatures. Finally, we compute the fidelity between finite and infinite systems in order to show that remarkably small system sizes can closely approximate the thermodynamic limit.
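For reference, one common convention for the spin-1/2 anisotropic XY chain in a transverse field is written below; factor conventions for the anisotropy γ and the field λ vary between papers, so this should be read as an assumption rather than the exact Hamiltonian of the article:

```latex
% One common convention for the anisotropic XY chain in a transverse field.
\[
  H \;=\; -\sum_{i=1}^{N}
  \left[
    \frac{1+\gamma}{2}\,\sigma^{x}_{i}\sigma^{x}_{i+1}
    + \frac{1-\gamma}{2}\,\sigma^{y}_{i}\sigma^{y}_{i+1}
    + \lambda\,\sigma^{z}_{i}
  \right].
\]
```

In this convention γ = 1 recovers the transverse-field Ising chain, and in the thermodynamic limit the quantum critical point sits at λ = 1, the critical point whose finite-temperature estimation the abstract refers to.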
Abstract:
A commercial polymeric film (Parafilm M®, a blend of a hydrocarbon wax and a polyolefin) was evaluated as a model membrane for microneedle (MN) insertion studies. Polymeric MN arrays were inserted into Parafilm M® (PF) and also into excised neonatal porcine skin. Parafilm M® was folded before the insertions to closely approximate the thickness of the excised skin. Insertion depths were evaluated using optical coherence tomography (OCT), with the insertion force applied either by a Texture Analyser or by a group of human volunteers. The obtained insertion depths were, in general, slightly lower for PF than for skin, especially at higher forces. However, this difference was not large, being less than 10% of the needle length. Therefore, all these data indicate that this model membrane could be a good alternative to biological tissue for MN insertion studies. As an alternative method to OCT, light microscopy was used to evaluate the insertion depths of MN in the model membrane. This provided a rapid, simple method to compare different MN formulations. The use of Parafilm M®, in conjunction with a standardised force/time profile applied by a Texture Analyser, could provide the basis for a rapid MN quality control test suitable for in-process use. It could also be used as a comparative test of insertion efficiency between candidate MN formulations.
Abstract:
Approximate execution is a viable technique for energy-constrained environments, provided that applications have the mechanisms to produce outputs of the highest possible quality within the given energy budget.
We introduce a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows users to express the relative importance of computations for the quality of the end result, as well as minimum quality requirements. The significance-aware runtime system uses an application-specific analytical energy model to identify the degree of concurrency and approximation that maximizes quality while meeting user-specified energy constraints. Evaluation on a dual-socket 8-core server shows that the proposed framework predicts the optimal configuration with high accuracy, enabling energy-constrained executions that result in significantly higher quality compared to loop perforation, a compiler approximation technique.
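A toy sketch of how such a significance-aware runtime might be structured follows; the Task fields, the energy accounting, and the greedy policy are assumptions made purely for illustration, not the paper's programming model or API:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Task:
    significance: float                         # relative importance for output quality
    accurate: Callable[[], float]               # accurate version of the computation
    approximate: Optional[Callable[[], float]]  # cheaper, lower-quality version
    cost_accurate: float                        # modelled energy of the accurate version
    cost_approx: float                          # modelled energy of the approximate version

def run_with_budget(tasks: List[Task], budget: float) -> Tuple[List[float], float]:
    """Greedy scheduler: spend the energy budget on accurate execution of the
    most significant tasks first; remaining tasks fall back to their
    approximate versions (or run accurately if no approximation exists)."""
    results, spent = [], 0.0
    for t in sorted(tasks, key=lambda t: t.significance, reverse=True):
        if t.approximate is None or spent + t.cost_accurate <= budget:
            results.append(t.accurate())
            spent += t.cost_accurate
        else:
            results.append(t.approximate())
            spent += t.cost_approx
    return results, spent

# Example: sum a vector in chunks; approximate tasks estimate their chunk
# from its endpoints instead of summing every element.
data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
tasks = [Task(significance=sum(c),
              accurate=lambda c=c: float(sum(c)),
              approximate=lambda c=c: len(c) * (c[0] + c[-1]) / 2.0,
              cost_accurate=1.0, cost_approx=0.2)
         for c in chunks]
print(run_with_budget(tasks, budget=6.0))
```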
Abstract:
We introduce a task-based programming model and runtime system that exploit the observation that not all parts of a program are equally significant for the accuracy of the end-result, in order to trade off the quality of program outputs for increased energy-efficiency. This is done in a structured and flexible way, allowing for easy exploitation of different points in the quality/energy space, without adversely affecting application performance. The runtime system can apply a number of different policies to decide whether it will execute less-significant tasks accurately or approximately.
The experimental evaluation indicates that our system can achieve an energy reduction of up to 83% compared with a fully accurate execution and up to 35% compared with an approximate version employing loop perforation. At the same time, our approach always results in graceful quality degradation.
Abstract:
We present TANC, a tree-augmented naive (TAN) classifier based on imprecise probabilities. TANC models prior near-ignorance via the Extreme Imprecise Dirichlet Model (EDM). A first contribution of this paper is the experimental comparison between EDM and the global Imprecise Dirichlet Model (IDM) using the naive credal classifier (NCC), with the aim of showing that EDM is a sensible approximation of the global IDM. TANC is able to deal with missing data in a conservative manner by considering all possible completions (without assuming them to be missing-at-random), while avoiding an exponential increase of the computational time. By experiments on real data sets, we show that TANC is more reliable than the Bayesian TAN and that it provides better performance compared to previous TANs based on imprecise probabilities. Yet, TANC is sometimes outperformed by NCC because the learned TAN structures are too complex; this calls for novel algorithms for learning TAN structures better suited to an imprecise probability classifier.
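For context, a minimal sketch of the Imprecise Dirichlet Model's interval estimate for a class probability, which underlies credal classifiers such as NCC and TANC, is shown below; this is the generic global-IDM formula rather than the paper's EDM approximation:

```python
# With N observations, n_c of them in class c, and hyperparameter s (commonly
# s = 1 or s = 2), the IDM constrains the probability of c to an interval
# rather than a point estimate.
def idm_interval(n_c: int, n_total: int, s: float = 1.0):
    """Return the (lower, upper) IDM probability bounds for a class."""
    lower = n_c / (n_total + s)
    upper = (n_c + s) / (n_total + s)
    return lower, upper

# Example: 7 of 20 training instances belong to class c.
print(idm_interval(7, 20))   # -> (0.333..., 0.381...)
```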
Abstract:
As an alternative to externally bonded FRP reinforcement, near-surface mounted (NSM) FRP reinforcement can be used to effectively improve the flexural performance of RC beams. In such FRP-strengthened RC beams, end cover separation failure is one of the common failure modes. This failure mode involves the detachment of the NSM FRP reinforcement together with the concrete cover along the level of the tension steel reinforcement. This paper presents a new strength model for end cover separation failure in RC beams strengthened in flexure with NSM FRP strips (i.e. rectangular FRP bars with a sectional height-to-thickness ratio not less than 5), which was formulated on the basis of extensive numerical results from a parametric study undertaken using an efficient finite element approach. The proposed strength model consists of an approximate equation for the debonding strain of the FRP reinforcement at the critical cracked section and a conventional section analysis to relate this debonding strain to the moment acting on the same section (i.e. the debonding moment). Once the debonding strain is known, the load level at end cover separation of an FRP-strengthened RC beam can be easily determined for a given load distribution. Predictions from the proposed strength model are compared with those of two existing strength models of the same type and with available test results; the comparison shows that the proposed strength model is in close agreement with the test results and is far more accurate than the existing strength models.
Abstract:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique) shows that the proposed framework results in significantly higher quality for the same energy budget.
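An illustrative sketch of the configuration search this abstract describes: enumerate (cores, frequency, fraction of tasks executed accurately), predict the energy of each configuration with a simple analytical model, and keep the highest-quality configuration that fits the budget. The power and quality model below (quadratic dynamic power in frequency plus a static term, quality equal to the accurate-task ratio) is an assumption of this sketch, not the paper's calibrated energy model:

```python
from itertools import product

CORES = [2, 4, 8, 16]
FREQS_GHZ = [1.2, 1.8, 2.4]
ACCURATE_RATIOS = [0.25, 0.5, 0.75, 1.0]
WORK_UNITS = 1000.0                                 # total task work (arbitrary units)

def predict(cores, freq, ratio):
    """Return (energy in J, quality in [0, 1]) for one configuration."""
    work = WORK_UNITS * (0.4 + 0.6 * ratio)         # approximate tasks are cheaper
    time = work / (cores * freq)                    # idealised parallel scaling
    power = cores * (0.5 + 1.5 * freq ** 2) + 10.0  # dynamic + static power (W)
    return power * time, ratio

def best_configuration(budget_j):
    """Pick the feasible configuration with the highest predicted quality."""
    best = None
    for c, f, r in product(CORES, FREQS_GHZ, ACCURATE_RATIOS):
        energy, quality = predict(c, f, r)
        if energy <= budget_j and (best is None or quality > best[0]):
            best = (quality, energy, (c, f, r))
    return best                                     # (quality, energy, configuration)

print(best_configuration(budget_j=2000.0))
```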