47 results for convex subgraphs


Relevance:

10.00%

Publisher:

Abstract:

An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology that runs from the elicitation of knowledge about parameters to the final decision. It proposes an elicitation methodology in which the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation with interval analysis techniques. Nevertheless, the results provided by these techniques, often expressed as probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a single indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
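
As a rough illustration of the hybrid propagation idea outlined above – Monte Carlo simulation for the aleatory input, interval analysis for the epistemic one – the following Python sketch propagates both through a hypothetical model g and reports a probability interval for exceeding a threshold. The model, the distribution and the interval bounds are illustrative assumptions, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(a, e):
    """Hypothetical risk model combining an aleatory input a and an epistemic input e."""
    return a * e

# Aleatory uncertainty: a random variable sampled by Monte Carlo.
a_samples = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)

# Epistemic uncertainty: only an interval of possible values is known.
e_low, e_high = 0.8, 1.4
threshold = 2.0

# Interval analysis inside each Monte Carlo run: g is increasing in e here,
# so the output interval for each sample is [g(a, e_low), g(a, e_high)].
out_low, out_high = g(a_samples, e_low), g(a_samples, e_high)

# Probability interval for exceeding the threshold:
p_lower = np.mean(out_low > threshold)    # samples whose whole output interval exceeds the threshold
p_upper = np.mean(out_high > threshold)   # samples whose output interval can exceed the threshold
print(f"P(exceedance) lies in [{p_lower:.3f}, {p_upper:.3f}]")
```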

Relevance:

10.00%

Publisher:

Abstract:

According to Grivaux, the group GL(X) of invertible linear operators on a separable infinite-dimensional Banach space X acts transitively on the set s(X) of countable dense linearly independent subsets of X. As a consequence, each A ∈ s(X) is an orbit of a hypercyclic operator on X. Furthermore, every countably dimensional normed space supports a hypercyclic operator. Recently, Albanese extended this result to Fréchet spaces supporting a continuous norm. We show that, for a separable infinite-dimensional Fréchet space X, GL(X) acts transitively on s(X) if and only if X possesses a continuous norm. We also prove that every countably dimensional metrizable locally convex space supports a hypercyclic operator.
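
A compact restatement of the equivalence described above, with s(X) denoting the set of countable dense linearly independent subsets of X (this is only a summary of the result as stated in the abstract):

```latex
% X a separable infinite-dimensional Fréchet space
\[
  GL(X)\ \text{acts transitively on}\ s(X)
  \quad\Longleftrightarrow\quad
  X\ \text{possesses a continuous norm.}
\]
```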

Relevance:

10.00%

Publisher:

Abstract:

Healing algorithms play a crucial part in distributed peer-to-peer networks, where failures occur continuously and frequently. Whereas there are approaches to robustness that rely largely on built-in redundancy, we adopt a responsive approach that is more akin to that of biological networks, e.g. the brain. The general goal of self-healing distributed graphs is to maintain certain network properties while recovering from failure quickly and making only bounded alterations locally. Several self-healing algorithms have been suggested in the recent literature [IPDPS'08, PODC'08, PODC'09, PODC'11]; they heal various network properties while fulfilling competing requirements, such as keeping the degree increase low while maintaining connectivity, expansion and low stretch of the network. In this work, we augment the previous algorithms by adding the notion of edge-preserving self-healing, which requires the healing algorithm not to delete any edge originally present or adversarially inserted. This reflects the cost of adding additional edges; more importantly, it immediately follows that edge preservation helps maintain any monotonic subgraph-induced property, in particular important properties such as graph and subgraph densities. Density is an important network property, and in certain distributed networks maintaining it preserves high connectivity among certain subgraphs and backbones. We introduce a general model of self-healing and introduce xheal+, an edge-preserving version of xheal [PODC'11]. © 2012 IEEE.
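
To make the edge-preservation requirement concrete, here is a deliberately simplified Python sketch (it is not xheal+ and carries none of the paper's guarantees): when a node fails, its surviving neighbours are reconnected in a ring, and the repair only ever adds edges, so any monotonic subgraph-induced property, such as the density of a surviving subgraph, cannot decrease because of healing.

```python
import networkx as nx

def heal_node_failure(G: nx.Graph, failed) -> None:
    """Toy edge-preserving repair: never deletes surviving edges, only adds new ones."""
    neighbours = list(G.neighbors(failed))
    G.remove_node(failed)                     # the failure itself removes the node and its incident edges
    # Reconnect the orphaned neighbours in a ring to restore local connectivity.
    for u, v in zip(neighbours, neighbours[1:] + neighbours[:1]):
        if u != v:
            G.add_edge(u, v)                  # additions only: every surviving edge is preserved

G = nx.cycle_graph(6)              # 0-1-2-3-4-5-0
heal_node_failure(G, 3)            # node 3 fails; its neighbours 2 and 4 are reconnected
print(sorted(G.edges()))
```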

Relevance:

10.00%

Publisher:

Abstract:

This article introduces a resource allocation solution capable of handling mixed media applications within the constraints of a 60 GHz wireless network. The challenges of multimedia wireless transmission include high bandwidth requirements, delay intolerance and wireless channel availability. A new Channel Time Allocation Particle Swarm Optimization (CTA-PSO) is proposed to solve the network utility maximization (NUM) resource allocation problem. CTA-PSO optimizes the time allocated to each device in the network in order to maximize the Quality of Service (QoS) experienced by each user. CTA-PSO introduces network-linked swarm size, an increased diversity function and a learning method based on the personal best, Pbest, results of the swarm. These additional developments to the PSO produce improved convergence speed with respect to Adaptive PSO while maintaining the QoS improvement of the NUM. Specifically, CTA-PSO supports applications described by both convex and non-convex utility functions. The multimedia resource allocation solution presented in this article provides a practical solution for real-time wireless networks.
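
The core of the approach – a particle swarm searching over per-device channel-time shares to maximise a network-utility objective – can be sketched as below. This is a generic PSO on the simplex of time allocations with illustrative log utilities and rates; it omits CTA-PSO's network-linked swarm size, diversity function and Pbest-based learning.

```python
import numpy as np

rng = np.random.default_rng(1)
n_devices, n_particles, n_iters = 4, 20, 200
rates = np.array([2.0, 1.0, 3.0, 1.5])        # hypothetical per-device achievable rates

def utility(t):
    """Sum of log throughputs; t holds the per-device shares of the channel time."""
    return np.sum(np.log(1e-9 + rates * t))

def project(t):
    """Keep allocations non-negative and normalised to the available channel time."""
    t = np.clip(t, 1e-6, None)
    return t / t.sum()

pos = np.array([project(p) for p in rng.random((n_particles, n_devices))])
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([utility(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.array([project(p) for p in pos + vel])
    vals = np.array([utility(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("time shares:", np.round(gbest, 3), "utility:", round(utility(gbest), 4))
```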

Relevance:

10.00%

Publisher:

Abstract:

Herein we report the intra- and inter-molecular assembly of a {V5O9} subunit. This mixed-valent structural motif can be stabilised as [V5O9(L1–3)4]5−/9− (1–3) by a range of organoarsonate ligands (L1–L3) whose secondary functionalities influence its packing arrangement within the crystal structures. Variation of the reaction conditions results in the dodecanuclear cage structure [V12O14(OH)4(L1)10]4− (4), in which two modified convex building units are linked via two dimeric {O4VIV(OH)2VIVO4} moieties. Bi-functional phosphonate ligands L4–L6 allow the intramolecular connectivity of the {V5O9} subunit, giving hybrid capsules [V10O18(L4–6)4]10− (5–7). The dimensions of the electrophilic cavities of the capsular entities are determined by the incorporated ligand type. Mass spectrometry experiments confirm the stability of the complexes in solution. We investigate and model the temperature-dependent magnetic properties of the representative complexes 1, 4, 6 and 7 and provide preliminary cell-viability studies on three different cancer cell lines with respect to Na8H2[6]·36H2O and Na8H2[7]·2DMF·29H2O.

Relevance:

10.00%

Publisher:

Abstract:

Context. Comet 67P/Churyumov-Gerasimenko is the target of the European Space Agency Rosetta spacecraft rendezvous mission. Detailed physical characterisation of the comet before arrival is important for mission planning, as well as for providing a test bed for ground-based observing and data-analysis methods. Aims: To conduct a long-term observational programme to characterize the physical properties of the nucleus of the comet via ground-based optical photometry, and to combine our new data with all available nucleus data from the literature. Methods: We applied aperture photometry techniques to our imaging data and combined the extracted rotational lightcurves with data from the literature. Optical lightcurve inversion techniques were applied to constrain the spin state of the nucleus and its broad shape. We performed a detailed surface thermal analysis with the shape model and optical photometry by incorporating both into the new Advanced Thermophysical Model (ATPM), along with all available Spitzer 8-24 μm thermal-IR flux measurements from the literature. Results: A convex triangular-facet shape model was determined with axial ratios b/a = 1.239 and c/a = 0.819. These values can vary by as much as 7% in each axis and still result in a statistically significant fit to the observational data. Our best spin state solution has Psid = 12.76137 ± 0.00006 h, and a rotational pole orientated at Ecliptic coordinates λ = 78° (±10°), β = +58° (±10°). The nucleus phase darkening behaviour was measured and best characterized using the IAU HG system. Best-fit parameters are G = 0.11 ± 0.12 and HR(1,1,0) = 15.31 ± 0.07. Our shape model combined with the ATPM can satisfactorily reconcile all optical and thermal-IR data, with the fit to the Spitzer 24 μm data taken in February 2004 being exceptionally good. We derive a range of mutually consistent physical parameters for each thermal-IR data set, including effective radius, geometric albedo, surface thermal inertia and roughness fraction. Conclusions: The overall nucleus dimensions are well constrained and strongly imply a broad nucleus shape more akin to comet 9P/Tempel 1 than to the highly elongated or "bi-lobed" nuclei seen for comets 103P/Hartley 2 or 8P/Tuttle. The derived low thermal inertia of
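
For reference, the IAU H, G phase-darkening system mentioned above gives the reduced magnitude at phase angle α as below (the standard two-parameter form, with the commonly quoted single-term approximation for the basis functions; the abstract's fitted values are G = 0.11 and HR(1,1,0) = 15.31):

```latex
% IAU H,G phase function (two-parameter magnitude system)
\[
  H_R(1,1,\alpha) = H_R(1,1,0)
    - 2.5\,\log_{10}\!\left[(1-G)\,\Phi_1(\alpha) + G\,\Phi_2(\alpha)\right],
  \qquad
  \Phi_i(\alpha) \approx \exp\!\left(-A_i \tan^{B_i}\!\tfrac{\alpha}{2}\right),
\]
% with A_1 = 3.33, B_1 = 0.63, A_2 = 1.87, B_2 = 1.22.
```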

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: The present work was planned to report the incidence of calcification and ossification of an isolated cranial dural fold. The form, degree of severity and range of extension of such changes will be described. Involvement of the neighboring brain tissue and blood vessels, whether meningeal or cerebral, will also be determined. The results of this study might draw the attention of radiologists and neurologists to the occasional occurrence of intracranial calcification and ossification in images of the head, and to its possible dural or vascular origin.

METHODS: Two formalin-fixed human cadavers, one of a middle-aged female and the other of an older male, were investigated at the Anatomy Laboratory, College of Medicine, King Faisal University, Dammam, Kingdom of Saudi Arabia, during the period from 2000 to 2003. In each cadaver, the skullcap was removed and the convexity of the cranial dura mater, as well as the individual dural folds, were carefully examined for any calcification or ossification. The meningeal and cerebral blood vessels, together with the underlying brain, were grossly inspected for such structural changes. Calcified or ossified tissues, when identified, were subjected to histological examination to confirm their structure.

RESULTS: The female cadaver showed a calcified parietal emissary vein piercing the skullcap and projecting into the scalp. The latter looked paler and deficient in hair on its right side. The base of the stump was surrounded by a granular patch of calcification. The upper convex border of the falx cerebri was hardened and it presented granules, plaques and a cauliflower mass, which all proved to be osseous in structure. The meningeal and right cerebral vessels were mottled with calcium granules. The underlying temporal and parietal lobes of the right cerebral hemisphere were degenerated. The male cadaver also revealed a calcified upper border of the falx cerebri and superior sagittal sinus. Osseous granules and plaques, similar to those of the first specimen, were also identified but without gross changes in the underlying brain.

CONCLUSION: Calcification or ossification of an isolated site of the cranial dura mater and the intracranial blood vessels might occur. These changes should be kept in mind while interpreting images of the skull and brain. Clinical assessment and laboratory investigations are required to determine whether such changes are idiopathic, traumatic, or a manifestation of a generalized disease such as hyperparathyroidism, vitamin D intoxication, or chronic renal failure.

Relevance:

10.00%

Publisher:

Abstract:

Credal networks relax the precise probability requirement of Bayesian networks, enabling a richer representation of uncertainty in the form of closed convex sets of probability measures. The increase in expressiveness comes at the expense of higher computational costs. In this paper, we present a new variable elimination algorithm for exactly computing posterior inferences in extensively specified credal networks, which is empirically shown to outperform a state-of-the-art algorithm. The algorithm is then turned into a provably good approximation scheme, that is, a procedure that, for any input, is guaranteed to return a solution within a given factor of the optimum. Remarkably, we show that when the networks have bounded treewidth and a bounded number of states per variable, the approximation algorithm runs in time polynomial in the input size and in the inverse of the error factor, thus being the first known fully polynomial-time approximation scheme for inference in credal networks.
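
As a minimal illustration of what a posterior inference in a credal network computes (this is vertex enumeration on a toy two-node network, not the paper's variable elimination algorithm or its approximation scheme), the following Python sketch derives a posterior probability interval by combining the extreme points of hypothetical local credal sets:

```python
import itertools

# Hypothetical two-node credal network A -> B with binary variables.
# Each local credal set is given by the extreme points of a closed convex set of PMFs.
K_A = [{0: 0.3, 1: 0.7}, {0: 0.5, 1: 0.5}]                 # extreme points of K(A)
K_B = {
    0: [{0: 0.9, 1: 0.1}, {0: 0.8, 1: 0.2}],               # extreme points of K(B | A=0)
    1: [{0: 0.4, 1: 0.6}, {0: 0.2, 1: 0.8}],               # extreme points of K(B | A=1)
}

def posterior_A0_given_B1(pA, pB0, pB1):
    """Bayes' rule P(A=0 | B=1) for one selection of extreme points."""
    joint0 = pA[0] * pB0[1]          # P(A=0, B=1)
    joint1 = pA[1] * pB1[1]          # P(A=1, B=1)
    return joint0 / (joint0 + joint1)

values = [
    posterior_A0_given_B1(pA, pB0, pB1)
    for pA, pB0, pB1 in itertools.product(K_A, K_B[0], K_B[1])
]
# The credal posterior inference is the interval spanned over all extreme-point combinations.
print(f"P(A=0 | B=1) lies in [{min(values):.3f}, {max(values):.3f}]")
```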

Relevance:

10.00%

Publisher:

Abstract:

Economic and environmental load dispatch aims to determine the amount of electricity generated from power plants to meet load demand while minimizing fossil fuel costs and air pollution emissions, subject to operational and licensing requirements. These two scheduling problems are commonly formulated with non-smooth cost functions that account for various effects and constraints, such as the valve-point effect, power balance and ramp-rate limits. The expected increase in plug-in electric vehicles is likely to have a significant impact on the power system, due to high charging power consumption and significant uncertainty in charging times. In this paper, multiple electric vehicle charging profiles are comparatively integrated into a 24-hour load demand in an economic and environmental dispatch model. Self-learning teaching-learning based optimization (TLBO) is employed to solve the non-convex non-linear dispatch problems. Numerical results on well-known benchmark functions, as well as on test systems with different scales of generation units, show the significance of the new scheduling method.
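
The non-smooth generation cost typically used in such formulations (a quadratic fuel cost plus a rectified sinusoidal valve-point term) and the teacher phase of a basic TLBO loop can be sketched as follows. The three-unit system, its coefficients and the crude power-balance repair are illustrative placeholders, not the paper's self-learning TLBO or its test systems; emission objectives and EV charging profiles are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-unit system: cost_i(P) = a + b*P + c*P^2 + |e * sin(f * (Pmin - P))|
a = np.array([100.0, 120.0, 90.0])
b = np.array([2.0, 1.8, 2.2])
c = np.array([0.010, 0.012, 0.008])
e = np.array([50.0, 40.0, 30.0])
f = np.array([0.060, 0.050, 0.070])
Pmin, Pmax = np.array([50.0, 50.0, 40.0]), np.array([300.0, 250.0, 200.0])
demand = 450.0

def cost(P):
    return np.sum(a + b * P + c * P**2 + np.abs(e * np.sin(f * (Pmin - P))))

def repair(P):
    """Crude feasibility repair: respect unit limits, then rescale towards the demand."""
    P = np.clip(P, Pmin, Pmax)
    return np.clip(P * demand / P.sum(), Pmin, Pmax)

# Basic TLBO teacher phase: learners move towards the best (teacher) solution.
pop = np.array([repair(rng.uniform(Pmin, Pmax)) for _ in range(20)])
for _ in range(200):
    costs = np.array([cost(P) for P in pop])
    teacher, mean = pop[costs.argmin()], pop.mean(axis=0)
    TF = rng.integers(1, 3)                                   # teaching factor in {1, 2}
    trial = np.array([repair(P + rng.random(3) * (teacher - TF * mean)) for P in pop])
    trial_costs = np.array([cost(P) for P in trial])
    better = trial_costs < costs
    pop[better] = trial[better]

best = pop[np.array([cost(P) for P in pop]).argmin()]
print("dispatch (MW):", np.round(best, 1), "cost:", round(cost(best), 1))
```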

Relevance:

10.00%

Publisher:

Abstract:

Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to the non-convex characteristics and large dimensionality of the problem, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems which are, for example, nonlinear, large-scale, non-convex and discontinuous. This hybrid algorithm combines the clonal selection algorithm (CSA) as the local search technique to update the best individual in the population, which enhances the diversity of the solutions and prevents premature convergence in DE. Furthermore, we investigate four mutation operations which are used in CSA as the hyper-mutation operations. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system and so guarantee the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of solution quality and convergence characteristics.
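
The two ingredients named above – DE with best/1 mutation and binomial crossover, plus a clonal-selection-style hyper-mutation that locally refines the best individual – might be combined roughly as in the sketch below. The sphere objective, the single Gaussian hyper-mutation operator and all parameter values are placeholders, not the paper's DELD formulation, its four hyper-mutation operators or its repair method.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, pop_size, iters = 10, 30, 300
F, CR = 0.5, 0.9                                # DE scale factor and crossover rate

def objective(x):
    return np.sum(x**2)                         # placeholder for the DELD cost function

pop = rng.uniform(-5, 5, size=(pop_size, dim))
fitness = np.array([objective(x) for x in pop])

for _ in range(iters):
    best = pop[fitness.argmin()].copy()
    for i in range(pop_size):
        # DE/best/1 mutation: v = x_best + F * (x_r1 - x_r2) with r1 != r2 != i
        r1, r2 = rng.choice([j for j in range(pop_size) if j != i], size=2, replace=False)
        mutant = best + F * (pop[r1] - pop[r2])
        # Binomial crossover between the target vector and the mutant
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True
        trial = np.where(mask, mutant, pop[i])
        if (f_trial := objective(trial)) < fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    # Clonal-selection-style local search: clone and hyper-mutate the best individual
    best_idx = fitness.argmin()
    clones = pop[best_idx] + rng.normal(scale=0.1, size=(5, dim))
    clone_fit = np.array([objective(c) for c in clones])
    if clone_fit.min() < fitness[best_idx]:
        pop[best_idx], fitness[best_idx] = clones[clone_fit.argmin()], clone_fit.min()

print("best cost:", round(float(fitness.min()), 6))
```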

Relevance:

10.00%

Publisher:

Abstract:

Economic dispatch (ED) problems often exhibit non-linear, non-convex characteristics due to valve-point effects. Further, various constraints and factors, such as prohibited operating zones, ramp-rate limits and security constraints imposed by the generating units, as well as power loss in transmission, make it even more challenging to obtain the global optimum using conventional mathematical methods. Meta-heuristic approaches are capable of solving non-linear, non-continuous and non-convex problems effectively, as they impose no requirements on the optimization problems. However, most methods reported so far mainly focus on a specific type of ED problem, such as static or dynamic ED. This paper proposes a hybrid harmony search with an arithmetic crossover operation, namely ACHS, for solving five different types of ED problems: static ED with valve-point effects, ED with prohibited operating zones, ED considering multiple fuel cells, combined heat and power ED, and dynamic ED. In the proposed ACHS, the global best information and arithmetic crossover are used to update the newly generated solution and speed up convergence, which contributes to the algorithm's exploitation capability. To balance the exploitation and exploration capabilities, an opposition-based learning (OBL) strategy is employed to enhance the diversity of solutions. Further, four commonly used crossover operators are investigated, and the arithmetic crossover proves more efficient than the others when incorporated into HS. To make a comprehensive study of its scalability, ACHS is first tested on a group of benchmark functions with 100 dimensions and compared with several state-of-the-art methods. It is then used to solve seven different ED cases and compared with the results reported in the literature. All the results confirm the superiority of ACHS for different optimization problems.
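
A stripped-down version of the ingredients named above – harmony-memory improvisation, an arithmetic crossover with the global best, and opposition-based learning for diversity – might look like the sketch below. It minimises a placeholder function rather than any of the five ED formulations, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, hms, iters = 10, 20, 500
lower, upper = -5.0, 5.0
HMCR, PAR, bw = 0.9, 0.3, 0.1                   # harmony search parameters

def objective(x):
    return np.sum(x**2)                         # placeholder for an ED cost function

memory = rng.uniform(lower, upper, size=(hms, dim))
scores = np.array([objective(x) for x in memory])

for _ in range(iters):
    best = memory[scores.argmin()]
    # Improvise a new harmony: memory consideration, pitch adjustment, random selection.
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < HMCR:
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < PAR:
                new[d] += bw * rng.uniform(-1, 1)
        else:
            new[d] = rng.uniform(lower, upper)
    # Arithmetic crossover with the global best to speed up convergence.
    lam = rng.random()
    new = np.clip(lam * new + (1 - lam) * best, lower, upper)
    # Opposition-based learning: also evaluate the opposite point, keep the better one.
    opposite = lower + upper - new
    if objective(opposite) < objective(new):
        new = opposite
    # Replace the worst harmony in memory if the new one improves on it.
    worst = scores.argmax()
    if (f_new := objective(new)) < scores[worst]:
        memory[worst], scores[worst] = new, f_new

print("best cost:", round(float(scores.min()), 6))
```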

Relevance:

10.00%

Publisher:

Abstract:

In 1974, pursuing his interest in the infra-ordinary – ‘the banal, the quotidian, the obvious, the common, the ordinary, the background noise, the habitual’ – Georges Perec wrote about an idea for a novel:
‘I imagine a Parisian apartment building whose façade has been removed … so that all the rooms in the front, from the ground floor up to the attics, are instantly and simultaneously visible’.
In Life A User’s Manual (1978), the consummation of this précis, patterns of existence are measured within architectural space with an archaeological sensibility that sifts through narrative and décor, structure and history, services and emotion, the personal and the system, ascribing commensurate value to each.
Apartment comes from the Italian appartare meaning ‘to separate’. The space of the boundary between activities is reduced to a series of intimately thin lines: the depth of a floor, a party wall, a window, the convex peep-hole in a door, or the façade that Perec seeks to render invisible. The apartness of the apartment is accelerated when aligned with short-term tenancies. Here Perec’s interweaving of personal histories over time using the structure of the block, gives way to convivialities of detachment: inhabitants are temporary, their personalities anonymous, their activities unknown or overlooked.
Borrowing methods from Perec – moving somewhere between conjecture, analysis and other documentation, and tracing relationships between form, structure, materiality, technology, organisation, tenure and narrative use – this paper interrogates the late twentieth-century speculative apartment block in Britain and Ireland, arguing that its speculative and commodified purpose allows a series of lives that are often less than ordinary to inhabit its spaces.
Henri Lefebvre described the emergence of an ‘abstract space’ under capitalism in terms which can be applied to the apartment building: the division of space into freely alienable privatised parcels which can be exchanged. Vertical distributions of class and other new, contiguous social and spatial relationships are couched within a paradox: the building which allows such proximities is also a conductor of division.

Relevance:

10.00%

Publisher:

Abstract:

We propose and advocate basic principles for the fusion of incomplete or uncertain information items that should apply regardless of the formalism adopted for representing pieces of information coming from several sources. This formalism can be based on sets, logic, partial orders, possibility theory, belief functions or imprecise probabilities. We propose a general notion of information item representing incomplete or uncertain information about the values of an entity of interest. It is supposed to rank such values in terms of relative plausibility and to explicitly point out impossible values. Basic issues affecting the results of the fusion process, such as the relative information content and consistency of information items, as well as their mutual consistency, are discussed. For each representation setting, we present fusion rules that obey our principles and compare them to postulates specific to that representation proposed in the past. In the crudest (Boolean) representation setting (using a set of possible values), we show that whether the set is understood in terms of most plausible values or in terms of non-impossible ones matters for choosing a relevant fusion rule. In particular, in the latter case our principles justify the method of maximal consistent subsets, while the former is related to the fusion of logical bases. We then consider several formal settings for incomplete or uncertain information items in which our postulates are instantiated: plausibility orderings, qualitative and quantitative possibility distributions, belief functions and convex sets of probabilities. The aim of this paper is to provide a unified picture of fusion rules across various uncertainty representation settings.
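
In the Boolean (set-based) setting discussed above, the maximal-consistent-subsets rule can be illustrated with a short Python sketch: each source supplies the set of values it does not rule out, and the fused result is the union of the intersections over maximal consistent groups of sources. The example sets are invented for illustration.

```python
from itertools import combinations

def maximal_consistent_subsets(sources):
    """Groups of sources with a non-empty joint intersection that cannot be enlarged."""
    consistent = []
    for k in range(len(sources), 0, -1):            # largest groups first, so maximality is easy to check
        for group in combinations(range(len(sources)), k):
            inter = set.intersection(*(sources[i] for i in group))
            if inter and not any(set(group) < set(g) for g, _ in consistent):
                consistent.append((group, inter))
    return consistent

def fuse(sources):
    """Fusion by maximal consistent subsets: union of the groups' intersections."""
    return set().union(*(inter for _, inter in maximal_consistent_subsets(sources)))

# Three sources reporting the non-impossible values of an unknown quantity.
sources = [{1, 2, 3}, {2, 3, 4}, {5, 6}]
print(maximal_consistent_subsets(sources))   # [((0, 1), {2, 3}), ((2,), {5, 6})]
print(fuse(sources))                         # {2, 3, 5, 6}
```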

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy that maximizes the secrecy rate of the secondary system. By utilizing the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system, while satisfying all the interference power constraints at the primary users as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a non-convex program, which can be reformulated as a convex program by applying the rank relaxation method. To this end, we prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle-point problem based on a duality result. To find the global optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method to solve the resulting unconstrained problem, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
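
The zero-forcing step described above – constraining the jamming noise so that it causes no interference to the secondary user – amounts to restricting the JN to the null space of the secondary user's channel. A minimal numpy sketch of that projection follows, with a randomly generated channel standing in for the actual system model; it does not cover the rank relaxation, interior-point or BFGS stages.

```python
import numpy as np

rng = np.random.default_rng(5)
n_tx = 4                                              # transmit antennas at the secondary transmitter

# Hypothetical MISO channel from the secondary transmitter to the secondary user.
h_su = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)

# Orthonormal basis of the null space of h_su^H: jamming noise sent along these
# directions is invisible to the secondary user (the zero-forcing constraint).
_, _, vh = np.linalg.svd(h_su.conj().reshape(1, -1))
null_basis = vh[1:, :].conj().T                       # shape (n_tx, n_tx - 1)

# Example jamming covariance: equal power spread over the null-space directions.
jn_power = 1.0
Q_jn = (jn_power / (n_tx - 1)) * null_basis @ null_basis.conj().T

# Verify the zero-forcing property: the JN contributes (numerically) zero power at the SU.
print("interference power at SU:", np.real(h_su.conj() @ Q_jn @ h_su))
```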