998 results for Mild solution


Relevance: 20.00%

Abstract:

There is a natural norm associated with a starting point of the homogeneous self-dual (HSD) embedding model for conic convex optimization. In this norm, two measures of the HSD model's behavior are precisely controlled independent of the problem instance: (i) the sizes of ε-optimal solutions, and (ii) the maximum distance of ε-optimal solutions to the boundary of the cone of the HSD variables. This norm is also useful in developing a stopping-rule theory for HSD-based interior-point methods such as SeDuMi. Under mild assumptions, we show that a standard stopping rule implicitly involves the sum of the sizes of the ε-optimal primal and dual solutions, as well as the size of the initial primal and dual infeasibility residuals. This theory suggests possible criteria for developing starting points for the homogeneous self-dual model that might improve the resulting solution time in practice.
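For orientation, a generic form of the HSD embedding and a SeDuMi-style relative-residual stopping rule are sketched below. This is standard background only; the specific norm, stopping rule, and constants analyzed in the abstract may differ.

```latex
% Generic homogeneous self-dual embedding for the conic problem
% min c^T x  s.t.  Ax = b, x in K  (standard background, not the paper's construction):
\[
\begin{aligned}
A x - b\,\tau &= 0,\\
A^{\mathsf T} y + s - c\,\tau &= 0,\\
-\,c^{\mathsf T} x + b^{\mathsf T} y - \kappa &= 0,\\
x \in K,\quad s \in K^{*},\quad \tau \ge 0,\quad \kappa \ge 0.
\end{aligned}
\]
% With \tau > 0, the candidate solution is (x/\tau, y/\tau, s/\tau), and a SeDuMi-style
% rule stops once all relative residuals fall below a tolerance \varepsilon:
\[
\max\left\{
\frac{\|A x/\tau - b\|}{1+\|b\|},\;
\frac{\|A^{\mathsf T} y/\tau + s/\tau - c\|}{1+\|c\|},\;
\frac{| c^{\mathsf T} x/\tau - b^{\mathsf T} y/\tau |}{1+| b^{\mathsf T} y/\tau |}
\right\} \le \varepsilon.
\]
```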

Relevance: 20.00%

Abstract:

In this thesis we study the general problem of reconstructing a function defined on a finite lattice from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal-to-noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms we develop on non-conventional hardware, such as massively parallel digital machines and analog and hybrid networks.
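As a rough illustration of the kind of Monte Carlo approximation described in point (1), the sketch below estimates the posterior mean of a piecewise-constant binary signal on a 1-D lattice, under an Ising-style Gibbsian prior and Gaussian noise, via Gibbs sampling. The toy problem, prior, and parameter values are hypothetical and are not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: recover a piecewise-constant binary signal f in {0,1}^N
# from noisy observations g = f + noise, using a Gibbsian (Ising-style) prior and a
# Monte Carlo approximation of the posterior-mean estimator.
N, sigma, beta = 100, 0.5, 1.5                      # lattice size, noise std, smoothness weight
f_true = (np.arange(N) > N // 2).astype(float)
g = f_true + sigma * rng.normal(size=N)

def local_energy(f, i, v):
    """Posterior energy contribution of setting site i to value v."""
    data = (g[i] - v) ** 2 / (2 * sigma ** 2)
    neigh = sum(beta * (v != f[j]) for j in (i - 1, i + 1) if 0 <= j < N)
    return data + neigh

f = rng.integers(0, 2, size=N).astype(float)         # random initial configuration
samples = np.zeros(N)
n_sweeps, burn_in = 200, 50
for sweep in range(n_sweeps):
    for i in range(N):
        # Gibbs update: sample site i from its conditional distribution given the rest.
        e0, e1 = local_energy(f, i, 0.0), local_energy(f, i, 1.0)
        p1 = 1.0 / (1.0 + np.exp(e1 - e0))
        f[i] = float(rng.random() < p1)
    if sweep >= burn_in:
        samples += f

posterior_mean = samples / (n_sweeps - burn_in)      # Monte Carlo estimate of E[f | g]
f_hat = (posterior_mean > 0.5).astype(float)         # thresholded marginals as a point estimate
print("misclassified sites:", int(np.sum(f_hat != f_true)))
```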

Relevance: 20.00%

Abstract:

How much information about the shape of an object can be inferred from its image? In particular, can the shape of an object be reconstructed by measuring the light it reflects from points on its surface? These questions were raised by Horn [HO70], who formulated a set of conditions under which the image formation can be described in terms of a first-order partial differential equation, the image irradiance equation. In general, an image irradiance equation has infinitely many solutions, so constraints necessary to find a unique solution need to be identified. First we study the continuous image irradiance equation. It is demonstrated when and how knowledge of the position of edges on a surface can be used to reconstruct the surface. Furthermore, we show how much about the shape of a surface can be deduced from so-called singular points. At these points the surface orientation is uniquely determined by the measured brightness. Then we investigate images in which certain types of silhouettes, which we call b-silhouettes, can be detected. In particular, we answer the following question in the affirmative: is there a set of constraints which ensure that if an image irradiance equation has a solution, it is unique? To this end we postulate three constraints upon the image irradiance equation and prove that they are sufficient to uniquely reconstruct the surface from its image. Furthermore, it is shown that any two of these constraints are insufficient to assure a unique solution to an image irradiance equation. Examples are given which illustrate the different issues. Finally, an overview of known numerical methods for computing solutions to an image irradiance equation is presented.
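For reference, the image irradiance equation and the standard Lambertian reflectance map from Horn's formulation are written out below; the particular reflectance maps and constraints treated in the work itself may differ.

```latex
% Image irradiance equation: the measured brightness E equals the reflectance map R
% evaluated at the surface gradient (p, q) of the depth function z(x, y).
\[
E(x,y) = R\bigl(p(x,y),\, q(x,y)\bigr),
\qquad p = \frac{\partial z}{\partial x}, \quad q = \frac{\partial z}{\partial y}.
\]
% Standard Lambertian example with a distant light source whose direction has gradient (p_s, q_s):
\[
R(p,q) = \frac{1 + p\,p_s + q\,q_s}{\sqrt{1+p^2+q^2}\,\sqrt{1+p_s^2+q_s^2}}.
\]
% At a singular point the brightness attains the maximum of R, which forces (p, q) = (p_s, q_s),
% so the surface orientation there is determined uniquely, as noted in the abstract.
```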

Relevance: 20.00%

Abstract:

A novel process is developed in this paper for utilizing coalmine-drained methane gas, which is usually vented straight into the atmosphere in most coalmines worldwide. It is expected that low-cost syngas can be produced by the combined air partial oxidation and CO2 reforming of methane, because this process directly utilizes the methane, air, and carbon dioxide in the coalmine-drained gas without a separation step. For this purpose, a nickel-magnesia solid solution catalyst was prepared and its catalytic performance for the proposed process was investigated. It was found that calcination temperature has a significant influence on the catalytic performance due to the differing extent of solid solution formation in the catalysts. A uniform nickel-magnesia solid solution catalyst exhibits higher stability than catalysts in which NiO has not completely formed a solid solution with MgO; its catalytic activity and selectivity remain stable during 120 h of reaction. The product H2/CO ratio is mainly dependent on the feed gas composition: by changing the CO2/air ratio of the feed gases, syngas with an H2/CO ratio between 1 and 1.9 can be obtained. The influences of reaction temperature and nickel loading on the catalytic performance were also investigated. (c) 2004 Elsevier B.V. All rights reserved.
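As a back-of-the-envelope illustration of why the H2/CO ratio is governed mainly by the feed composition, the sketch below combines the ideal stoichiometries of partial oxidation (H2/CO = 2) and CO2 reforming (H2/CO = 1), assuming complete conversion and no side reactions; the function and feed values are illustrative. This idealized ratio always lies between 1 and 2, consistent with the reported 1-1.9 range.

```python
def ideal_h2_co_ratio(n_ch4, n_air, n_co2):
    """Idealized H2/CO ratio for combined air partial oxidation and CO2 reforming of CH4,
    assuming both reactions go to completion and there are no side reactions:
        CH4 + 1/2 O2 -> CO + 2 H2    (H2/CO = 2)
        CH4 + CO2    -> 2 CO + 2 H2  (H2/CO = 1)
    Inputs are feed amounts in moles (or any consistent molar basis)."""
    n_o2 = 0.21 * n_air                       # O2 content of air
    ch4_pox = min(2 * n_o2, n_ch4)            # CH4 consumed by partial oxidation
    ch4_dry = min(n_co2, n_ch4 - ch4_pox)     # remaining CH4 available for dry reforming
    h2 = 2 * ch4_pox + 2 * ch4_dry
    co = ch4_pox + 2 * ch4_dry
    return h2 / co if co else float("nan")

# More CO2 in the feed pushes the ratio toward 1, more air pushes it toward 2.
print(ideal_h2_co_ratio(n_ch4=1.0, n_air=1.0, n_co2=0.5))   # ~1.3 for this illustrative feed
```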

Relevance: 20.00%

Abstract:

A mild and efficient copper-catalyzed system for the N-arylation of alkylamines and N-H heterocycles with aryl iodides was developed, using a novel, readily prepared and highly stable oxime-functionalized phosphine oxide ligand. The coupling reactions could even be performed under solvent-free conditions with moderate to good yields. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

Cobalt boride precursors were synthesized via the chemical reaction of aqueous sodium borohydride with cobalt chloride, followed by heat treatment at various temperatures. The as-prepared Co-B catalysts were characterized and analyzed by X-ray diffraction (XRD), nitrogen adsorption-desorption and catalytic activity tests, and were used to accelerate the hydrolysis of alkaline NaBH4 solution. The Co-B catalyst treated at 500 degrees C exhibits the best catalytic activity and achieves an average H2 generation rate of 2970 ml/min/g, which may provide a sustained H2 supply for a 481 W proton exchange membrane fuel cell (PEMFC) at 100% H2 utilization. (c) 2005 Elsevier B.V. All rights reserved.
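As a rough sanity check on how an H2 generation rate translates into fuel-cell power, the sketch below applies Faraday's law at 100% H2 utilization. The catalyst mass and the 0.7 V average cell voltage are assumed values chosen for illustration, not figures taken from the paper.

```python
F = 96485.0      # C/mol, Faraday constant
V_M = 22414.0    # mL/mol, molar volume of an ideal gas at STP

def pemfc_power_w(h2_rate_ml_min_g, catalyst_mass_g, cell_voltage_v=0.7):
    """Rough electrical power sustainable by a PEMFC fed at the given H2 generation rate,
    assuming 100% H2 utilization, ideal-gas volumes at STP, and an assumed average cell
    voltage; catalyst mass and the 0.7 V default are illustrative inputs."""
    mol_h2_per_s = h2_rate_ml_min_g * catalyst_mass_g / V_M / 60.0
    current_a = 2 * F * mol_h2_per_s          # each H2 molecule supplies 2 electrons
    return current_a * cell_voltage_v

# Roughly 300 W per gram of catalyst under these assumptions, so a few grams of catalyst
# would be enough to feed a stack in the few-hundred-watt class.
print(pemfc_power_w(2970.0, catalyst_mass_g=1.0))
```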

Relevance: 20.00%

Abstract:

It is in the interests of everybody that the environment is protected. In view of the recent leaps in environmental awareness it would seem timely and sensible, therefore, for people to pool vehicle resources to minimise the damaging impact of emissions. However, this is often contrary to how complex social systems behave: local decisions made by self-interested individuals often have emergent effects that are in the interests of nobody. For software engineers a major challenge is to help facilitate individual decision-making such that individual preferences can be met, which, when accumulated, minimise adverse effects at the level of the transport system. We introduce this general problem through a concrete example based on vehicle-sharing. Firstly, we outline the kind of complex transportation problem that is directly addressed by our technology (CO2y™, pronounced “cosy”), and also show how this differs from other more basic software solutions. The CO2y™ architecture is then briefly introduced. We outline the practical advantages of the advanced, intelligent software technology that is designed to satisfy a number of individual preference criteria and thereby find appropriate matches within a population of vehicle-share users. An example scenario of use is put forward, i.e., the minimisation of grey fleets within a medium-sized company. Here we comment on some of the underlying assumptions of the scenario, and how in a detailed real-world situation such assumptions might differ between different companies and individual users. Finally, we summarise the paper, and conclude by outlining how the problem of pooled transportation is likely to benefit from the further application of emergent, nature-inspired computing technologies. These technologies allow systems-level behaviour to be optimised with explicit representation of individual actors. With these techniques we hope to make real progress in facing the complexity challenges that transportation problems produce.

Relevance: 20.00%

Abstract:

Mishuris, G.; Kuhn, G. (2001) 'Asymptotic behaviour of the elastic solution near the tip of a crack situated at a nonideal interface', Zeitschrift für Angewandte Mathematik und Mechanik 81(12), pp. 811-826. RAE2008

Relevance: 20.00%

Abstract:

The hybridization kinetics for a series of designed 25mer probe-target pairs having varying degrees of secondary structure have been measured by UV absorbance in solution and by surface plasmon resonance (SPR) spectroscopy on the surface. Kinetic rate constants derived from the resultant data decrease with increasing probe and target secondary structure similarly in both solution and surface environments. Specifically, the addition of three intramolecular base pairs in the probe and target structure slows hybridization by a factor of two. For individual strands containing four or more intramolecular base pairs, hybridization cannot be described by a traditional two-state model, either in the solution phase or on the surface. Surface hybridization rates are also 20- to 40-fold slower than solution-phase rates for identical sequences and conditions. These quantitative findings may have implications for the design of better biosensors, particularly those using probes with deliberate secondary structure.
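For context, the traditional two-state model referred to above can be sketched as a simple Langmuir-type rate equation, shown below; the target concentration and rate constants are illustrative placeholders, not the measured values, and the factor-of-two slowdown is mimicked by halving the association rate constant.

```python
import numpy as np

def two_state_hybridization(t, c_target, k_on, k_off):
    """Fraction of hybridized probe versus time under the simple two-state (Langmuir-type)
    model  d(theta)/dt = k_on*C*(1 - theta) - k_off*theta, starting from theta(0) = 0.
    The study finds this model breaks down for strands with >= 4 intramolecular base pairs."""
    k_obs = k_on * c_target + k_off                 # observed (pseudo-first-order) rate
    theta_eq = k_on * c_target / k_obs              # equilibrium hybridized fraction
    return theta_eq * (1.0 - np.exp(-k_obs * t))

t = np.linspace(0.0, 600.0, 7)                      # seconds
unstructured = two_state_hybridization(t, 1e-7, k_on=1e6, k_off=1e-4)
structured   = two_state_hybridization(t, 1e-7, k_on=5e5, k_off=1e-4)   # ~2x slower association
print(np.round(unstructured, 3))
print(np.round(structured, 3))
```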

Relevance: 20.00%

Abstract:

The Border Gateway Protocol (BGP) is an interdomain routing protocol that allows each Autonomous System (AS) to define its own routing policies independently and use them to select the best routes. By means of policies, ASes are able to prevent some traffic from accessing their resources, or to direct their traffic to a preferred route. However, this flexibility comes at the cost of possible divergent behavior caused by mutually conflicting policies. Since BGP is not guaranteed to converge even in the absence of network topology changes, it is not safe. In this paper, we propose a randomized approach to providing safety in BGP. The proposed algorithm dynamically detects policy conflicts and tries to eliminate each conflict by changing the local preference of the paths involved. Both the detection and the elimination of policy conflicts are performed locally, i.e., using only local information. Randomization is introduced to prevent synchronous updates of the local preferences of paths involved in the same conflict.
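The snippet below is a schematic sketch of the randomized idea only, not the paper's algorithm: an AS detects a suspected policy conflict from purely local route-flap history and demotes the offending path's LOCAL_PREF with some probability, so that ASes caught in the same conflict are unlikely to update in lockstep. All class names, thresholds, and parameters are hypothetical.

```python
import random

class PolicyConflictResolver:
    """Hypothetical local-only sketch of randomized conflict resolution for one AS."""

    def __init__(self, local_pref, history_len=8):
        self.local_pref = dict(local_pref)     # path id -> LOCAL_PREF value
        self.best_history = []                 # recent best-path choices
        self.history_len = history_len

    def record_best(self, path_id):
        """Record the currently selected best path after each decision process run."""
        self.best_history.append(path_id)
        self.best_history = self.best_history[-self.history_len:]

    def conflict_suspected(self):
        """Crude local detector: the best path keeps flapping between candidates."""
        return (len(self.best_history) == self.history_len
                and len(set(self.best_history)) > 1)

    def maybe_resolve(self, p_update=0.5, penalty=10):
        """With probability p_update, demote the path involved in the suspected conflict;
        the randomization keeps neighboring ASes from adjusting simultaneously."""
        if self.conflict_suspected() and random.random() < p_update:
            flapping = max(set(self.best_history), key=self.best_history.count)
            self.local_pref[flapping] -= penalty
            self.best_history.clear()

# Illustrative use: two candidate paths whose selection keeps alternating.
r = PolicyConflictResolver({"via_AS2": 100, "via_AS3": 100})
for best in ["via_AS2", "via_AS3"] * 4:
    r.record_best(best)
r.maybe_resolve()
print(r.local_pref)
```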

Relevance: 20.00%

Abstract:

A neural network model of 3-D visual perception and figure-ground separation by visual cortex is introduced. The theory provides a unified explanation of how a 2-D image may generate a 3-D percept; how figures pop out from cluttered backgrounds; how spatially sparse disparity cues can generate continuous surface representations at different perceived depths; how representations of occluded regions can be completed and recognized without usually being seen; how occluded regions can sometimes be seen during percepts of transparency; how high spatial frequency parts of an image may appear closer than low spatial frequency parts; how sharp targets are detected better against a figure and blurred targets are detected better against a background; how low spatial frequency parts of an image may be fused while high spatial frequency parts are rivalrous; how sparse blue cones can generate vivid blue surface percepts; how 3-D neon color spreading, visual phantoms, and tissue contrast percepts are generated; and how conjunctions of color-and-depth may rapidly pop out during visual search. These explanations are derived from an ecological analysis of how monocularly viewed parts of an image inherit the appropriate depth from contiguous binocularly viewed parts, as during DaVinci stereopsis. The model predicts the functional role and ordering of multiple interactions within and between the two parvocellular processing streams that join LGN to prestriate area V4. Interactions from cells representing larger scales and disparities to cells representing smaller scales and disparities are of particular importance.

Relevance: 20.00%

Abstract:

The insider threat is a well-known security problem with a long history, yet it still remains an invisible enemy. Insiders know the security processes and have access that allows them to easily cover their tracks. In recent years the idea of monitoring separately for these threats has come into its own. However, the tools currently in use have disadvantages, and one of the most effective techniques, human review, is costly. This paper explores the development of an intelligent agent that uses already-in-place computing infrastructure for inference as an inexpensive monitoring tool for insider threats. Design Science Research (DSR) is the methodology used to explore and develop an IT artifact, such as the intelligent agent in this research. This methodology provides a structure that can guide a deep search method for problems that may not be possible to solve or that could add to a phenomenological instantiation.