984 results for Linear Optimization
Abstract:
Black-box optimization problems (BBOPs) are defined as those optimization problems in which the objective function does not have an algebraic expression, but is instead the output of a system (usually a computer program). This paper focuses on BBOPs that arise in the field of insurance, and more specifically in reinsurance problems. In this area, the complexity of the models and assumptions considered to define the reinsurance rules and conditions produces hard black-box optimization problems that must be solved in order to obtain the optimal output of the reinsurance. The application of traditional optimization approaches is not possible in BBOPs, so new computational paradigms must be applied to solve these problems. In this paper we show the performance of two evolutionary-based techniques (Evolutionary Programming and Particle Swarm Optimization). We provide an analysis of three BBOPs in reinsurance, where the evolutionary-based approaches exhibit excellent behaviour, finding the optimal solution within a fraction of the computational cost used by inspection or enumeration methods.
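The particle swarm idea mentioned in the abstract can be sketched in a few lines: each particle tracks its own best position and is pulled toward both that and the swarm-wide best. This is a minimal generic PSO, not the paper's reinsurance models; the objective, bounds, and hyperparameters below are illustrative assumptions.

```python
# Minimal particle swarm optimization (PSO) sketch for a black-box objective.
# The sphere function stands in for the black box; all parameters are illustrative.
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])                # one black-box evaluation
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the 3-D sphere function as a stand-in black box.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

The only information the routine extracts from the problem is objective values, which is what makes it applicable when no algebraic expression is available.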
Abstract:
Linear response functions are implemented for a vibrational configuration interaction state, allowing accurate analytical calculations of pure vibrational contributions to dynamical polarizabilities. Sample calculations are presented for the pure vibrational contributions to the polarizabilities of water and formaldehyde. We discuss the convergence of the results with respect to various details of the vibrational wave function description as well as the potential and property surfaces. We also analyze the frequency dependence of the linear response function and the effect of accounting phenomenologically for the finite lifetime of the excited vibrational states. Finally, we compare the analytical response approach to a sum-over-states approach.
Abstract:
A variational approach for reliably calculating vibrational linear and nonlinear optical properties of molecules with large electrical and/or mechanical anharmonicity is introduced. This approach utilizes a self-consistent solution of the vibrational Schrödinger equation for the complete field-dependent potential-energy surface and then adds higher-level vibrational correlation corrections as desired. An initial application is made to static properties for three molecules of widely varying anharmonicity using the lowest-level vibrational correlation treatment (i.e., vibrational Møller-Plesset perturbation theory). Our results indicate when the conventional Bishop-Kirtman perturbation method can be expected to break down and when high-level vibrational correlation methods are likely to be required. Future improvements and extensions are discussed.
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
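The linear-time assembly idea can be sketched as a dynamic program: scan exons by increasing acceptor position while a second pointer, ordered by donor position, folds completed exons into a running "best gene score ending upstream." The exon representation and the compatibility rule below are simplified assumptions (position-based only, no reading-frame or Gene Model constraints), not the paper's full algorithm.

```python
# Sketch of linear-time gene assembly (after sorting) with additive scores.
# Exons are (acceptor, donor, score) tuples; a gene is a chain of exons in
# which each exon's acceptor lies strictly downstream of the previous donor.
def assemble(exons):
    by_acceptor = sorted(exons, key=lambda e: e[0])
    by_donor = sorted(exons, key=lambda e: e[1])
    best_prefix = 0.0   # best score of any gene ending strictly upstream
    best_total = {}     # best gene score ending at each exon
    best_overall = 0.0
    j = 0
    for acc, don, sc in by_acceptor:
        # fold in every exon whose donor lies upstream of this acceptor;
        # its best gene is already known because its acceptor was scanned earlier
        while j < len(by_donor) and by_donor[j][1] < acc:
            best_prefix = max(best_prefix, best_total[by_donor[j]])
            j += 1
        best_total[(acc, don, sc)] = sc + best_prefix
        best_overall = max(best_overall, sc + best_prefix)
    return best_overall

# Chains: (1-10)+(12-20)+(22-30) scores 5+3+2 = 10.0
best = assemble([(1, 10, 5.0), (12, 20, 3.0), (5, 15, 4.0), (22, 30, 2.0)])
```

Each exon is visited once by each pointer, so the scan itself is linear in the number of exons; adding frame compatibility would require one running maximum per frame class rather than a search over all preceding exons.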
Abstract:
A basic prerequisite for in vivo X-ray imaging of the lung is the exact determination of radiation dose. Achieving resolutions of the order of micrometres may become particularly challenging owing to increased dose, which in the worst case can be lethal for the imaged animal model. A framework for linking image quality to radiation dose in order to optimize experimental parameters with respect to dose reduction is presented. The approach may find application for current and future in vivo studies to facilitate proper experiment planning and radiation risk assessment on the one hand and exploit imaging capabilities on the other.
Abstract:
America’s roadways are in serious need of repair. According to the American Society of Civil Engineers (ASCE), one-third of the nation’s roads are in poor or mediocre condition. ASCE has estimated that under these circumstances American drivers will sacrifice $5.8 billion and as many as 13,800 fatalities a year from 1999 to 2001 (1). A large factor in the deterioration of these roads is how well the steel reinforcement transfers loads across the concrete slabs. Fabricating this reinforcement in a shape conducive to transferring these loads will help minimize roadway damage. Load transfer within a series of concrete slabs takes place across the joints. For a typical concrete paved road, these joints are approximately 1/8-inch gaps between two adjacent slabs. Dowel bars are located at these joints and used to transfer load from one slab to its adjacent slabs. As long as the dowel bar is completely surrounded by concrete, no problems will occur. However, when the hole becomes oblong, a void space is created and difficulties can arise. This void space is formed due to a stress concentration where the dowel contacts the concrete. Over time, the repeated process of traffic traveling over the joint crushes the concrete surrounding the dowel bar and causes a void in the concrete. This void inhibits the dowel’s ability to effectively transfer load across the joint. Furthermore, this void gives water and other particles a place to collect, which will eventually corrode and potentially bind or lock the joint so that no thermal expansion is allowed. Once load is no longer transferred across the joint, the load is transferred to the foundation and differential settlement of the adjacent slabs will occur.
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, that provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
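The multiplicativity property the abstract refers to can be illustrated with Shamir's scheme over a prime field, the canonical multiplicative LSSS: multiplying two parties' shares pointwise yields shares of the product secret on a higher-degree polynomial. This is a didactic sketch under illustrative parameters, not the paper's constructions.

```python
# Shamir secret sharing over GF(P) and its multiplicative property.
# Shares of degree-t polynomials, multiplied pointwise, are degree-2t
# shares of the product; 2t+1 of them reconstruct it.
import random

P = 2**31 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n, rng):
    # random degree-t polynomial with constant term = secret
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(0)
a, b = 1234, 5678
sa, sb = share(a, 2, 7, rng), share(b, 2, 7, rng)   # t=2, n=7 parties
# multiplicativity: pointwise products are degree-4 shares of a*b
prod_shares = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
```

Strong multiplicativity, the focus of the paper's first open problem, additionally requires the product to be reconstructible by the honest parties alone, which is where the Berlekamp–Welch-style decoding argument enters.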
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detections are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm that processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms has very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
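The size-reduce-and-swap pattern underlying LLL and ASLR is easiest to see in two dimensions, where it becomes Gauss (Lagrange) reduction. The sketch below is this simplest relative of the algorithms the abstract discusses, under the assumption of an integer 2-D basis; it is not the systolic-array design itself.

```python
# Two-dimensional Gauss (Lagrange) lattice basis reduction: repeatedly
# size-reduce the longer vector against the shorter one and swap, until
# the basis is ordered by length. LLL generalizes this to n dimensions.
def gauss_reduce(u, v):
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u                       # keep u the shorter vector
    while True:
        # subtract the nearest-integer multiple of u from v (size reduction)
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v                   # u is now a shortest lattice vector
        u, v = v, u

# The basis (1,1),(3,4) spans Z^2; reduction recovers unit-length vectors.
u, v = gauss_reduce((1, 1), (3, 4))
```

In a receiver, detecting on the reduced basis and mapping back is what makes simple linear detectors behave almost like maximum-likelihood ones, which is the motivation for putting the reduction step in hardware.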
Abstract:
The chemistry of today’s concrete mixture designs is complicated by many variables, including multiple sources of aggregate and cements and a plethora of sometimes incompatible mineral and chemical admixtures. Concrete paving has undergone significant changes in recent years as new materials have been introduced into concrete mixtures. Supplementary cementitious materials such as fly ash and ground granulated blast furnace slag are now regularly used. In addition, many new admixtures that were not even available a few years ago now have widespread usage. Adding to the complexity are construction variables such as weather, mix delivery times, finishing practices, and pavement opening schedules. Mixture materials, mix design, and pavement construction are not isolated steps in the concrete paving process. Each affects and is affected by the other in ways that determine overall pavement quality and long-term performance. Equipment and procedures commonly used to test concrete materials and concrete pavements have not changed in decades, leaving serious gaps in our ability to understand and control the factors that determine concrete durability. The concrete paving community needs tests that will adequately characterize the materials, predict interactions, and monitor the properties of the concrete.
Abstract:
Tumor Endothelial Marker-1 (TEM1/CD248) is a tumor vascular marker with high therapeutic and diagnostic potential. Immuno-imaging with TEM1-specific antibodies can help to detect cancerous lesions, monitor tumor responses, and select patients that are most likely to benefit from TEM1-targeted therapies. In particular, near infrared (NIR) optical imaging with biomarker-specific antibodies can provide real-time, tomographic information without exposing the subjects to radioactivity. To maximize the theranostic potential of TEM1, we developed a panel of fully human, multivalent Fc-fusion proteins based on a previously identified single chain antibody (scFv78) that recognizes both human and mouse TEM1. By characterizing avidity, stability, and pharmacokinetics, we identified one fusion protein, 78Fc, with desirable characteristics for immuno-imaging applications. The biodistribution of radiolabeled 78Fc showed that this antibody had minimal binding to normal organs, which have low expression of TEM1. Next, we developed a 78Fc-based tracer and tested its performance in different TEM1-expressing mouse models. The NIR imaging and tomography results suggest that the 78Fc-NIR tracer performs well in distinguishing mouse- or human-TEM1-expressing tumor grafts from normal organs and control grafts in vivo. From these results we conclude that further development and optimization of 78Fc as a TEM1-targeted imaging agent for use in clinical settings is warranted.
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression (an alternative to stepwise selection of predictors) and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance the application of these methods to ecological modeling.
Abstract:
The development of CT applications might become a public health problem if no effort is made to justify and optimise the examinations. This paper presents some hints to ensure that the risk-benefit compromise remains in favour of the patient, especially when one deals with the examinations of young patients. In this context, particular attention has to be paid to the justification of the examination. When performing the acquisition, one needs to optimise the extension of the volume investigated together with the number of acquisition sequences used. Finally, the use of automatic exposure systems, now available on all units, and of Diagnostic Reference Levels (DRLs) should help radiologists control the exposure of their patients.