12 results for non-ideal problems
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We propose a new family of risk measures, called GlueVaR, within the class of distortion risk measures. Analytical closed-form expressions are shown for the most frequently used distribution functions in financial and insurance applications. The relationship between GlueVaR, Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) is explained. Tail-subadditivity is investigated and it is shown that some GlueVaR risk measures satisfy this property. An interpretation in terms of risk attitudes is provided, and a discussion is given of the applicability to non-financial problems such as health, safety, environmental or catastrophic risk management.
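For context, the standard definitions of the two measures referred to in this abstract, in textbook notation (the GlueVaR distortion function itself is defined in the paper and is not reproduced here):

```latex
% Standard definitions, for a loss X with cdf F_X and confidence level \alpha \in (0,1):
\mathrm{VaR}_{\alpha}(X) = \inf\{\, x \in \mathbb{R} : F_X(x) \ge \alpha \,\}, \qquad
\mathrm{TVaR}_{\alpha}(X) = \frac{1}{1-\alpha} \int_{\alpha}^{1} \mathrm{VaR}_{u}(X)\, du .
```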
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q=1/2 case. We show that, when the residual principle is considered as a constraint, the q=1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution devised in this way contains a component that corresponds to the well-known regularized solution of Tikhonov (1977).
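As background, in standard notation (not the paper's own derivation), the Tikhonov-regularized solution of an ill-conditioned linear problem Ax = b and the Tsallis non-extensive entropy on which the maximum entropy principle is built are:

```latex
% Tikhonov regularization of Ax = b (standard form):
x_{\lambda} = \arg\min_{x}\; \|Ax - b\|_{2}^{2} + \lambda \|x\|_{2}^{2}
            = (A^{\top} A + \lambda I)^{-1} A^{\top} b ,
% Tsallis entropy of a discrete distribution p (recovers Shannon entropy as q \to 1):
S_{q}(p) = \frac{1 - \sum_{i} p_{i}^{\,q}}{q - 1} .
```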
Abstract:
A method for dealing with monotonicity constraints in optimal control problems is used to generalize some results in the context of monopoly theory, also extending the generalization to a large family of principal-agent programs. Our main conclusion is that many results on diverse economic topics, achieved under assumptions of continuity and piecewise differentiability in connection with the endogenous variables of the problem, still remain valid after replacing such assumptions by two minimal requirements.
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both exact and approximate, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
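A minimal sketch of the general idea (biased randomization of a greedy heuristic; the geometric bias and the nearest-neighbour TSP heuristic are illustrative assumptions, not the paper's exact procedure):

```python
import math
import random

def geometric_index(n, beta=0.3):
    """Pick an index in [0, n) with a geometrically decaying preference for 0."""
    k = int(math.log(1.0 - random.random()) / math.log(1.0 - beta))
    return k % n

def biased_randomized_nn_tour(dist, beta=0.3):
    """Nearest-neighbour TSP heuristic where, at each step, the next city is
    drawn from the distance-sorted candidate list with a geometric bias,
    instead of always taking the single closest city."""
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        candidates = sorted(unvisited, key=lambda j: dist[tour[-1]][j])
        nxt = candidates[geometric_index(len(candidates), beta)]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Running the routine many times yields a pool of alternative good tours:
# tours = [biased_randomized_nn_tour(dist) for _ in range(100)]
```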
Abstract:
We construct and analyze non-overlapping Schwarz methods for a preconditioned weakly over-penalized symmetric interior penalty (WOPSIP) method for elliptic problems.
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
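A minimal simulation sketch of the two-stage idea (an independent-Bernoulli incidence step followed by a logistic-normal allocation over the non-zero parts; the parameterization is an illustrative assumption, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_composition(pi, mu=0.0, sigma=1.0):
    """Stage 1: draw which parts are non-zero (the incidence vector).
    Stage 2: distribute the unit among the non-zero parts using a
    logistic-normal (softmax of a Gaussian) composition."""
    D = len(pi)
    incidence = rng.random(D) < pi            # which parts are present
    if not incidence.any():                   # force at least one non-zero part
        incidence[rng.integers(D)] = True
    z = rng.normal(mu, sigma, size=D)
    w = np.where(incidence, np.exp(z), 0.0)
    return incidence.astype(int), w / w.sum()

pi = np.array([0.9, 0.7, 0.4, 0.95])          # probability each part is non-zero
inc, comp = simulate_composition(pi)
print(inc, comp.round(3))                     # incidence row + compositional row
```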
Abstract:
Globalization involves several facility location problems that need to be handled at large scale. Location Allocation (LA) is a combinatorial problem in which the distances among points in the data space matter. Taking advantage of this distance property of the domain, we exploit the capability of clustering techniques to partition the data space in order to convert an initial large LA problem into several simpler LA problems. In particular, our motivating problem involves a huge geographical area that can be partitioned under overall conditions. We present different types of clustering techniques and then perform a cluster analysis over our dataset in order to partition it. After that, we solve the LA problem by applying a simulated annealing algorithm to the clustered and non-clustered data, in order to assess how profitable the clustering is and which of the presented methods is the most suitable.
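A minimal sketch of the cluster-then-solve decomposition (k-means partitioning with scikit-learn, followed by a simple simulated-annealing placement of one facility per cluster; the objective, neighbourhood move and cooling schedule are illustrative assumptions, not the paper's exact setup):

```python
import math
import random

import numpy as np
from sklearn.cluster import KMeans

def cost(facility_idx, points):
    """Total distance from every point to the single chosen facility."""
    return float(np.linalg.norm(points - points[facility_idx], axis=1).sum())

def anneal_facility(points, iters=2000, t0=1.0, alpha=0.999):
    """Simulated annealing over the index of the point used as facility."""
    current = random.randrange(len(points))
    cur_cost = cost(current, points)
    best, best_cost, t = current, cur_cost, t0
    for _ in range(iters):
        cand = random.randrange(len(points))
        cand_cost = cost(cand, points)
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
            current, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = current, cur_cost
        t *= alpha
    return best, best_cost

# Partition a large instance into k smaller LA problems, then solve each one.
points = np.random.default_rng(0).random((500, 2))
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(points)
for k in range(5):
    cluster = points[labels == k]
    idx, c = anneal_facility(cluster)
    print(f"cluster {k}: facility at {cluster[idx].round(3)}, cost {c:.2f}")
```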
Abstract:
Background: Cardiorespiratory arrest is one of the main health problems in developed countries, not only because of the mortality it causes but also because of the significant neurological sequelae suffered by those who survive. Up to 64% of survivors may present severe sequelae, and only 1.4% remain free of any type of neurological alteration. Several clinical trials show that mild induced hypothermia, i.e. the controlled lowering of body temperature, improves survival and neurological outcomes in unconscious adult patients after cardiopulmonary resuscitation. However, it is not entirely clear which patients are the best candidates for the therapy, nor what the ideal induction technique, target temperature, duration and optimal rewarming rate are. Objectives: The aim of the study is to examine therapeutic hypothermia as post-resuscitation care after cardiac arrest. Methodology: A literature search was carried out in the following databases: CSIC, Medline PubMed, CINAHL, Biblioteca Cochrane, Cuiden Plus, Dialnet, Scopus and ScienceDirect. Eight articles meeting the inclusion criteria were finally accepted: systematic reviews, clinical trials, literature reviews and expert-consensus documents, in Spanish or English, published between 2005 and 2013, whose study subjects were adults. Results: It is currently recommended that unconscious adult patients with return of spontaneous circulation after an out-of-hospital cardiac arrest be cooled to 32-34ºC for a period of 12-24 hours when the initial rhythm is ventricular fibrillation. Four treatment periods are established: induction (from admission to the unit until 33ºC is reached), maintenance (from reaching 33ºC until 24 hours later), rewarming (12 hours of temperature increase until 37ºC is reached) and thermal stabilization (the 12 hours after reaching 37ºC). The methods for inducing and maintaining hypothermia are diverse and fall into two groups: invasive and non-invasive techniques. Keywords: induced hypothermia, cardiac arrest, cooling techniques
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time that is exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting problems, resulting in a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
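A minimal sketch of the kind of blind, width-bounded search the abstract describes (novelty-based pruning at width 1; the state representation and the `successors` interface are illustrative assumptions):

```python
from collections import deque

def iw1(initial_state, goal_atoms, successors):
    """Breadth-first search that prunes any generated state which does not
    make at least one atom true for the first time (width-1 novelty pruning).

    initial_state: frozenset of atoms
    goal_atoms:    set of atoms that must all hold in a goal state
    successors:    function state -> iterable of (action, next_state)
    """
    seen_atoms = set(initial_state)
    queue = deque([(initial_state, [])])
    while queue:
        state, plan = queue.popleft()
        if goal_atoms <= state:
            return plan
        for action, nxt in successors(state):
            new_atoms = nxt - seen_atoms
            if new_atoms:                      # novel state: keep it
                seen_atoms |= new_atoms
                queue.append((nxt, plan + [action]))
            # non-novel states are pruned, which keeps the search polynomial
    return None
```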
Abstract:
“The liquidity crisis of the Spanish banks is largely due to the lack of confidence of foreign investors and, therefore, the changes that occur in the legislation should not affect the credibility, stability, legal certainty, predictability that markets expect” (Sergio Nasarre, 2011). In the current situation of economic crisis, many people have found they can no longer pay back the mortgage loans that were granted to them in order to purchase a dwelling. It is for this reason that, in light of the economic, political and social problems this poses, our paper studies the state of the Spanish real-estate system and of foreclosure, paying special attention to the solution that has been proposed recently as the best option for debtors that cannot make their mortgage payments: non-recourse mortgaging. We analyze this proposal from legal and economic perspectives in order to fully understand the effects that this change could imply. At the same time, this paper will also examine several alternatives we believe would ameliorate the situation of mortgage-holders, among them legal reforms, mortgage insurance, and non-recourse mortgaging itself.
Abstract:
This special issue aims to cover some problems related to non-linear and non-conventional speech processing. The origin of this volume is the ISCA Tutorial and Research Workshop on Non-Linear Speech Processing, NOLISP’09, held at the Universitat de Vic (Catalonia, Spain) on June 25–27, 2009. The series of NOLISP workshops, started in 2003, has become a biennial event whose aim is to discuss alternative techniques for speech processing that, in a sense, do not fit into mainstream approaches. A selection of papers based on the presentations delivered at NOLISP’09 has given rise to this issue of Cognitive Computation.
Abstract:
This paper deals with the goodness of the Gaussian assumption when designing second-order blind estimation methods in the context of digital communications. The low- and high-signal-to-noise ratio (SNR) asymptotic performance of the maximum likelihood estimator—derived assuming Gaussian transmitted symbols—is compared with the performance of the optimal second-order estimator, which exploits the actual distribution of the discrete constellation. The asymptotic study concludes that the Gaussian assumption leads to the optimal second-order solution if the SNR is very low or if the symbols belong to a multilevel constellation such as quadrature amplitude modulation (QAM) or amplitude-phase-shift keying (APSK). On the other hand, the Gaussian assumption can yield important losses at high SNR if the transmitted symbols are drawn from a constant modulus constellation such as phase-shift keying (PSK) or continuous-phase modulations (CPM). These conclusions are illustrated for the problem of direction-of-arrival (DOA) estimation of multiple digitally-modulated signals.