851 results for interval-valued fuzzy sets (IVFS)


Relevance:

20.00%

Publisher:

Abstract:

Methadone is administered as a chiral mixture of (R,S)-methadone. The opioid effect is mainly mediated by (R)-methadone, whereas (S)-methadone blocks the human ether-à-go-go-related gene (hERG) voltage-gated potassium channel more potently, which can cause drug-induced long QT syndrome, leading to potentially lethal ventricular tachyarrhythmias. To investigate whether substitution of (R,S)-methadone by (R)-methadone could reduce the corrected QT (QTc) interval, (R,S)-methadone was replaced by (R)-methadone (half-dose) in 39 opioid-dependent patients receiving maintenance treatment for 14 days. (R)-methadone was then replaced by the initial dose of (R,S)-methadone for 14 days (n = 29). Trough (R)-methadone and (S)-methadone plasma levels and electrocardiogram measurements were taken. The Fridericia-corrected QT (QTcF) interval decreased when (R,S)-methadone was replaced by a half-dose of (R)-methadone; the median (interquartile range [IQR]) values were 423 (398-440) milliseconds (ms) and 412 (395-431) ms (P = .06) at days 0 and 14, respectively. Using a univariate mixed-effect linear model, the QTcF value changed by a mean of -3.9 ms (95% confidence interval [CI], -7.7 to -0.2) per week (P = .04). The QTcF value increased when (R)-methadone was replaced by the initial dose of (R,S)-methadone for 14 days; median (IQR) values were 424 (398-436) ms and 424 (412-443) ms (P = .01) at days 14 and 28, respectively. The univariate model showed that the QTcF value increased by a mean of 4.7 ms (95% CI, 1.3-8.1) per week (P = .006). Substitution of (R,S)-methadone by (R)-methadone reduces the QTc interval. This safer cardiac profile of (R)-methadone is in agreement with previous in vitro and pharmacogenetic studies. If the present results are confirmed by larger studies, (R)-methadone should be prescribed instead of (R,S)-methadone to reduce the risk of cardiac toxic effects and sudden death.
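
For reference, the Fridericia correction used throughout normalizes the measured QT interval by the cube root of the RR interval (the time between successive heartbeats), both in seconds:

$$ \mathrm{QTcF} = \frac{QT}{\sqrt[3]{RR}} $$

A QTcF much above about 450 ms is commonly flagged as prolonged, which is why shifts of a few milliseconds in the ranges reported above are of clinical interest.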

Relevance:

20.00%

Publisher:

Abstract:

Relation algebra is one of the state-of-the-art tools used by mathematicians and computer scientists for solving very complex problems. As a result, a computer algebra system for relation algebras called RelView has been developed at Kiel University. RelView works within the standard model of relation algebras. However, relation algebras have other models, which may have different properties. For example, in the standard model we always have L;L = L (the composition of two (heterogeneous) universal relations yields a universal relation). This is not true in some non-standard models; therefore, every example in RelView satisfies this property even though it does not hold in general. On the other hand, it has been shown that every relation algebra with relational sums and subobjects can be seen as a matrix algebra, similar to the correspondence between binary relations on sets and Boolean matrices. The aim of my research is to develop a new system that works with both standard and non-standard models of arbitrary relations using multiple-valued decision diagrams (MDDs). This system will implement relations as matrix algebras. The proposed structure is a library written in C which can be imported by other languages such as Java or Haskell.
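
To make the standard-model correspondence concrete, the following minimal Python sketch (illustrative only, not part of the proposed C library) represents heterogeneous relations as Boolean matrices; composition is then Boolean matrix multiplication, and the law L;L = L holds whenever the intermediate set is non-empty:

```python
def compose(Q, R):
    """Relational composition Q;R: (Q;R)[i][k] holds iff Q[i][j] and R[j][k] for some j."""
    return [[any(Q[i][j] and R[j][k] for j in range(len(R)))
             for k in range(len(R[0]))] for i in range(len(Q))]

def universal(m, n):
    """The universal relation L between an m-element and an n-element set."""
    return [[True] * n for _ in range(m)]

# L;L = L in the standard model (here with sets of sizes 2, 3 and 4):
assert compose(universal(2, 3), universal(3, 4)) == universal(2, 4)
```

In a non-standard model this law can fail, which is precisely the gap between RelView and the MDD-based system proposed here.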

Relevance:

20.00%

Publisher:

Abstract:

Rough Set Data Analysis (RSDA) is a non-invasive data analysis approach that relies solely on the data to find patterns and decision rules. Despite its non-invasive approach and its ability to generate human-readable rules, classical RSDA has not been successfully used in commercial data mining and rule-generating engines. The reason is its scalability: classical RSDA slows down considerably on larger data sets and takes much longer to generate the rules. This research aims to address the issue of scalability in rough sets by improving the performance of the attribute reduction step of classical RSDA, which is the root cause of its slow performance. We propose to move the entire attribute reduction process into the database. We defined a new schema to store the initial data set and then defined SQL queries on this schema that find the attribute reducts correctly and faster than the traditional RSDA approach. We tested our technique on two typical data sets and compared our results with the traditional RSDA approach for attribute reduction. Finally, we highlight some issues with our proposed approach that could lead to future research.
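
Although the thesis schema and queries are not reproduced in this abstract, the core idea can be sketched: an attribute subset preserves the decision exactly when no two rows that agree on those attributes disagree on the decision, and this consistency test maps directly onto a GROUP BY query. A minimal illustration in Python with SQLite (table and column names are hypothetical):

```python
import sqlite3

# Hypothetical decision table: condition attributes a1, a2, a3 and decision d.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE decisions (a1, a2, a3, d)")
con.executemany("INSERT INTO decisions VALUES (?, ?, ?, ?)",
                [(0, 1, 0, "yes"), (0, 1, 1, "yes"), (1, 0, 0, "no")])

def preserves_decision(attrs):
    """True iff rows agreeing on `attrs` never disagree on the decision d."""
    cols = ", ".join(attrs)
    (conflicts,) = con.execute(
        f"SELECT COUNT(*) FROM (SELECT {cols} FROM decisions "
        f"GROUP BY {cols} HAVING COUNT(DISTINCT d) > 1)").fetchone()
    return conflicts == 0

print(preserves_decision(["a1"]))  # True: a1 alone determines d, a reduct candidate
print(preserves_decision(["a3"]))  # False: two rows share a3 = 0 but differ on d
```

An attribute reduct is then a minimal subset passing this test, so the search for reducts becomes a sequence of such queries executed inside the database rather than in application memory.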

Relevance:

20.00%

Publisher:

Abstract:

Heyting categories, a variant of Dedekind categories, and Arrow categories provide a convenient framework for expressing and reasoning about fuzzy relations and about programs based on such relations. In this thesis we present an implementation of Heyting and Arrow categories suitable for reasoning and program execution using Coq, an interactive theorem prover based on a higher-order logic with dependent types. This implementation can be used to specify and develop correct software based on L-fuzzy relations, such as fuzzy controllers. We give an overview of lattices, L-fuzzy relations, category theory and dependent type theory before describing our implementation. In addition, we provide examples of program executions based on our framework.
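
For orientation: an L-fuzzy relation between sets A and B assigns each pair a membership degree Q(a, b) in the lattice L, and composition generalizes the crisp case by replacing existential quantification and conjunction with join and meet:

$$ (Q;R)(a,c) \;=\; \bigvee_{b \in B} \bigl( Q(a,b) \wedge R(b,c) \bigr) $$

With L the two-element lattice this reduces to ordinary relational composition; a fuzzy controller can be modelled as a composition of such relations between input, rule and output spaces.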

Relevance:

20.00%

Publisher:

Abstract:

We provide a survey of the literature on ranking sets of objects. The interpretations of those set rankings include those employed in the theory of choice under complete uncertainty, rankings of opportunity sets, set rankings that appear in matching theory, and the structure of assembly preferences. The survey is prepared for the Handbook of Utility Theory, vol. 2, edited by Salvador Barberà, Peter Hammond, and Christian Seidl, to be published by Kluwer Academic Publishers. The chapter number is provisional.

Relevance:

20.00%

Publisher:

Abstract:

We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first one uses a standard Wald-type statistic. The second approach assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets are derived by a projection technique. The latter has two advantages: first, confidence set validity is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases of transfers from Moroccan expatriates.
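
The projection technique rests on a simple set-theoretic fact: if C is a valid confidence set for the parameter vector θ, then its image g(C) is a valid, if conservative, confidence set for any transformation g(θ), because θ ∈ C implies g(θ) ∈ g(C):

$$ \Pr[\theta \in C] \ge 1 - \alpha \;\Longrightarrow\; \Pr\bigl[g(\theta) \in g(C)\bigr] \ge 1 - \alpha $$

Since the implication holds for every g simultaneously, one confidence set for the free parameters delivers simultaneous intervals for any number of endogenous variables, and its validity is untouched by nonlinearity of the model, which are exactly the two advantages claimed above.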

Relevance:

20.00%

Publisher:

Abstract:

We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled “permanent and transitory betas”. We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their “permanent beta” is greater than their “transitory beta” and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
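
In notation chosen here for illustration (the abstract itself fixes no symbols), the log price decomposes as p_t = q_t + u_t, with a permanent random walk with drift and a transitory mean-reverting component:

$$ dq_t = \mu\,dt + \sigma\,dW_t, \qquad du_t = -\kappa\,u_t\,dt + \sigma_u\,dZ_t, \quad \kappa > 0 $$

Continuously compounded returns over a sampling interval h, r_t(h) = p_t - p_{t-h}, then mix the two components in proportions that depend on h, which is the channel through which the estimated beta becomes a function of the sampling interval.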

Relevance:

20.00%

Publisher:

Abstract:

It is well known that standard asymptotic theory is not valid or is extremely unreliable in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, not for individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these techniques could previously be implemented only by using costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution involves the geometric properties of “quadrics”, and the resulting sets can be viewed as extensions of the usual confidence intervals and ellipsoids. Only least squares techniques are required to build the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
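
The geometric observation behind the analytic solution is that an Anderson-Rubin-type confidence set is a level set of a ratio of quadratic forms in the coefficient vector β, and hence a quadric:

$$ C_\beta \;=\; \{\beta : \mathrm{AR}(\beta) \le f_\alpha\} \;=\; \{\beta : \beta' A\,\beta + b'\beta + c \le 0\} $$

where A, b and c depend only on the data and the critical value f_α. Projecting C_β onto a single coefficient then amounts to extremizing a linear function over a quadric, which is what makes closed-form, least-squares-based projection bounds possible.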

Relevance:

20.00%

Publisher:

Abstract:

We discuss statistical inference problems associated with identification and testability in econometrics, emphasizing the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
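
To recall the central notion informally: a statistic T(X, θ₀) is pivotal when its distribution under the null hypothesis θ = θ₀ does not depend on unknown nuisance parameters ν,

$$ \Pr_{\theta_0,\,\nu}\bigl[\,T(X,\theta_0) \le t\,\bigr] \ \text{is the same for all admissible}\ \nu $$

Wald-type statistics fail this requirement under weak identification, because their null distribution depends on how strongly the parameters are identified; this is the sense in which they are not proper pivots.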

Relevance:

20.00%

Publisher:

Abstract:

The following properties of the core of a one-to-one matching problem are well known: (i) the core is non-empty; (ii) the core is a lattice; and (iii) the set of unmatched agents is identical for any two matchings belonging to the core. The literature on two-sided matching focuses almost exclusively on the core and studies its properties extensively. Our main result is the following characterization of (von Neumann-Morgenstern) stable sets in one-to-one matching problems: a set of matchings is a stable set only if it is a maximal set satisfying the following properties: (a) the core is a subset of the set; (b) the set is a lattice; (c) the set of unmatched agents is identical for any two matchings belonging to the set. Furthermore, a set is a stable set if it is the unique maximal set satisfying properties (a), (b) and (c). We also show that our main result does not extend from one-to-one matching problems to many-to-one matching problems.
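
For reference, the standard von Neumann-Morgenstern definition: given a dominance relation ≻ on matchings, a set S is stable when it is internally stable (no matching in S dominates another matching in S) and externally stable (every matching outside S is dominated by some matching in S):

$$ \nexists\, \mu, \mu' \in S : \mu' \succ \mu \qquad \text{and} \qquad \forall\, \mu \notin S\ \exists\, \mu' \in S : \mu' \succ \mu $$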

Relevance:

20.00%

Publisher:

Abstract:

We consider general allocation problems with indivisibilities where agents' preferences possibly exhibit externalities. In such contexts, many different core notions have been proposed. One of these is the gamma-core, in which blocking is allowed only via allocations where the non-blocking agents receive their endowments. We show that if there exists an allocation rule satisfying ‘individual rationality’, ‘efficiency’, and ‘strategy-proofness’, then for any problem with a non-empty gamma-core the allocation rule must choose a gamma-core allocation, and all agents are indifferent between all allocations in the gamma-core. We apply our result to housing markets, coalition formation and networks.
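
In symbols (notation chosen here for illustration): a coalition S γ-blocks an allocation x via an allocation y when every member of S strictly prefers y and everyone outside S keeps their endowment e, so that

$$ y \succ_i x \ \ \forall\, i \in S \qquad \text{and} \qquad y_j = e_j \ \ \forall\, j \notin S $$

Preferences are compared over whole allocations because of the possible externalities; the gamma-core is the set of allocations that no coalition can block in this way.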

Relevance:

20.00%

Publisher:

Abstract:

Consider a family of pairs (f_t, X_t)_{t ∈ J}, where J is an interval, f_t is a smooth real-valued function defined on a smooth compact manifold V, and X_t is a pseudo-gradient associated with f_t. The purpose of this thesis is to study the bifurcations undergone by the Morse complexes associated with these pairs. Two approaches are used: a direct study of the bifurcations, and an approach via homotopy. We show that the two approaches ultimately yield the same results from a functorial point of view.
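
For context, the standard construction behind this (not specific to the thesis): the Morse complex of a pair (f, X) has one generator per critical point of f, graded by Morse index, with a differential that counts flow lines of the pseudo-gradient X between critical points of adjacent index, for example with mod-2 coefficients:

$$ \partial x \;=\; \sum_{\operatorname{ind}(y)\, =\, \operatorname{ind}(x) - 1} \#\mathcal{M}(x,y)\; y \pmod 2 $$

where M(x, y) is the set of flow lines from x to y. The bifurcation question is precisely how this complex changes as t moves through the interval J.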

Relevance:

20.00%

Publisher:

Abstract:

In this thesis an attempt has been made to develop the properties of basic concepts in fuzzy graphs such as fuzzy bridges, fuzzy cutnodes, fuzzy trees and blocks in fuzzy graphs. The notion of the complement of a fuzzy graph is modified and some of its properties are studied. Since this notion of complement has only just been introduced, several properties relating a graph G and its complement that are available for crisp graphs can be studied for fuzzy graphs as well. Focusing mainly on fuzzy trees as defined by Rosenfeld in [10], several other types of fuzzy trees are defined depending on the acyclicity level of a fuzzy graph. It is observed that there are self-centered fuzzy trees. Some operations on fuzzy graphs are studied, and it is proved that the complement of the union of two fuzzy graphs is the join of their complements and that the complement of the join of two fuzzy graphs is the union of their complements. The study of fuzzy graphs made in this thesis is far from complete. The wide-ranging applications of graph theory and the interdisciplinary nature of fuzzy set theory, if properly blended together, could pave the way for substantial growth of fuzzy graph theory.
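
A minimal Python sketch of the modified complement the abstract refers to, under the common convention (assumed here) that for a fuzzy graph G = (σ, μ) the complement keeps the node memberships and sets the edge memberships to μ̄(u, v) = min(σ(u), σ(v)) - μ(u, v):

```python
def complement(sigma, mu):
    """Complement of a fuzzy graph G = (sigma, mu).

    sigma: dict node -> node membership in [0, 1]
    mu:    dict frozenset({u, v}) -> edge membership, with
           mu(u, v) <= min(sigma(u), sigma(v))
    """
    nodes = list(sigma)
    mu_bar = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            e = frozenset({u, v})
            mu_bar[e] = min(sigma[u], sigma[v]) - mu.get(e, 0.0)
    return sigma, mu_bar

# Under this definition the complement is an involution:
sigma = {"a": 0.8, "b": 0.6, "c": 1.0}
mu = {frozenset({"a", "b"}): 0.5}
_, mu_bar = complement(sigma, mu)
_, mu_back = complement(sigma, mu_bar)
assert abs(mu_back[frozenset({"a", "b"})] - 0.5) < 1e-12
```

The involution property is one of the features such a modified definition recovers from the crisp case, which the earlier definition (edges in the complement only where μ = 0) did not provide.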