103 results for multi attribute utility theory
Abstract:
In Part I [""Fast Transforms for Acoustic Imaging-Part I: Theory,"" IEEE TRANSACTIONS ON IMAGE PROCESSING], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state of the art array imaging algorithms. We also propose using the KAT to accelerate general purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
Abstract:
This work proposes the use of evolutionary computation to jointly solve the multiuser channel estimation (MuChE) and maximum-likelihood detection problems in direct sequence code division multiple access (DS/CDMA). The effectiveness of the proposed heuristic approach is proven by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multi-user detection (MuD) show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexities of the GAMuChE and GAMuD algorithms were jointly analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
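As a rough illustration of the GA-based detection idea (a minimal sketch under simplifying assumptions, not the paper's GAMuD: synchronous CDMA, known amplitudes, and a toy likelihood used as fitness), a genetic algorithm can search the space of bit vectors directly:

```python
import random

def ga_mud(y, R, amps, pop_size=30, gens=60, pm=0.05, seed=1):
    """Toy GA multiuser detector: evolve bit vectors b in {-1,+1}^K
    maximizing the synchronous-CDMA log-likelihood surrogate
    2 b^T A y - b^T A R A b, with A = diag(amps) the known amplitudes,
    R the code correlation matrix and y the matched-filter outputs."""
    rng = random.Random(seed)
    K = len(y)
    A = amps

    def fitness(b):
        Ab = [A[k] * b[k] for k in range(K)]
        lin = 2 * sum(Ab[k] * y[k] for k in range(K))
        quad = sum(Ab[i] * R[i][j] * Ab[j]
                   for i in range(K) for j in range(K))
        return lin - quad

    # random initial population of candidate bit vectors
    pop = [[rng.choice((-1, 1)) for _ in range(K)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, K) if K > 1 else 0
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [-g if rng.random() < pm else g for g in child]  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

With the identity correlation matrix this reduces to matched-filter detection; a real DS/CDMA receiver would use the actual code correlation matrix and estimated channel gains.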
Abstract:
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and non-hub nodes must be assigned to only one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic, as well as a two-stage integrated tabu search heuristic, to solve this problem. With the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and allocational parts of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs. Published by Elsevier Ltd.
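The multi-start tabu search structure described above can be sketched as follows. This is a simplified surrogate of the USAHLP, with a hypothetical cost function `usahlp_cost` standing in for the full CAB/AP objective; it is not the authors' heuristic, only an illustration of the multi-start-plus-tabu pattern.

```python
import random

def usahlp_cost(hubs, dist, fixed_cost, alpha=0.75):
    """Surrogate objective: each non-hub is allocated to its cheapest hub
    (single allocation), inter-hub links are discounted by alpha, and
    each open hub pays a fixed cost.  Illustrative only."""
    n = len(dist)
    alloc = {i: min(hubs, key=lambda h: dist[i][h]) for i in range(n)}
    transfer = sum(dist[i][alloc[i]] for i in range(n))
    inter = alpha * sum(dist[h][k] for h in hubs for k in hubs if h < k)
    return transfer + inter + fixed_cost * len(hubs)

def tabu_search(hubs, dist, fixed_cost, iters=100, tenure=5):
    """Improve a hub set by swap moves (close one hub, open one node);
    the reverse of a performed swap is tabu for `tenure` iterations."""
    n = len(dist)
    best, cur = set(hubs), set(hubs)
    best_cost = usahlp_cost(best, dist, fixed_cost)
    tabu = {}
    for t in range(iters):
        moves = [(h, j) for h in cur for j in range(n)
                 if j not in cur and tabu.get((h, j), -1) < t]
        if not moves:
            break
        h, j = min(moves, key=lambda m: usahlp_cost(cur - {m[0]} | {m[1]},
                                                    dist, fixed_cost))
        cur = cur - {h} | {j}
        tabu[(j, h)] = t + tenure          # forbid the reverse swap
        c = usahlp_cost(cur, dist, fixed_cost)
        if c < best_cost:
            best, best_cost = set(cur), c
    return best, best_cost

def multi_start(dist, fixed_cost, starts=10, seed=0):
    """Multi-start wrapper: random initial hub sets, each improved by
    tabu search; keep the best solution found over all starts."""
    rng = random.Random(seed)
    n = len(dist)
    best, best_cost = None, float("inf")
    for _ in range(starts):
        p = rng.randint(1, max(1, n // 2))
        sol, c = tabu_search(set(rng.sample(range(n), p)), dist, fixed_cost)
        if c < best_cost:
            best, best_cost = sol, c
    return best, best_cost
```

The two-stage integrated variant in the paper would additionally apply tabu search to the allocation part; here allocation is fixed to the cheapest hub for brevity.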
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switchings. Problems of this kind have been recently considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending some previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps. (C) 2008 Elsevier Ltd. All rights reserved.
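To give a feel for what a backward Riccati difference recursion looks like, here is the scalar linear-quadratic case, shown for illustration only; the paper's equations are interconnected across Markov regimes and are not reproduced here.

```python
def riccati_backward(a, b, q, r, qT, T):
    """Scalar backward Riccati difference recursion for an LQ problem
    x_{t+1} = a x_t + b u_t with stage cost q x^2 + r u^2 and terminal
    cost qT x^2:  P_t = q + a^2 P_{t+1} - (a b P_{t+1})^2 / (r + b^2 P_{t+1}),
    solved from t = T down to t = 0."""
    P = [0.0] * (T + 1)
    P[T] = qT
    for t in range(T - 1, -1, -1):
        g = (a * b * P[t + 1]) / (r + b * b * P[t + 1])   # feedback gain
        P[t] = q + a * a * P[t + 1] - g * a * b * P[t + 1]
    return P
```

The optimal control at each step is the linear feedback u_t = -g_t x_t; in the regime-switching setting each regime carries its own coupled recursion.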
Abstract:
Time-domain reflectometry (TDR) is an important technique to obtain series of soil water content measurements in the field. Diode-segmented probes represent an improvement in TDR applicability, allowing measurements of the soil water content profile with a single probe. In this paper we explore an extensive soil water content dataset obtained by tensiometry and TDR from internal drainage experiments in two consecutive years in a tropical soil in Brazil. Comparisons between the variation patterns of the water content estimated by both methods exhibited evidence of deterioration of the TDR system over this two-year period under field conditions. The results showed consistency in the variation pattern for the tensiometry data, whereas TDR estimates were inconsistent, with sensitivity decreasing over time. This suggests that difficulties may arise in the long-term use of this TDR system under tropical field conditions. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
In this paper, we investigate the effects of societal values and life stage on subordinate influence ethics. Based on the evolving crossvergence theory of macro-level predictors of values evolution, we demonstrate the applicability of crossvergence theory in the micro-level context. Furthermore, our study provides the first empirical multi-level analysis of influence ethics utilizing a multiple-country sample. Thus, we illustrate how the breadth of crossvergence can be expanded to provide a multi-level theoretical foundation of values and behavior evolution across cultures. Specifically, we integrate micro-level life stage theory and macro-level societal culture theory to concurrently assess the contributions of each theory in explaining subordinate influence ethics across the diverse societies of Brazil, China, Germany and the U.S. Consistent with previous research, we found significant societal differences in influence ethics. However, we also found that life stage theory played a significant role in understanding influence ethics. Thus, our findings expand the crossvergence perspective on societal change, indicating that key micro-level predictors (e.g., life stage) should be included in cross-cultural research. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
In the assignment game of Shapley and Shubik [Shapley, L.S., Shubik, M., 1972. The assignment game I: The core. International Journal of Game Theory 1, 111-130], agents are allowed to form at most one partnership. That paper proves that, in the context of firms and workers, given two stable payoffs for the firms there is a stable payoff which gives each firm the larger of the two amounts and also one which gives each of them the smaller amount. An analogous result applies to the workers. Sotomayor [Sotomayor, M., 1992. The multiple partners game. In: Majumdar, M. (Ed.), Dynamics and Equilibrium: Essays in Honor to D. Gale. Macmillan, pp. 322-336] extends this analysis to the case where both types of agents may form more than one partnership and an agent's payoff is multi-dimensional. Instead, this note concentrates on the total payoff of the agents. It then proves the rather unexpected result that, again, the maximum of any pair of stable payoffs for the firms is stable, but the minimum need not be, even if we restrict the multiplicity of partnerships to one of the sides. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Stability of matchings was proved to be a new cooperative equilibrium concept in Sotomayor (Dynamics and equilibrium: essays in honor to D. Gale, 1992). That paper introduces the innovation of treating as multi-dimensional the payoff of a player with a quota greater than one. This is done for the many-to-many matching model with additively separable utilities, for which the stability concept is defined. It is then proved, via linear programming, that the set of stable outcomes is nonempty and may be strictly bigger than the set of dual solutions and strictly smaller than the core. The present paper defines a general concept of stability and shows that this concept is a natural solution concept, stronger than the core concept, for a much more general coalitional game than a matching game. Instead of mutual agreements inside partnerships, the players are allowed to make collective agreements inside coalitions of any size and to distribute their labor among them. A collective agreement determines the level of labor at which the coalition operates and the division, among its members, of the income generated by the coalition. An allocation specifies a set of collective agreements for each player.
Abstract:
Starting with an initial price vector, prices are adjusted in order to eliminate the excess demand and, at the same time, to keep the transfers to the sellers as low as possible. In each step of the auction, the key issue in the description of the algorithm is to which set of sellers those transfers should be made. We assume additively separable utilities and introduce a novel distinction by considering multiple sellers owning multiple identical objects and multiple buyers with an exogenously defined quota, consuming more than one object but at most one unit of a seller's good and having multi-dimensional payoffs. This distinction induces a necessarily more complicated construction of the over-demanded sets than the constructions of these sets for the other assignment games. Under this approach, our mechanism yields the buyer-optimal competitive equilibrium payoff, which equals the buyer-optimal stable payoff. The symmetry of the model allows the seller-optimal stable payoff, and then the seller-optimal competitive equilibrium payoff, to be derived as well.
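A classical price-adjustment mechanism in the same spirit, though for the simpler unit-demand assignment market rather than this paper's multi-unit model, is the Bertsekas-style auction. The sketch below assumes equal numbers of buyers and objects; `eps` is the minimum bid increment and controls how close the final prices are to a competitive equilibrium.

```python
def auction_assignment(values, eps=0.25):
    """Bertsekas-style auction for the assignment problem.
    values[i][j] is buyer i's value for object j (square matrix).
    Each unassigned buyer bids on its best object at a price that makes
    it indifferent to the second-best object, plus eps; the displaced
    buyer re-enters the queue.  Returns (object -> buyer map, prices)."""
    n = len(values)
    prices = [0.0] * n
    owner = [None] * n                     # object index -> buyer index
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        gains = [values[i][j] - prices[j] for j in range(n)]
        j = max(range(n), key=lambda k: gains[k])
        best = gains[j]
        second = (max(g for k, g in enumerate(gains) if k != j)
                  if n > 1 else best)
        prices[j] += best - second + eps   # raise the winning price
        if owner[j] is not None:
            unassigned.append(owner[j])    # displaced buyer re-enters
        owner[j] = i
    return owner, prices
```

The resulting assignment is within n*eps of the value-maximizing one, so with integer values and a small enough eps it is exactly optimal.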
Abstract:
This paper analyses the applicability of the main enterprise internationalization theories to the entry of multinational corporations into Brazil, throughout five phases of the Brazilian economy, from 1850 to the present day. It seeks to verify the explanatory power of each theory over the FDI flows into Brazil. It concludes that there is a contingency relation between the theories and the phases of the economy, and it presents this relationship in a table. In addition, it concludes that the most powerful theory along the researched period was Dunning's eclectic paradigm, mainly due to its localization considerations. Theoretical propositions are put forward as a contribution to future research.
Abstract:
Discussions opposing the Theory of the Firm to the Theory of Stakeholders are contemporaneous and polemical. One focal point of such debates refers to which objective function companies should choose, whether that of the shareholders or that of the stakeholders, and whether it is possible to opt for both simultaneously. Several empirical studies have attempted to test a possible correlation between both functions, and there has been no consensus so far. The objective of the present research is to examine a gap in such discussions: is there (or not) a subordination of the stakeholders' objective function to that of the shareholders? The research is empirical and analytical and employs quantitative methods. Hypotheses were tested and data analyzed using non-parametric (chi-square test) and parametric procedures (frequency, correlation coefficient). Secondary data were collected from the Economatica database and from the Brazilian Institute of Social and Economic Analyses (IBASE) website, for public companies that published their Social Balance Statements following the IBASE model from 1999 to 2006, yielding a sample of 65 companies. To assess the shareholders' objective function, a proxy was created based on three indices: ROE (return on equity), Enterprise Value and Tobin's Q. To assess the stakeholders' objective function, a proxy was created from the following IBASE social balance indices: internal (ISI), external (ISE) and environmental (IAM). The results showed no evidence of subordination of the stakeholders' objective function to that of the shareholders in the analyzed companies, negating initial expectations and calling for deeper investigation. This main conclusion, that the hypothesized subordination does not take place, is limited to the sample investigated here and calls for ongoing research aiming at improvements that may lead to sample enlargement and, as a consequence, make feasible the application of other statistical techniques yielding a more thorough analysis of the studied phenomenon.
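For reference, the Pearson chi-square statistic underlying the non-parametric test mentioned above is simple to compute (a generic sketch, not the authors' analysis code):

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over all
    categories.  The statistic is compared against a chi-square
    distribution with the appropriate degrees of freedom."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A value of zero means the observed counts match the expected counts exactly; larger values indicate greater departure from the null hypothesis.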
Abstract:
This note is motivated by some recent papers treating the problem of the existence of a solution for abstract differential equations with fractional derivatives. We show that the existence results in [Agarwal et al. (2009) [1], Belmekki and Benchohra (2010) [2], Darwish et al. (2009) [3], Hu et al. (2009) [4], Mophou and N'Guerekata (2009) [6,7], Mophou (2010) [8,9], Muslim (2009) [10], Pandey et al. (2009) [11], Rashid and El-Qaderi (2009) [12] and Tai and Wang (2009) [13]] are incorrect, since the considered variation of constants formulas are not appropriate. In this note, we also consider a different approach to treating a general class of abstract fractional differential equations. (C) 2010 Elsevier Ltd. All rights reserved.
Coronary CT angiography using 64 detector rows: methods and design of the multi-centre trial CORE-64
Abstract:
Multislice computed tomography (MSCT) for the noninvasive detection of coronary artery stenoses is a promising candidate for widespread clinical application because of its non-invasive nature and high sensitivity and negative predictive value as found in several previous studies using 16 to 64 simultaneous detector rows. A multi-centre study of CT coronary angiography using 16 simultaneous detector rows has shown that 16-slice CT is limited by a high number of nondiagnostic cases and a high false-positive rate. A recent meta-analysis indicated a significant interaction between the size of the study sample and the diagnostic odds ratios suggestive of small study bias, highlighting the importance of evaluating MSCT using 64 simultaneous detector rows in a multi-centre approach with a larger sample size. In this manuscript we detail the objectives and methods of the prospective "CORE-64" trial ("Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography using 64 Detectors"). This multi-centre trial was unique in that it assessed the diagnostic performance of 64-slice CT coronary angiography in nine centres worldwide in comparison to conventional coronary angiography. In conclusion, the multi-centre, multi-institutional and multi-continental trial CORE-64 has great potential to ultimately assess the per-patient diagnostic performance of coronary CT angiography using 64 simultaneous detector rows.
Abstract:
Context: Genetic polymorphisms at the perilipin (PLIN) locus have been investigated for their potential utility as markers for obesity and metabolic syndrome (MS). We examined in obese children and adolescents (OCA) aged 7-14 yr the association of single-nucleotide polymorphisms (SNP) at the PLIN locus with anthropometric and metabolic traits, and with weight loss after a 20-wk multi-disciplinary behavioral and nutritional treatment without medication. Design: A total of 234 OCA (body mass index (BMI) = 30.4 +/- 4.4 kg/m(2); BMI Z-score = 2.31 +/- 0.4) were evaluated at baseline and after intervention. We genotyped four SNPs (PLIN1 6209T -> C, PLIN4 11482G -> A, PLIN5 13041A -> G, and PLIN6 14995A -> T). Results: Allele frequencies were similar to other populations, and PLIN1 and PLIN4 were in linkage disequilibrium (D' = 0.999; P < 0.001). At baseline, no anthropometric differences were observed, but the minor allele A at PLIN4 was associated with higher triglycerides (111 +/- 49 vs. 94 +/- 42 mg/dl; P = 0.003), lower high-density lipoprotein cholesterol (40 +/- 9 vs. 44 +/- 10 mg/dl; P = 0.003) and higher homeostasis model assessment for insulin resistance (4.0 +/- 2.3 vs. 3.5 +/- 2.1; P = 0.015). The minor allele A at PLIN4 was associated with MS risk (age- and sex-adjusted), with hazard ratio 2.4 (95% confidence interval = 1.1-4.9) for genotype GA and 3.5 (95% confidence interval = 1.2-9.9) for AA. After intervention, subjects carrying the minor allele T at PLIN6 had greater weight loss (3.3 +/- 3.7 vs. 1.9 +/- 3.4 kg; P = 0.002) and a greater reduction in BMI Z-score (0.23 +/- 0.18 vs. 0.18 +/- 0.15; P = 0.003). Due to group size, the risk of by-chance findings cannot be excluded. Conclusion: The minor A allele at PLIN4 was associated with higher risk of MS at baseline, whereas the PLIN6 SNP was associated with better weight loss, suggesting that these polymorphisms may predict the outcome of strategies based on multidisciplinary treatment for OCA. (J Clin Endocrinol Metab 93: 4933-4940, 2008)
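The normalized linkage disequilibrium measure D' reported above can be computed from allele and haplotype frequencies as follows (a generic textbook formula, not the study's genotyping pipeline):

```python
def d_prime(pA, pB, pAB):
    """Signed normalized linkage disequilibrium D' between two loci.
    pA, pB are the frequencies of allele A at locus 1 and allele B at
    locus 2; pAB is the frequency of the A-B haplotype.  D = pAB - pA*pB
    is scaled by its maximum attainable magnitude given the allele
    frequencies, so |D'| = 1 means complete LD."""
    D = pAB - pA * pB
    if D > 0:
        dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        dmax = min(pA * pB, (1 - pA) * (1 - pB))
    return D / dmax if dmax else 0.0
```

For example, two loci with allele frequencies 0.5 whose alleles always co-occur (pAB = 0.5) are in complete LD, while pAB = 0.25 corresponds to independence.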
Abstract:
Aim: To look at the characteristics of the Postgraduate Hospital Educational Environment Measure (PHEEM) using data from the UK, Brazil, Chile and the Netherlands, and to examine the reliability and characteristics of PHEEM, especially how the three PHEEM subscales fit with factors derived statistically from the data sets. Methods: Statistical analysis of PHEEM scores from 1563 sets of data, using reliability analysis, exploratory factor analysis and correlations of the derived factors with the three defined PHEEM subscales. Results: PHEEM was very reliable, with an overall Cronbach's alpha of 0.928. Three factors were derived by exploratory factor analysis. Factor One correlated most strongly with the teaching subscale (R=0.802), Factor Two correlated most strongly with the role autonomy subscale (R=0.623) and Factor Three correlated most strongly with the social support subscale (R=0.538). Conclusions: PHEEM is a multi-dimensional instrument. Overall, it is very reliable. There is a good fit of the three defined subscales, derived by qualitative methods, with the three principal factors derived from the data by exploratory factor analysis.
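Cronbach's alpha, the reliability coefficient reported above, can be computed directly from item scores (a generic sketch, not the authors' analysis code):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (each column is
    one item's scores across respondents):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using population variances.  Assumes at least two items and a
    non-degenerate total-score variance."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[r] for col in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

When every item gives identical scores the items are perfectly consistent and alpha equals 1; partially consistent items give values between 0 and 1.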