466 results for Maximization
Abstract:
In this paper we study decision making in situations where the individual's preferences are not assumed to be complete. First, we identify conditions that are necessary and sufficient for choice behavior in general domains to be consistent with maximization of a possibly incomplete preference relation. In this model of maximally dominant choice, the agent defers/avoids choosing at those and only those menus where a most preferred option does not exist. This allows for simple explanations of conflict-induced deferral and choice overload. It also suggests a criterion for distinguishing between indifference and incomparability based on observable data. A simple extension of this model also incorporates decision costs and provides a theoretical framework that is compatible with the experimental design that we propose for eliciting possibly incomplete preferences in the lab. The design builds on the introduction of monetary costs that induce choice of a most preferred feasible option if one exists and deferral otherwise. Based on this design we found evidence suggesting that a quarter of the subjects in our study had incomplete preferences, and that these subjects made significantly more consistent choices than a group of subjects who were forced to choose. The latter effect, however, is mitigated once data on indifferences are accounted for.
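To make the maximally dominant choice rule concrete, here is a minimal sketch in Python, assuming the possibly incomplete preference relation is given as a set of ordered pairs; the names `choose_or_defer` and `prefers` are illustrative, not from the paper.

```python
def choose_or_defer(menu, prefers):
    """Return a maximally dominant option from `menu`, or None to model deferral.

    `prefers(x, y)` returns True if x is weakly preferred to y; the relation
    may be incomplete, i.e. neither prefers(x, y) nor prefers(y, x) may hold.
    """
    for x in menu:
        # x is maximally dominant if it is weakly preferred to every rival
        if all(prefers(x, y) for y in menu if y != x):
            return x
    return None  # no most preferred option exists -> defer / avoid choice


# Illustrative incomplete relation: a and b are incomparable, both beat c.
order = {("a", "c"), ("b", "c"), ("a", "a"), ("b", "b"), ("c", "c")}
prefers = lambda x, y: (x, y) in order

print(choose_or_defer(["a", "c"], prefers))       # -> 'a'
print(choose_or_defer(["a", "b", "c"], prefers))  # -> None (deferral)
```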
Abstract:
This paper considers a long-term relationship between two agents who both undertake a costly action or investment that together produces a joint benefit. Agents have an opportunity to expropriate some of the joint benefit for their own use. Two cases are considered: (i) where agents are risk neutral and are subject to limited liability constraints and (ii) where agents are risk averse, have quasi-linear preferences in consumption and actions but where limited liability constraints do not bind. The question asked is how to structure the investments and division of the surplus over time so as to avoid expropriation. In the risk-neutral case, there may be an initial phase in which one agent overinvests and the other underinvests. However, both actions and surplus converge monotonically to a stationary state in which there is no overinvestment and surplus is at its maximum subject to the constraints. In the risk-averse case, there is no overinvestment. For this case, we establish that dynamics may or may not be monotonic depending on whether or not it is possible to sustain a first-best allocation. If the first-best allocation is not sustainable, then there is a trade-off between risk sharing and surplus maximization. In general, surplus will not be at its constrained maximum even in the long run.
Abstract:
Interaction, the act of mutual influence between two or more individuals, is an essential part of daily life and economic decisions. Yet, the micro-foundations of interaction remain unexplored. This paper presents a first attempt in this direction. We study a decision procedure for interacting agents. According to our model, interaction occurs because individuals seek influence on those issues that they cannot resolve on their own. Following a choice-theoretic approach, we provide simple properties that help detect interacting individuals. In this case, revealed preference analysis recovers not only the underlying preferences but also the influence acquired. Our baseline model considers two interacting individuals, though we extend the analysis to multi-individual environments.
Abstract:
Motivation. The study of human brain development in its early stage is today possible thanks to in vivo fetal magnetic resonance imaging (MRI) techniques. A quantitative analysis of the fetal cortical surface represents a new approach which can be used as a marker of cerebral maturation (such as gyration) and also for studying central nervous system pathologies [1]. However, this quantitative approach is a major challenge for several reasons. First, movement of the fetus inside the amniotic cavity requires very fast MRI sequences to minimize motion artifacts, resulting in a poor spatial resolution and/or lower SNR. Second, due to the ongoing myelination and cortical maturation, the appearance of the developing brain differs very much from the homogeneous tissue types found in adults. Third, due to low resolution, fetal MR images suffer considerably from partial volume (PV) effects, sometimes in large areas. Today extensive efforts are made to deal with the reconstruction of high-resolution 3D fetal volumes [2,3,4] to cope with intra-volume motion and low SNR. However, few studies exist related to the automated segmentation of fetal MR imaging. [5] and [6] work on the segmentation of specific areas of the fetal brain such as the posterior fossa, brainstem or germinal matrix. A first attempt at automated brain tissue segmentation was presented in [7] and in our previous work [8]. Both methods apply the Expectation-Maximization Markov Random Field (EM-MRF) framework but, contrary to [7], we do not need any anatomical atlas prior. Data set & Methods. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). Each fetus has 6 axial volumes (around 15 slices per volume), each of them acquired in about 1 min. Each volume is shifted by 1 mm with respect to the previous one. Gestational age (GA) ranges from 29 to 32 weeks. The mother is under sedation. Each volume is manually segmented to extract the fetal brain from surrounding maternal tissues. Then, inhomogeneity intensity correction is performed using [9] and linear intensity normalization is applied so that intensity values range from 0 to 255. Note that due to intra-tissue variability of the developing brain some intensity variability still remains. For each fetus, a high-spatial-resolution image with an isotropic voxel size of 1.09 mm is created applying [2] and using B-splines for the scattered data interpolation [10] (see Fig. 1). Then, basal ganglia (BG) segmentation is performed on this super-reconstructed volume. An active contour framework with a Level Set (LS) implementation is used. Our LS follows a slightly different formulation from the well-known Chan-Vese formulation [11]. In our case, the LS evolves forcing the mean of the inside of the curve to be the mean intensity of the basal ganglia. Moreover, we add a local spatial prior through a probabilistic map created by fitting an ellipsoid onto the basal ganglia region. Some user interaction is needed to set the mean intensity of BG (green dots in Fig. 2) and the initial fitting points for the probabilistic prior map (blue points in Fig. 2). Once the basal ganglia are removed from the image, brain tissue segmentation is performed as described in [8]. Results. The case study presented here has 29 weeks of GA. The high-resolution reconstructed volume is presented in Fig. 1. The steps of BG segmentation are shown in Fig. 2.
Overlap in comparison with manual segmentation is quantified by the Dice similarity index (DSI), equal to 0.829 (values above 0.7 are considered a very good agreement). This BG segmentation has been applied to 3 other subjects ranging from 29 to 32 weeks GA, and the DSI was 0.856, 0.794 and 0.785. Our segmentation of the inner (red and blue contours) and outer cortical surface (green contour) is presented in Fig. 3. Finally, to refine the results we include our WM segmentation in the FreeSurfer software [12], together with some manual corrections, to obtain Fig. 4. Discussion. Precise cortical surface extraction of the fetal brain is needed for quantitative studies of early human brain development. Our work combines the well-known statistical classification framework with active contour segmentation for central gray matter extraction. A main advantage of the presented procedure for fetal brain surface extraction is that we do not include any spatial prior coming from anatomical atlases. The results presented here are preliminary but promising. Our efforts are now directed at testing this approach on a wider range of gestational ages, to be included in the final version of this work, and at studying its generalization to different scanners and different types of MRI sequences. References. [1] Guibaud, Prenatal Diagnosis 29(4), 2009. [2] Rousseau, Acad. Rad. 13(9), 2006. [3] Jiang, IEEE TMI, 2007. [4] Warfield, IADB, MICCAI 2009. [5] Claude, IEEE Trans. Bio. Eng. 51(4), 2004. [6] Habas, MICCAI (Pt. 1), 2008. [7] Bertelsen, ISMRM 2009. [8] Bach Cuadra, IADB, MICCAI 2009. [9] Styner, IEEE TMI 19(3), 2000. [10] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3), 1997. [11] Chan, IEEE Trans. Img. Proc. 10(2), 2001. [12] FreeSurfer, http://surfer.nmr.mgh.harvard.edu.
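As background for the two quantitative ingredients used above, the following sketch gives a Chan-Vese-style energy with the inside mean pinned to a user-supplied basal-ganglia intensity c_BG (an assumption about the exact modification, which the abstract only describes verbally), together with the standard definition of the Dice similarity index.

```latex
% Chan-Vese-type level-set energy, with the inside mean fixed to c_BG
% (a sketch of the modification described in the text, not the authors' exact functional)
E(\phi) = \mu \,\mathrm{Length}(\phi = 0)
        + \lambda_1 \int_{\phi > 0} \bigl(I(\mathbf{x}) - c_{BG}\bigr)^2 \, d\mathbf{x}
        + \lambda_2 \int_{\phi < 0} \bigl(I(\mathbf{x}) - c_2\bigr)^2 \, d\mathbf{x}

% Dice similarity index between automatic segmentation A and manual segmentation M
\mathrm{DSI}(A, M) = \frac{2\,|A \cap M|}{|A| + |M|}
```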
Abstract:
Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers. Citation: Sandbach, S. D., et al. (2012), Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers, Water Resour. Res., 48, W12501, doi:10.1029/2011WR011284.
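For readers unfamiliar with roughness-length treatments, the quantity being specified is the z_0 in the standard logarithmic law of the wall; the generic form is shown below (the paper's particular closure may differ), where z_0 absorbs the drag of unresolved, "unmeasured" bed topography.

```latex
% Law of the wall: near-bed velocity profile parameterized by a roughness length z_0
u(z) = \frac{u_*}{\kappa} \ln\!\left(\frac{z}{z_0}\right), \qquad \kappa \approx 0.41,
```

where u_* is the shear velocity and kappa the von Karman constant; a larger z_0 implies greater parameterized energy loss for a given near-bed velocity.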
Abstract:
Coronary artery calcification (CAC) is quantified based on a computed tomography (CT) scan image. A calcified region is identified. Modified expectation maximization (MEM) of a statistical model for the calcified and background material is used to estimate the partial calcium content of the voxels. The algorithm limits the region over which MEM is performed. By using MEM, the statistical properties of the model are iteratively updated based on the calculated resultant calcium distribution from the previous iteration. The estimated statistical properties are used to generate a map of the partial calcium content in the calcified region. The volume of calcium in the calcified region is determined based on the map. The experimental results on a cardiac phantom, scanned 90 times using 15 different protocols, demonstrate that the proposed method is less sensitive to partial volume effect and noise, with an average error of 9.5% (standard deviation (SD) of 5-7 mm³) compared with 67% (SD of 3-20 mm³) for conventional techniques. The high reproducibility of the proposed method for 35 patients, scanned twice using the same protocol at a minimum interval of 10 min, shows that the method provides 2-3 times lower interscan variation than conventional techniques.
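The abstract describes an EM-style iteration in which class statistics are re-estimated from the current soft calcium assignment. The sketch below illustrates that generic idea with a plain two-class Gaussian EM in Python; it is not the authors' modified EM (MEM), and the initialization and class model are illustrative assumptions.

```python
import numpy as np

def em_partial_content(intensities, n_iter=50):
    """Generic two-class Gaussian EM (background vs. calcium) on voxel intensities.

    A sketch of the EM idea only, not the paper's MEM algorithm: class statistics
    are re-estimated each iteration from the soft assignments, and the final
    posterior is read as a partial-calcium map.
    """
    x = np.asarray(intensities, dtype=float)
    mu = np.array([np.percentile(x, 25), np.percentile(x, 90)])  # crude init
    sigma = np.array([x.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: posterior probability of each class for each voxel
        lik = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
            for k in range(2)
        ])
        post = lik / lik.sum(axis=0, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = post.sum(axis=1)
        pi = nk / nk.sum()
        mu = (post * x).sum(axis=1) / nk
        sigma = np.sqrt((post * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6

    return post[1]  # soft "calcium fraction" per voxel


# toy example: background (~50 HU-like units) plus a small calcified cluster (~300)
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(50, 10, 500), rng.normal(300, 30, 50)])
calcium_volume = em_partial_content(voxels).sum()  # in voxel units
```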
Abstract:
In the PhD thesis "Sound Texture Modeling" we deal with statistical modelling of textural sounds like water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modeling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of the more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows one to sonically explore a database of units by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows capturing low-level correlations in perceptual audio features, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds. Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
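As an illustration of the kind of multi-scale coefficient sequence such a model is trained on, here is a minimal Haar wavelet-tree decomposition in plain NumPy; the thesis's actual wavelet, tree structure, and hidden Markov tree training are not reproduced here.

```python
import numpy as np

def haar_tree(signal, levels):
    """Decompose a 1-D signal into a tree of Haar detail coefficients.

    Illustrative only: returns one array of detail coefficients per level,
    coarse to fine, which is the kind of multi-scale sequence a hidden
    Markov tree model could be trained on via expectation maximization.
    """
    x = np.asarray(signal, dtype=float)
    tree = []
    for _ in range(levels):
        x = x[: len(x) - len(x) % 2]              # ensure even length
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        tree.append(detail)
        x = approx                                # recurse on the approximation
    return tree[::-1]                             # coarse-to-fine ordering


# toy "texture": filtered noise standing in for a water/rain-like sound
rng = np.random.default_rng(1)
audio = np.convolve(rng.normal(size=4096), np.ones(8) / 8, mode="same")
for level, coeffs in enumerate(haar_tree(audio, levels=5)):
    print(level, coeffs.shape, float(coeffs.std()))
```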
Abstract:
Given the very large amount of data obtained every day through population surveys, much new research could use this information instead of collecting new samples. Unfortunately, relevant data are often scattered across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that despite the lack of data, this procedure can perform better than standard matching procedures.
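One way such a combination can look in practice is sketched below: a logistic regression refit inside an EM-style loop, where the target variable is observed only in the small file and soft-imputed in the large one. The setup (single binary target, the name `fuse_em`, the weighting scheme) is an illustrative assumption, not the article's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_em(X_small, y_small, X_large, n_iter=20):
    """EM-style data fusion: the target y is observed only in the small file.

    E-step: predict P(y=1 | x) for the large file with the current model.
    M-step: refit the model on both files, weighting the large file's rows
    by those posterior probabilities (each row enters once per class).
    """
    model = LogisticRegression(max_iter=1000).fit(X_small, y_small)
    for _ in range(n_iter):
        p = model.predict_proba(X_large)[:, 1]              # E-step
        X_all = np.vstack([X_small, X_large, X_large])
        y_all = np.concatenate([y_small,
                                np.ones(len(X_large)),
                                np.zeros(len(X_large))])
        w_all = np.concatenate([np.ones(len(y_small)), p, 1.0 - p])
        model = LogisticRegression(max_iter=1000)
        model.fit(X_all, y_all, sample_weight=w_all)         # M-step
    return model


# toy data: 80 donor rows with y observed, 2000 recipient rows without y
rng = np.random.default_rng(2)
X_small = rng.normal(size=(80, 3))
y_small = (X_small[:, 0] + 0.5 * rng.normal(size=80) > 0).astype(int)
X_large = rng.normal(size=(2000, 3))
fused = fuse_em(X_small, y_small, X_large)
```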
Abstract:
Molecular Quantum Similarity Measures (MQSM) require maximizing the overlap of the electron densities of the molecules being compared. In this work we present a maximization algorithm for MQSM that is global in the limit of electron densities deformed into Dirac delta functions. From this algorithm, the equivalent algorithm for undeformed densities is derived.
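For context, the overlap-type molecular quantum similarity measure whose maximization over the relative placement of the two molecules is meant here is commonly written as below; the thesis's exact definition (for instance with a positive-definite operator between the densities) may differ.

```latex
% Overlap molecular quantum similarity measure between molecules A and B
Z_{AB}(\mathbf{R}) = \int \rho_A(\mathbf{r})\, \rho_B(\mathbf{r};\mathbf{R})\, d\mathbf{r},
\qquad
Z_{AB}^{\max} = \max_{\mathbf{R}} Z_{AB}(\mathbf{R}),
```

where R denotes the relative translation and rotation applied to molecule B.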
Abstract:
Institutional and organizational variety is increasingly characterizing advanced economic systems. While traditional economic theories have focused almost exclusively on profit-maximizing (i.e., for-profit) enterprises and on publicly-owned organizations, the increasing relevance of non-profit organizations, and especially of social enterprises, requires scientists to reflect on a new comprehensive economic approach for explaining this organizational variety. This paper examines the main limitations of the orthodox and institutional theories and asserts the need for creating and testing a new theoretical framework, which considers the way in which diverse enterprises pursue their goals, the diverse motivations driving actors and organizations, and the different learning patterns and routines within organizations. The new analytical framework proposed in the paper draws upon recent developments in the theories of the firm, mainly of an evolutionary and behavioral kind. The firm is interpreted as a coordination mechanism of economic activity, and one whose objectives need not coincide with profit maximization. On the other hand, economic agents driven by motivational complexity and intrinsic, non-monetary motivation play a crucial role in forming firm activity over and above purely monetary and financial objectives. The new framework is thought to be particularly suitable to correctly interpret the emergence and role of nontraditional organizational and ownership forms that are not driven by the profit motive (non-profit organizations), mainly recognized in the legal forms of cooperative firms, non-profit organizations and social enterprises. A continuum of organizational forms ranging from profit making activities to public benefit activities, and encompassing mutual benefit organizations as its core constituent, is envisaged and discussed.
Information overload, choice deferral, and moderating role of need for cognition: Empirical evidence
Abstract:
Choice deferral due to information overload is an undesirable result of competitive environments. Neoclassical maximization models predict that choice avoidance will not increase as more information is offered to consumers. The theories developed in the consumer behavior field predict that some properties of the environment may lead to behavioral effects and an increase in choice avoidance due to information overload. Based on stimuli generated experimentally and tested among 1,000 consumers, this empirical research provides evidence for the presence of behavioral effects due to information overload and reveals the different effects of increasing the number of options or the number of attributes. This study also finds that the need for cognition moderates these behavioral effects, and it proposes psychological processes that may trigger the effects observed.
Abstract:
Individual-as-maximizing-agent analogies result in a simple understanding of the functioning of the biological world. Identifying the conditions under which individuals can be regarded as fitness-maximizing agents is thus of considerable interest to biologists. Here, we compare different concepts of fitness maximization, and discuss within a single framework the relationship between Hamilton's (J Theor Biol 7: 1-16, 1964) model of social interactions, Grafen's (J Evol Biol 20: 1243-1254, 2007a) formal Darwinism project, and the idea of evolutionarily stable strategies. We distinguish cases where phenotypic effects are additively separable or not, the latter not being covered by Grafen's analysis. In both cases it is possible to define a maximand, in the form of an objective function phi(z), whose argument is the phenotype of an individual and whose derivative is proportional to Hamilton's inclusive fitness effect. However, this maximand can be identified with the expression for fecundity or fitness only in the case of additively separable phenotypic effects, making individual-as-maximizing-agent analogies unattractive (although formally correct) under general situations of social interactions. We also feel that there is an inconsistency in Grafen's characterization of the solution of his maximization program by use of inclusive fitness arguments. His results are in conflict with those on evolutionarily stable strategies obtained by applying inclusive fitness theory, and can be repaired only by changing the definition of the problem.
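In the simplest textbook setting (weak selection, marginal costs and benefits, pairwise social interactions), the statement that the maximand's derivative is proportional to the inclusive fitness effect can be written as below; this is a heavily simplified sketch, not the authors' general expression.

```latex
% Marginal inclusive fitness effect of a small change in the phenotype z:
% c(z) is the direct fitness cost, b(z) the benefit to social partners, r the relatedness
\frac{d\phi(z)}{dz} \;\propto\; -\,c'(z) + r\, b'(z),
```

so selection favours an increase in z whenever r b'(z) > c'(z), i.e. Hamilton's rule in marginal form.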
Abstract:
European and American nuclear cardiology guidelines are not specific about the choice of the best image reconstruction parameters to use in Myocardial Perfusion Scintigraphy (MPS). The present study therefore aimed to establish and compare the effect of the quantitative parameters of two reconstruction methods: Filtered Back Projection (FBP) and Ordered-Subset Expectation Maximization (OSEM). Methods: A cardiac phantom with known end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) was used. The Quantitative Gated SPECT/Quantitative Perfusion SPECT software was used in semi-automatic mode to obtain these quantitative parameters. A Butterworth filter was used in FBP with cut-off frequencies between 0.2 and 0.8 cycles/pixel combined with orders of 5, 10, 15 and 20. In the OSEM reconstruction, 2, 4, 6, 8, 10, 12 and 16 subsets were used, combined with 2, 4, 6, 8, 10, 12, 16, 32 and 64 iterations. During the OSEM reconstruction, another reconstruction was performed based on the number of equivalent Expectation-Maximization (EM) iterations: 12, 14, 16, 18, 20, 22, 26, 28, 30 and 32. Results: After FBP reconstruction, the EDV and ESV values increased with increasing cut-off frequency, while the LVEF value decreased. The same pattern was observed with the OSEM reconstruction. However, OSEM provides a more accurate estimate of the quantitative parameters, especially with the combinations of 2 iterations × 10 subsets and 12 subsets × 2 iterations. Conclusion: OSEM reconstruction provides a better estimate of the quantitative parameters and better image quality than FBP reconstruction. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM reconstruction, and a cut-off frequency of 0.5 cycles/pixel with orders 5, 10 or 15 for FBP reconstruction, as the best estimate for LVEF quantification in MPS.
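For reference, the generic MLEM/OSEM update underlying the reconstructions compared here is shown below; OSEM restricts each update to one subset S_n of projections, so one pass over S subsets behaves roughly like S full EM iterations, which is why equivalent EM iteration counts are also reported. The formula is the standard one and is independent of the particular software used in the study.

```latex
% MLEM update for voxel activity \lambda_j, system matrix a_{ij}, measured counts y_i.
% OSEM applies the same update but restricts the sums over i to one subset S_n per sub-iteration.
\lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{\sum_{i \in S_n} a_{ij}}
\sum_{i \in S_n} a_{ij}\, \frac{y_i}{\sum_{k} a_{ik}\, \lambda_k^{(n)}}
```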
Abstract:
For single-user MIMO communication with uncoded and coded QAM signals, we propose bit and power loading schemes that rely only on channel distribution information at the transmitter. To that end, we develop the relationship between the average bit error probability at the output of a ZF linear receiver and the bit rates and powers allocated at the transmitter. This relationship, and the fact that a ZF receiver decouples the MIMO parallel channels, allow leveraging bit loading algorithms already existing in the literature. We solve dual bit rate maximization and power minimization problems and present performance results that illustrate the gains of the proposed scheme with respect to a non-optimized transmission.
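The relationship exploited above rests on two standard textbook facts for a zero-forcing receiver and Gray-coded square M-QAM, sketched below; the paper's averaging over the channel distribution is not reproduced here.

```latex
% Post-detection SNR of stream k after zero forcing, with transmit power p_k,
% noise power N_0 and channel matrix H:
\gamma_k \;=\; \frac{p_k}{N_0 \,\bigl[(\mathbf{H}^{H}\mathbf{H})^{-1}\bigr]_{kk}}

% Approximate bit error probability of square M_k-QAM on that stream:
P_b(k) \;\approx\; \frac{4}{\log_2 M_k}\left(1 - \frac{1}{\sqrt{M_k}}\right)
Q\!\left(\sqrt{\frac{3\,\gamma_k}{M_k - 1}}\right)
```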
Abstract:
Forensic science casework involves making a series of choices. The difficulty in making these choices lies in the inevitable presence of uncertainty, the unique context of circumstances surrounding each decision and, in some cases, the complexity due to numerous, interrelated random variables. Given that these decisions can lead to serious consequences in the administration of justice, forensic decision making should be supported by a robust framework that makes inferences under uncertainty and decisions based on these inferences. The objective of this thesis is to respond to this need by presenting a framework for making rational choices in decision problems encountered by scientists in forensic science laboratories. Bayesian inference and decision theory meet the requirements for such a framework. To attain its objective, this thesis consists of three propositions, advocating the use of (1) decision theory, (2) Bayesian networks, and (3) influence diagrams for handling forensic inference and decision problems. The results present a uniform and coherent framework for making inferences and decisions in forensic science using the above theoretical concepts. They describe how to organize each type of problem by breaking it down into its different elements, and how to find the most rational course of action by distinguishing between one-stage and two-stage decision problems and applying the principle of expected utility maximization. To illustrate the framework's application to the problems encountered by scientists in forensic science laboratories, theoretical case studies apply decision theory, Bayesian networks and influence diagrams to a selection of different types of inference and decision problems dealing with different categories of trace evidence. Two studies of the two-trace problem illustrate how the construction of Bayesian networks can handle complex inference problems, and thus overcome the hurdle of complexity that can be present in decision problems. Three studies - one on what to conclude when a database search provides exactly one hit, one on what genotype to search for in a database based on the observations made on DNA typing results, and one on whether to submit a fingermark to the process of comparing it with prints of its potential sources - explain the application of decision theory and influence diagrams to each of these decisions. The results of the theoretical case studies support the thesis's three propositions. Hence, this thesis presents a uniform framework for organizing and finding the most rational course of action in decision problems encountered by scientists in forensic science laboratories. The proposed framework is an interactive and exploratory tool for better understanding a decision problem so that this understanding may lead to better informed choices.
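The principle of expected utility maximization invoked above has the generic Bayesian form shown below; the thesis's own notation may differ.

```latex
% Choose the action a that maximizes expected utility given the evidence E,
% where U(a, \theta) is the utility of action a under state \theta
a^{*} \;=\; \arg\max_{a \in \mathcal{A}} \; \sum_{\theta \in \Theta} U(a, \theta)\, \Pr(\theta \mid E)
```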