930 results for Symmetric


Abstract:

Dissertation presented to obtain the PhD degree in Biology

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics

Abstract:

Dissertation presented to obtain the Master's degree in Molecular Genetics and Biomedicine

Abstract:

Economics is a social science and, as such, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time there was a convergence between the normative and positive views of human behavior, in that the ideal and the predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. In the last decades, however, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This shift was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways.

Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature that presents evidence of actual behavior differing from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set available to him in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies.

Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of the thinking cost there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003).

Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller party may also win, provided its relative size is not too small; more self-delusion among voters in the minority party decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
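To make the thinking-cost mechanism of Chapter 2 concrete, here is a minimal numerical sketch. Everything in it is an illustrative assumption rather than material taken from the thesis: linear inverse demand p = a - (q1 + q2), constant marginal cost c, a thinking cost k paid only by a firm that optimizes, and a hypothetical default quantity q_d played by a firm that does not think.

```python
# Minimal sketch of a thinking-cost Cournot duopoly (illustrative assumptions only).

def profit(q_own, q_other, a=10.0, c=1.0):
    return (a - c - q_own - q_other) * q_own

def best_response(q_other, a=10.0, c=1.0):
    return max((a - c - q_other) / 2.0, 0.0)

def equilibria(k, q_d, a=10.0, c=1.0, tol=1e-9):
    """List which pure-strategy profiles survive as Nash equilibria."""
    found = []

    # Both firms think: standard Cournot-Nash quantities, each paying k.
    q_star = (a - c) / 3.0
    if profit(q_star, q_star, a, c) - k >= profit(q_d, q_star, a, c) - tol:
        found.append("both think")

    # Asymmetric profile: firm 1 plays the default q_d, firm 2 best responds.
    q_br = best_response(q_d, a, c)
    thinker_ok = (profit(q_br, q_d, a, c) - k
                  >= profit(q_d, q_d, a, c) - tol)                 # stop thinking?
    defaulter_ok = (profit(q_d, q_br, a, c)
                    >= profit(best_response(q_br, a, c), q_br, a, c) - k - tol)  # start thinking?
    if thinker_ok and defaulter_ok:
        found.append("one defaults, one thinks")

    # Both firms play the default quantity and avoid the thinking cost.
    if profit(q_d, q_d, a, c) >= profit(best_response(q_d, a, c), q_d, a, c) - k - tol:
        found.append("both default")

    return found

if __name__ == "__main__":
    q_d = 2.0  # hypothetical "most natural" quantity
    for k in (0.0, 0.8, 2.0, 3.0):
        print(f"thinking cost k = {k}: {equilibria(k, q_d)}")
```

With these illustrative numbers the script reports only the symmetric thinking equilibrium for k = 0, both the symmetric and the asymmetric profile for k = 0.8, only the asymmetric profile for k = 2.0, and only the no-thinking profile for k = 3.0, reproducing the qualitative pattern described in the abstract: asymmetric choices in a symmetric model arise for intermediate thinking costs.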

Abstract:

We intend to study the algebraic structure of simple orthogonal models in order to use them, through binary operations, as building blocks in the construction of more complex orthogonal models. We start by presenting some matrix results concerning Commutative Jordan Algebras of symmetric matrices, CJAs. Next, we use these results to study the algebraic structure of orthogonal models obtained by crossing and nesting simpler ones. Then, we study normal models with OBS (Orthogonal Block Structure), NOBS (Normal Orthogonal Block Structure), which can also be orthogonal models, obtaining conditions for the existence of complete and sufficient statistics and of UMVUE, i.e., unbiased estimators with minimal covariance matrices whatever the variance components. Lastly, following Pereira et al. (2014), we study the algebraic structure of orthogonal models: mixed models whose variance-covariance matrices are all positive semi-definite linear combinations of known orthogonal pairwise orthogonal projection matrices, OPOPM, and whose least squares estimators, LSE, of estimable vectors are best linear unbiased estimators, BLUE, whatever the variance components, so that they are uniformly BLUE, UBLUE. From these results on the algebraic structure we obtain explicit expressions for the LSE of these models.
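For readers unfamiliar with the acronyms, the following display is a minimal notational sketch of the OBS/UBLUE setup referred to above, written in the form standard in this literature; the symbols y, X, β, e, Q_j, γ_j and T are notation introduced here rather than taken from the abstract.

```latex
% Mixed linear model with orthogonal block structure (OBS):
\[
  y = X\beta + e, \qquad
  \operatorname{Cov}(y) = V(\gamma) = \sum_{j=1}^{m} \gamma_j Q_j , \qquad \gamma_j \ge 0 ,
\]
% where the Q_j are known, pairwise orthogonal, orthogonal projection matrices:
\[
  Q_j = Q_j^{\top} = Q_j^{2}, \qquad
  Q_j Q_{j'} = \mathbf{0} \ (j \neq j'), \qquad
  \sum_{j=1}^{m} Q_j = I_n .
\]
% When, in addition, the orthogonal projector T onto the range of X commutes with
% every Q_j, the least squares estimators of estimable vectors are BLUE for all
% values of the variance components, i.e., they are UBLUE.
```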

Abstract:

A search for heavy Majorana neutrinos in events containing a pair of high-pT leptons of the same charge and high-pT jets is presented. The search uses 20.3 fb⁻¹ of pp collision data collected with the ATLAS detector at the CERN Large Hadron Collider with a centre-of-mass energy of √s = 8 TeV. The data are found to be consistent with the background-only hypothesis based on the Standard Model expectation. In the context of a Type-I seesaw mechanism, limits are set on the production cross-section times branching ratio for production of heavy Majorana neutrinos in the mass range between 100 and 500 GeV. The limits are subsequently interpreted as limits on the mixing between the heavy Majorana neutrinos and the Standard Model neutrinos. In the context of a left-right symmetric model, limits on the production cross-section times branching ratio are set with respect to the masses of heavy Majorana neutrinos and heavy gauge bosons W_R and Z′.

Abstract:

For any vacuum initial data set, we define a local, non-negative scalar quantity which vanishes at every point of the data hypersurface if and only if the data are Kerr initial data. Our scalar quantity only depends on the quantities used to construct the vacuum initial data set, which are the Riemannian metric defined on the initial data hypersurface and a symmetric tensor which plays the role of the second fundamental form of the embedded initial data hypersurface. The dependence is algorithmic in the sense that, given the initial data, one can compute the scalar quantity by algebraic and differential manipulations, which makes it suitable for implementation in a numerical code. The scalar could also be useful in studies of the non-linear stability of the Kerr solution because it serves to measure the deviation of a vacuum initial data set from Kerr initial data in a local and algorithmic way.
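As background for the abstract above: a vacuum initial data set is commonly taken to be a triple (Σ, h, K), where h is the Riemannian metric and K the symmetric tensor playing the role of the second fundamental form, subject to the vacuum constraint equations. A minimal reminder, in one common sign convention (not quoted from the paper itself):

```latex
% Vacuum constraint equations on the initial data hypersurface (\Sigma, h, K):
\[
  R_h + (\operatorname{tr}_h K)^2 - |K|_h^2 = 0
  \qquad \text{(Hamiltonian constraint)},
\]
\[
  \operatorname{div}_h K - \mathrm{d}(\operatorname{tr}_h K) = 0
  \qquad \text{(momentum constraint)},
\]
% where R_h is the scalar curvature of h. A scalar built algorithmically from h,
% K and their covariant derivatives, as described above, can be evaluated
% pointwise in a numerical code.
```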

Abstract:

We study the low frequency absorption cross section of spherically symmetric nonextremal d-dimensional black holes. In the presence of α′ corrections, this quantity must have an explicit dependence on the Hawking temperature of the form 1/T_H. This property of the low frequency absorption cross section is shared by the D1-D5 system from type IIB superstring theory already at the classical level, without α′ corrections. We apply our formula to the simplest example, the classical d-dimensional Reissner-Nordström solution, checking that the obtained formula for the cross section has a smooth extremal limit. We also apply it to a d-dimensional Tangherlini-like solution with α′³ corrections.
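For context, the classical result that this abstract builds on (a standard fact for spherically symmetric black holes, not specific to this paper) is that the low-frequency absorption cross section of a minimally coupled massless scalar equals the horizon area:

```latex
\[
  \sigma_{\mathrm{abs}}(\omega \to 0) \;=\; A_H ,
\]
% with A_H the horizon area (the Das-Mathur universality result). According to
% the abstract, once \alpha' corrections are included, the cross section acquires
% an explicit dependence on the Hawking temperature of the form 1/T_H.
```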

Abstract:

Doctoral thesis in Plant Biology.

Abstract:

NIPE WP 04/2016

Abstract:

Advances in computing power today come from parallel processing, given the features of current hardware architectures. Using this hardware appropriately accelerates the algorithms being executed (programs). However, properly converting an algorithm into its parallel form is complex and, moreover, that form is specific to each type of parallel hardware. Nowadays the most common general-purpose processors are multicore parallel processors, also known as Symmetric Multi-Processors (SMP). It is hard to find a desktop processor today without some form of SMP-style parallelism, and the development trend is towards processors with an ever larger number of available cores. Graphics processing devices (Graphics Processor Units, GPU), in turn, have increased their computing power by incorporating multiple processing units into their electronics, to the point that it is now easy to find GPU boards capable of running 200 to 400 parallel processing threads. These processors are very fast and specialized for the task they were designed for, mainly video processing. However, since this kind of processing has much in common with scientific computing, these devices have been repurposed under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose, and their general use is complicated by the limited amount of memory available on each board and by the type of parallel processing required for their use to be productive. Programmable logic devices, FPGAs, are devices capable of performing large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer. Their drawback is the complexity of programming and testing the algorithm instantiated on the device.

Given this diversity of parallel processors, the goal of our work is to analyse the specific characteristics of each of them and their impact on the structure of algorithms, so that their use yields processing performance in line with the amount of resources employed, and to combine them in such a way that they complement each other beneficially. Specifically, starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms in turn determine which of these new types of hardware is most suitable for their instantiation. In particular, we take into account the degree of data dependency, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
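As a small illustration of the easiest case discussed above, an SMP workload with no data dependencies and no synchronization, here is a minimal Python sketch (illustrative only, not from the abstract) that maps a compute-bound kernel over a list of inputs serially and then with one worker process per core:

```python
# Minimal SMP sketch: an embarrassingly parallel workload with no data
# dependencies and no synchronization, mapped over all available CPU cores.
from multiprocessing import Pool
import os
import time

def heavy_kernel(x: int) -> float:
    # Stand-in for a compute-bound task (purely illustrative).
    acc = 0.0
    for i in range(1, 200_000):
        acc += (x % i or 1) ** 0.5
    return acc

if __name__ == "__main__":
    data = list(range(64))
    n_cores = os.cpu_count() or 1

    t0 = time.perf_counter()
    serial = [heavy_kernel(x) for x in data]          # one core
    t1 = time.perf_counter()

    with Pool(n_cores) as pool:                       # one worker per core
        parallel = pool.map(heavy_kernel, data)
    t2 = time.perf_counter()

    assert serial == parallel
    print(f"serial:   {t1 - t0:.2f} s")
    print(f"parallel: {t2 - t1:.2f} s on {n_cores} cores")
```

On a typical multicore desktop the parallel run should be several times faster; workloads with data dependencies, synchronization, or large data sets, as discussed above, would require more care on an SMP and even more on a GPGPU or an FPGA.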

Abstract:

The smallnose fanskate, Sympterygia bonapartii Müller & Henle, 1841, is one of the species most frequently landed in commercial harbors in Argentina. In this work, the microscopic architecture of mature male gonads and the dynamics of cyst development are analyzed as a contribution to knowledge of the reproductive biology of the species. Some biological data related to reproduction are given as well. Two seasons were sampled (fall and spring), and length-class frequency distributions and maturity-stage frequency distributions are given. Size at first sexual maturity for males was estimated at 57 cm total length. Testes are symmetric, paired, and lobed, with several germinal zones. Inside the gonads there are many spermatocysts, each containing reproductive cells at the same developmental stage. On the basis of their cytological and microanatomical features, several maturation stages of the spermatogenic series were differentiated. Few Leydig cells were recognized in the interstitial tissue among cysts. The microscopic and semiquantitative analysis performed in this work provides morphological information about male gametogenesis and some biological data for the North Patagonian population of this economically and ecologically important species.

Abstract:

It is known that, in a locally presentable category, localization exists with respect to every set of morphisms, while the statement that localization with respect to every (possibly proper) class of morphisms exists in locally presentable categories is equivalent to a large-cardinal axiom from set theory. One proves similarly, on one hand, that homotopy localization exists with respect to sets of maps in every cofibrantly generated, left proper, simplicial model category M whose underlying category is locally presentable. On the other hand, as we show in this article, the existence of localization with respect to possibly proper classes of maps in a model category M satisfying the above assumptions is implied by a large-cardinal axiom called Vopěnka's principle, although we do not know if the reverse implication holds. We also show that, under the same assumptions on M, every endofunctor of M that is idempotent up to homotopy is equivalent to localization with respect to some class S of maps, and if Vopěnka's principle holds then S can be chosen to be a set. There are examples showing that the latter need not be true if M is not cofibrantly generated. The above assumptions on M are satisfied by simplicial sets and symmetric spectra over simplicial sets, among many other model categories.

Abstract:

Using the continuation method we prove that the circular and the elliptic symmetric periodic orbits of the planar rotating Kepler problem can be continued into periodic orbits of the planar collision restricted 3-body problem. Additionally, we also continue to this restricted problem the so-called “comet orbits”.

Abstract:

Here we describe the results of some computational explorations in Thompson's group F. We describe experiments to estimate the cogrowth of F with respect to its standard finite generating set, designed to address the subtle and difficult question of whether or not Thompson's group is amenable. We also describe experiments to estimate the exponential growth rate of F and the rate of escape of symmetric random walks with respect to the standard generating set.
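For reference, the quantities estimated in these experiments can be stated as follows; these are standard definitions, not taken verbatim from the paper. Writing F = F_2/N for the presentation of Thompson's group on its standard two-element generating set {x_0, x_1}, with F_2 the free group on two generators and N the kernel:

```latex
% Cogrowth rate of F with respect to the standard generating set:
\[
  \gamma \;=\; \limsup_{n \to \infty}
  \bigl|\{\, w \in N : |w|_{F_2} \le n \,\}\bigr|^{1/n} ,
\]
% and Grigorchuk's criterion states that F is amenable if and only if
% \gamma = 2 \cdot 2 - 1 = 3.
%
% Rate of escape of a symmetric random walk (X_n) on F whose step distribution
% is supported on the generators and their inverses:
\[
  \ell \;=\; \lim_{n \to \infty} \frac{\lvert X_n \rvert_{S}}{n}
  \qquad \text{(the limit exists a.s. by subadditivity)} .
\]
```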