867 results for Possibility-Theoretical Approach


Relevance: 30.00%

Publisher:

Abstract:

The high level of unemployment is one of the major problems in most European countries nowadays. Hence, the demand for small area labour market statistics has rapidly increased over the past few years. The Labour Force Survey (LFS) conducted by the Portuguese Statistical Office is the main source of official statistics on the labour market at the macro level (e.g. NUTS2 and national level). However, the LFS was not designed to produce reliable statistics at the micro level (e.g. NUTS3, municipalities or further disaggregated levels) due to small sample sizes, so traditional design-based estimators are not appropriate. A solution to this problem is to consider model-based estimators that "borrow information" from related areas or past samples by using auxiliary information. This paper reviews, under the model-based approach, Best Linear Unbiased Predictors and an estimator based on the posterior predictive distribution of a Hierarchical Bayesian model. The goal of the paper is to analyse the possibility of producing accurate unemployment rate statistics at the micro level from the Portuguese LFS using these kinds of estimators. The paper discusses the advantages of each approach and the viability of its implementation.
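To make the model-based idea concrete, the following minimal sketch fits an area-level (Fay-Herriot type) model to simulated direct estimates and computes the corresponding EBLUPs. The abstract does not specify the exact model or covariates used with the Portuguese LFS, so the synthetic data, the single auxiliary variable and the simplified moment estimator of the between-area variance are illustrative assumptions only.

```python
# Illustrative area-level EBLUP (Fay-Herriot type) for small-area unemployment rates.
# The abstract does not give the model specification; the data, covariate and the
# simplified moment estimator below are assumptions made only for illustration.
import numpy as np

rng = np.random.default_rng(0)

m = 30                                                    # number of small areas
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])   # auxiliary information
D = rng.uniform(0.5, 2.0, m)            # known sampling variances of direct estimates
true_beta = np.array([8.0, 4.0])
u = rng.normal(0, np.sqrt(1.5), m)      # area random effects
y = x @ true_beta + u + rng.normal(0, np.sqrt(D))   # direct survey estimates (%)

def gls_beta(A):
    """GLS estimate of the regression coefficients for a given between-area variance A."""
    w = 1.0 / (A + D)
    return np.linalg.solve(x.T @ (w[:, None] * x), x.T @ (w * y))

# Simple moment estimate of the between-area variance A (leverage correction omitted)
ols_beta, *_ = np.linalg.lstsq(x, y, rcond=None)
resid = y - x @ ols_beta
A_hat = max(0.0, (resid @ resid - D.sum()) / (m - x.shape[1]))

beta_hat = gls_beta(A_hat)
gamma = A_hat / (A_hat + D)             # shrinkage weights towards the synthetic part
eblup = gamma * y + (1 - gamma) * (x @ beta_hat)

print("estimated between-area variance:", round(A_hat, 3))
print("first five EBLUPs:", np.round(eblup[:5], 2))
```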

Relevance: 30.00%

Publisher:

Abstract:

Master's thesis, Education (Science Didactics), Universidade de Lisboa, Instituto de Educação, 2010

Relevance: 30.00%

Publisher:

Abstract:

In Québec, the question of religious pluralism is the subject of disagreement and of variation in its mode of regulation and its instruments of public action. The public consultation on Bill 94, An Act to establish guidelines governing accommodation requests within the Administration and certain institutions, lies at the heart of these disagreements. Based on an analysis of the public hearings held in parliamentary committee in Québec between May 2010 and January 2011 on Bill 94, this thesis examines the issues involved in making public the positions taken and the arguments exchanged among different actors. Using a grounded theory methodology and a conceptual framework rooted in public communication, it seeks to bring out some of the properties of the verbal and non-verbal interactions that make up and embody this deliberative activity. It approaches these interactions from the standpoint of their publicization, relying on two principles: public participation as a moment in the construction of the public problem, and the parliamentary committee hearing as a link in a dialogical network that contributes to making the disagreement over reasonable accommodation public. Focusing on the use of language (verbal, non-verbal and paraverbal), the objective of the thesis is to better understand how minority and majority groups, engaged in a public arena where viewpoints on reasonable accommodation are confronted and made visible, manage their situation of public speech. The research combined two analytical strategies: the first, inspired by conversation analysis and treating each sequence as an independent object, captured the unfolding of the hearings while respecting the sequential character of the turns at talk; the second returned to the main results of the analysis of the hearings in order to validate them, reach theoretical saturation and build a model. Working through the data with this qualitative approach brought out three dynamics. The first concerns discursive constraints. The second highlights the role of motivational and sociocultural dimensions in the construction of positions and in the adoption of a polemical register. The third underlines the reach of public speech in actualizing power relations and confirming its polemical character. The model proposed by the thesis represents the polemical register as a constitutive element of the argumentative engagement of social actors, one that is nonetheless deeply embedded in other contextual and motivational elements that shape its reach. Because it is expressed in a dialogical site, public speech in parliamentary committee hearings is able to create new plots and a possibility of coexisting in dissensus. The main contribution of the thesis is a concrete and original articulation between an approach to public speech as revealing something other than itself (necessary for clarifying the viewpoints in this controversy) and an approach to public speech as a performance leading to the transformation of the social world.
Hence the title of the thesis: speech in action. Keywords: public speech, discourse, public arena, religious pluralism, reasonable accommodation, controversies, dissensus, grounded theory

Relevance: 30.00%

Publisher:

Abstract:

In the world of mineral detection and exploration, a multitude of techniques exist. One of the most powerful is neutron activation analysis (NAA and PGAA). In mineral exploration, however, using this technique requires drilling. The motivation of the GREYSTAR project is to enable elemental analysis by activation while limiting the environmental impact. The limiting factors are the activation of a distant volume and the detection of the radiation emitted by that volume. This thesis examines thermal neutron activation and the detection of gamma rays emitted by the de-excitation of the activated nuclei. An experimental approach is presented together with simulations that support the experimental data. The conclusion is that the GREYSTAR project as described in this thesis is promising and that further research is warranted. Initial results indicate that, with the proposed prototype, the detection limits are on the order of 2-3 m in a granite-like material. From a mineral exploration standpoint, it is therefore worth pursuing the research. In addition, several other applications in the military, civilian and law-enforcement fields are promising.

Relevance: 30.00%

Publisher:

Abstract:

Doctoral thesis, Sociology (Sociology of the Family, Youth and Gender Relations), Universidade de Lisboa, Instituto de Ciências Sociais, 2014

Relevance: 30.00%

Publisher:

Abstract:

Doctoral thesis, Sustainable Energy Systems, Universidade de Lisboa, Faculdade de Ciências, 2016

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Education Sciences, specialization in Early Intervention

Relevance: 30.00%

Publisher:

Abstract:

Report on Supervised Professional Practice, Master's in Pre-School Education

Relevance: 30.00%

Publisher:

Abstract:

Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in columns with random packing, using counter-current flow between the phases. This work proposes a new methodology for column design, for any type of packing and contaminant, which avoids the need for an arbitrarily chosen diameter. It also avoids the usual graphical Eckert correlations for pressure drop; the hydraulic features are chosen beforehand as a design criterion. The design procedure was translated into a convenient algorithm in C++. A column was built in order to test the design and the theoretical steady-state and dynamic behaviour. The experiments were conducted with a solution of chloroform in distilled water. The results allowed a correction of the theoretical global mass transfer coefficient previously estimated by the Onda correlations, which depend on several parameters that are not easy to control experimentally. To better describe the column behaviour under stationary and dynamic conditions, an original mathematical model was developed. It consists of a system of two non-linear partial differential equations (distributed parameters). When the flows are steady the system becomes linear, although no analytical solution is evident. In steady state the resulting ODE can be solved analytically, while in the dynamic state discretization of the PDEs by finite differences overcomes this difficulty. A numerical algorithm was used to estimate the contaminant concentrations in both phases along the column. The large number of resulting algebraic equations and the impossibility of generating a recursive procedure did not allow a generalized programme to be built, but an iterative procedure developed in a spreadsheet allowed the simulation. The solution is stable only for similar discretization values; if different values for the time and space discretization parameters are used, it easily becomes unstable. The system's dynamic behaviour was simulated for the common liquid-phase perturbations: step, impulse, rectangular pulse and sinusoidal. The final results do not show strange or unpredictable behaviour.
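As an illustration of the kind of distributed-parameter model and finite-difference scheme described, the sketch below marches a generic two-film counter-current stripping model to steady state with an explicit upwind discretization. The thesis's actual equations and parameters are not given in the abstract; the transfer term, Henry's constant, velocities and column dimensions below are assumed values chosen only to show the numerical approach.

```python
# Minimal sketch of a counter-current air-stripping column simulated by explicit
# upwind finite differences. The transfer term, Henry's constant and all numerical
# values are illustrative assumptions, not the authors' model or parameters.
import numpy as np

Z, N = 2.0, 100                 # column height (m), number of axial cells
dz = Z / N
uL, uG = 0.005, 0.25            # liquid (down) and gas (up) superficial velocities (m/s)
KLa = 0.01                      # overall volumetric mass-transfer coefficient (1/s)
H = 0.15                        # dimensionless Henry's constant for chloroform (assumed)
CL_in, CG_in = 1.0, 0.0         # inlet concentrations (liquid at top, gas at bottom)

dt = 0.4 * dz / max(uL, uG)     # time step respecting the CFL condition
CL = np.zeros(N)                # liquid-phase profile (index 0 = top of the column)
CG = np.zeros(N)                # gas-phase profile

for _ in range(20000):          # march towards (approximate) steady state
    transfer = KLa * (CL - CG / H)              # driving force, liquid-side units
    CL_up = np.concatenate(([CL_in], CL[:-1]))  # liquid flows down: upwind cell is above
    CG_dn = np.concatenate((CG[1:], [CG_in]))   # gas flows up: upwind cell is below
    CL = CL + dt * (-uL * (CL - CL_up) / dz - transfer)
    CG = CG + dt * (-uG * (CG - CG_dn) / dz + transfer * (uL / uG))

removal = 1.0 - CL[-1] / CL_in
print(f"steady-state removal efficiency: {removal:.1%}")
```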

Relevance: 30.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to obtain the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementation of the proposed architecture, in a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s, which corresponds to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
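The abstract does not reproduce the derived cycle-count equation, but the flavour of the analysis can be illustrated with a roofline-style estimate that ties blocked matrix multiplication to core count, on-chip memory and external bandwidth. All figures in the sketch are assumptions, not the paper's parameters or results.

```python
# Back-of-the-envelope roofline-style estimate for blocked dense matrix multiplication
# on a many-core chip. This is not the equation derived in the paper (the abstract
# does not reproduce it); every parameter value below is an assumption used only to
# illustrate the kind of algorithm-oriented analysis described.

WORD = 8                                    # bytes per double-precision word

def achievable_gflops(cores, flops_per_core_per_cycle, freq_ghz,
                      on_chip_bytes, bandwidth_gbs):
    # Largest square tile b x b such that three blocks (of A, B and C) fit on chip.
    b = int((on_chip_bytes / (3 * WORD)) ** 0.5)
    # A blocked multiply reuses each loaded word about b times:
    # roughly 2*b**3 flops for 2*b**2 words of external traffic.
    intensity = b / WORD                    # flops per byte of external traffic
    peak = cores * flops_per_core_per_cycle * freq_ghz   # GFLOPs
    bandwidth_bound = bandwidth_gbs * intensity          # GFLOPs
    return b, peak, min(peak, bandwidth_bound)

b, peak, attainable = achievable_gflops(cores=256,
                                        flops_per_core_per_cycle=2,   # fused multiply-add
                                        freq_ghz=1.0,
                                        on_chip_bytes=1 * 1024 * 1024,
                                        bandwidth_gbs=16.0)
print(f"tile {b}x{b}: peak {peak:.0f} GFLOPs, attainable {attainable:.0f} GFLOPs "
      f"({attainable / peak:.0%} of peak)")
```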

Relevance: 30.00%

Publisher:

Abstract:

Spite is defined as an act that causes a loss of payoff to an opponent at a cost to the actor. As one of the four fundamental behaviours in sociobiology, it has received far less attention than its counterparts, selfishness and cooperation. It has, however, been established as a viable strategy in small populations when used against negatively related individuals: spite can either i) disappear or ii) remain at equilibrium with cooperative strategies due to the willingness of spiteful individuals to pay a cost in order to punish. This thesis sets out to understand whether the propensity for spiteful behaviour is inherent or whether it develops with age. To that effect, two game-theoretical experiments were performed with schoolboys and schoolgirls aged 6 to 22. The first, a 2 x 2 game, was tested in two variants: 1) a prize was awarded to both players, proportional to accumulated points; 2) a prize was given to the player with most points. Each player faced the following dilemma: i) maximise payoff, risking a lower payoff than the opponent; or ii) not maximise payoff in order to cut the opponent down below their own. The second game was a dictator experiment with two choices: (A) a selfish/altruistic choice affording more payoff to the donor than B, but more to the recipient than to the donor, and (B) a spiteful choice affording less payoff to the donor than A, but even less to the recipient. The dilemma here was that if subjects behaved selfishly, they obtained more payoff for themselves while at the same time increasing their opponent's payoff; if they were spiteful, they would rather have more payoff than their colleague at the cost of less for themselves. Experiments were run in schools in two different areas of Portugal (mainland and Azores) to understand whether spiteful preferences varied with age. Results of the first experiment suggest that (1) students understood the first variant as a coordination game and engaged in maximising behaviour by copying their opponents' plays; (2) repeating students preferentially engaged in spiteful behaviour more often than in maximising behaviour, with special emphasis on 14-year-olds; (3) most students engaged in reciprocal behaviour from ages 12 to 16, after which they began developing a higher tolerance for their opponents' choices. Results of the second experiment suggest that (1) selfish strategies were prevalent until the age of 6, (2) altruistic tendencies emerged from then on, and (3) spiteful strategies began to be chosen more often from the age of 8. These results add to the relatively scarce body of literature on spite and suggest that this type of behaviour is closely tied to other-regarding preferences, parochialism and children's stages of development.
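To make the spite dilemma in the dictator experiment concrete, the sketch below encodes a hypothetical pair of payoff options with the ordering described in the abstract (the actual point values used in the experiment are not reported) and shows that a pure payoff maximiser picks A while a player maximising the payoff difference over the recipient picks B.

```python
# Hypothetical payoffs illustrating the dictator-game dilemma described above.
# The numbers are made up but respect the stated ordering: A pays the donor more
# than B yet pays the recipient more than the donor; B pays the donor less than A
# and pays the recipient even less than the donor.
OPTIONS = {
    "A": {"donor": 5, "recipient": 8},   # selfish/altruistic choice
    "B": {"donor": 4, "recipient": 2},   # spiteful choice
}

def best_choice(objective):
    """Return the option that maximises the given objective function."""
    return max(OPTIONS, key=lambda opt: objective(OPTIONS[opt]))

payoff_maximiser = best_choice(lambda p: p["donor"])                   # -> "A"
spiteful_player = best_choice(lambda p: p["donor"] - p["recipient"])   # -> "B"
print(f"payoff maximiser picks {payoff_maximiser}, spiteful player picks {spiteful_player}")
```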

Relevance: 30.00%

Publisher:

Abstract:

Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices, in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. In the last decades, however, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity of accounting for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players she believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller party may also win, provided its relative size is not too small; more self-delusional voters in the minority party decrease this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
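The Cournot result described for Chapter 2 can be illustrated with a small sketch: under an assumed linear demand, default quantity and thinking-cost specification (the thesis's exact formulation is not given in the abstract), low costs support the both-think equilibrium, intermediate costs support the asymmetric one-thinks equilibrium, and high costs leave both firms at the default.

```python
# Sketch of the "costly thinking" idea in a linear symmetric Cournot duopoly: a firm
# that thinks pays a cost k and best-responds; a firm that does not think plays a
# default quantity d for free. Demand, cost, default and the equilibrium check are
# illustrative assumptions, not the exact specification of the thesis.
a, b, c = 10.0, 1.0, 1.0           # inverse demand P = a - b*(q1 + q2), marginal cost c
d = 2.0                            # assumed default quantity

def profit(q_own, q_other):
    return (a - b * (q_own + q_other) - c) * q_own

def best_response(q_other):
    return max(0.0, (a - c - b * q_other) / (2 * b))

def equilibrium_profiles(k):
    """Return which (think / not-think) profiles survive unilateral deviations."""
    profiles = []
    q_nash = (a - c) / (3 * b)
    # both think: deviating to the free default must not pay
    if profit(q_nash, q_nash) - k >= profit(d, q_nash):
        profiles.append("both think")
    # one thinks, one plays the default
    br_d = best_response(d)
    if (profit(br_d, d) - k >= profit(d, d)                  # thinker keeps thinking
            and profit(d, br_d) >= profit(best_response(br_d), br_d) - k):  # defaulter stays
        profiles.append("one thinks, one defaults")
    # neither thinks: paying k to best-respond to the default must not pay
    if profit(d, d) >= profit(best_response(d), d) - k:
        profiles.append("neither thinks")
    return profiles

for k in (0.1, 2.0, 6.0):
    print(f"thinking cost {k}: {equilibrium_profiles(k)}")
```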

Relevance: 30.00%

Publisher:

Abstract:

The aim of this study was to examine the value network and business models of wireless internet services. The study was qualitative in nature and used a constructive case study as its strategy. The example service was the Treasure Hunters mobile phone game. The study consisted of a theoretical and an empirical part. The theoretical part conceptually linked innovation, business models and the value network, and laid the groundwork for developing business models. The empirical part first focused on creating business models based on the developed innovations; finally, it sought to define a value network for implementing the service. Innovation sessions, interviews and a questionnaire survey were used as research methods. Based on the results, several business concepts were formed, as well as a description of a basic value network model for wireless games. The final conclusion was that wireless services require a value network consisting of several actors in order to be realized.

Relevance: 30.00%

Publisher:

Abstract:

Cyber security is one of the main topics discussed around the world today. The threat is real, and it is unlikely to diminish. People, businesses, governments, and even armed forces are networked in one way or another. Thus, the cyber threat also faces military networking. On the other hand, the concept of Network Centric Warfare sets high requirements for military tactical data communications and security. A challenging networking environment and cyber threats force us to consider new approaches to building security into military communication systems. The purpose of this thesis is to develop a cyber security architecture for military networks and to evaluate the designed architecture. The architecture is described as a technical functionality. As a new approach, the thesis introduces Cognitive Networks (CN), a theoretical concept for building more intelligent, dynamic and even secure communication networks. Cognitive networks are capable of observing the networking environment, making decisions for optimal performance and adapting their system parameters according to those decisions. As a result, the thesis presents a five-layer cyber security architecture consisting of security elements controlled by a cognitive process. The proposed architecture includes the infrastructure, services and application layers, which are managed and controlled by the cognitive and management layers. The architecture defines the tasks of the security elements at a functional level without introducing any new protocols or algorithms. Two separate methods were used for evaluation. The first is based on the SABSA framework, which uses a layered approach to analyse the overall security of an organization. The second was a scenario-based method in which a risk severity level is calculated. The evaluation results show that the proposed architecture fulfils the security requirements at least at a high level. However, the evaluation of the proposed architecture proved to be very challenging, so the evaluation results must be treated very critically. The thesis shows that cognitive networks are a promising approach and that they provide many benefits when designing a cyber security architecture for tactical military networks. However, many implementation problems exist, and several details must be considered and studied in future work.
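As a rough illustration of the cognitive control idea (the thesis defines the architecture only at a functional level, so the element names, threat scores and adaptation rules below are hypothetical placeholders), the sketch models an observe-decide-adapt loop in which a cognitive layer pushes a security posture down to elements of the lower layers.

```python
# Minimal sketch of an observe-decide-adapt loop that a cognitive layer could run
# over security elements of the infrastructure, services and application layers.
# The element names, threat levels and rules are hypothetical placeholders,
# not the design proposed in the thesis.
from dataclasses import dataclass, field

@dataclass
class SecurityElement:
    name: str
    layer: str                     # infrastructure, services or application
    posture: str = "normal"        # operating mode chosen by the cognitive layer

@dataclass
class CognitiveLayer:
    elements: list = field(default_factory=list)

    def observe(self, sensor_reports):
        """Aggregate sensor reports (0..1 anomaly scores) into one threat estimate."""
        return max(sensor_reports, default=0.0)

    def decide(self, threat):
        return "lockdown" if threat > 0.8 else "hardened" if threat > 0.4 else "normal"

    def adapt(self, posture):
        for element in self.elements:
            element.posture = posture   # push the decision down to every controlled element

cn = CognitiveLayer([SecurityElement("packet-filter", "infrastructure"),
                     SecurityElement("msg-gateway", "services"),
                     SecurityElement("c2-client", "application")])
posture = cn.decide(cn.observe([0.1, 0.55, 0.2]))
cn.adapt(posture)
print(posture, [(e.name, e.posture) for e in cn.elements])
```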

Relevance: 30.00%

Publisher:

Abstract:

Experiential Learning Instruments (ELIs) are employed to modify the learner's apprehension and/or comprehension in experiential learning situations, thereby improving the efficiency and effectiveness of those modalities in the learning process. They involve the learner in reciprocally interactive and determining transactions with his or her environment. ELIs are used to keep experiential learning a process rather than an object; their use is aimed at the continual refinement of the learner's knowledge and skill. Learning happens as the learner's awareness, directed by the use of ELIs, comes to experience, monitor and then use experiential feedback from living situations in a way that facilitates knowledge/skill acquisition, self-correction and refinement. The thesis examined the literature relevant to establishing a theoretical experiential learning framework within which ELIs can be understood. This framework included the idea that some learnings have intrinsic value (knowledge of necessary information) while others have instrumental value (knowledge of how to learn). The Kolb Learning Cycle and Kolb's six characteristics of experiential learning were used in analyzing three ELIs from different fields of learning: saxophone tone production, body building and interpersonal communications. The ELIs were examined to determine their learning objectives and how they work in experiential learning situations. It was noted that ELIs do not transmit information but assist the learner in attending to and comprehending aspects of personal experience; their function is to telescope the experiential learning process.