916 results for multiple discrepancies theory


Relevance:

30.00%

Publisher:

Abstract:

This thesis contains two chapters, each dealing with the theory and history of banks and financial arrangements. Chapter 1 extends a Diamond-Dybvig economy with imperfect monitoring of early withdrawals and compares social welfare across the feasible allocations, as proposed in Prescott and Weinberg (2003). Imperfect monitoring is implemented through indirect communication (via a means of payment) between the agents and the deposit-and-withdrawal machine, which aggregates the productive and financial sectors. The extension studies allocations in which a fraction of the agents can exploit the imperfect monitoring and defraud the contracted allocation by consuming early beyond the limit, using multiple means of payment. Since punishment is limited to the late-consumption period, this new allocation can be called a separating allocation, in contrast with pooling allocations, in which the agent able to defraud is blocked either by a fraud-proof but costly means of payment or by receiving enough future consumption to make fraud unattractive. The welfare comparison over the chosen parameter range shows that separating allocations are optimal for the poorest economies, while pooling allocations are optimal for middle-endowment and rich economies. The chapter closes with a possible historical context for the model, which connects with the historical narrative in Chapter 2. Chapter 2 explores the quantitative properties of a leading-indicator system for financial crises, with the variables chosen from a "boom and bust" framework described in more detail in Appendix 1. The main variables are: real growth in property and stock prices, the spread between the 10-year government bond yield and the 3-month interbank rate, and growth in the total assets of the banking sector.
These variables produce a higher rate of correct signals for the recent banking crises (1984-2008) than comparable leading-indicator systems. Taking into account a growing base risk (due to the tendency of distortions in the relative price system to accumulate during earlier expansions) also adds information and raises the number of correct signals in countries that did not experience such a vigorous expansion in credit and asset prices.
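As an illustration of how such a leading-indicator system can work, the sketch below flags a warning when enough indicators breach their thresholds. The threshold values, variable names and two-of-four decision rule are assumptions for illustration, not the calibration used in the thesis.

```python
# Hypothetical indicator thresholds (illustrative only; the thesis
# calibrates its system on 1984-2008 crisis data).
THRESHOLDS = {
    "house_price_growth": 0.06,   # real growth in property prices
    "stock_price_growth": 0.15,   # real growth in stock prices
    "term_spread": -0.005,        # 10y government yield minus 3m interbank rate
    "bank_asset_growth": 0.10,    # growth in total banking-sector assets
}

def crisis_signal(obs: dict, min_breaches: int = 2) -> bool:
    """Flag a warning when enough indicators breach their thresholds.

    The term spread signals when it falls BELOW its threshold (a flat or
    inverted curve); the growth variables signal when they rise above theirs.
    """
    breaches = 0
    for name, threshold in THRESHOLDS.items():
        value = obs[name]
        if name == "term_spread":
            breaches += value < threshold
        else:
            breaches += value > threshold
    return breaches >= min_breaches

boom = {"house_price_growth": 0.09, "stock_price_growth": 0.22,
        "term_spread": -0.01, "bank_asset_growth": 0.14}
calm = {"house_price_growth": 0.01, "stock_price_growth": 0.04,
        "term_spread": 0.015, "bank_asset_growth": 0.03}
print(crisis_signal(boom), crisis_signal(calm))
```

A real system would also evaluate the signal rate against crisis dates, which is the quantitative exercise the chapter carries out.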


Brazil is going through political and financial crises whose end seems far away. Because of this, researchers argue that the hotel rooms built in Rio de Janeiro to host the 2016 Olympic Games will be difficult to fill after the event. Hotels therefore need to understand how guests perceive service quality in order to adapt to this new era. If guests' perceptions meet or exceed their expectations, they will be satisfied and will probably return. Thus, based on the SERVQUAL approach, this paper studies the impact of the service dimensions on guests' overall satisfaction at hotels in Rio de Janeiro. Two hotels were considered representative of the city in terms of service quality and customer profile. Interviews were conducted with the hotel managers, and questionnaires were administered to the guests. Among the five SERVQUAL dimensions, Reliability, Tangibles, Responsiveness, Assurance, and Empathy, the Empathy dimension appears to be the only one that affects guests' overall satisfaction. The study also identified that gender, country of residence, home country and family income have an impact on guests' satisfaction. This study has no intention of generalizing, but rather of refining the theory about services and the SERVQUAL model.
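For illustration, a SERVQUAL-style analysis can be sketched as a least-squares regression of overall satisfaction on the five gap scores (perception minus expectation). The data below are synthetic, generated so that Empathy dominates, mirroring the paper's finding; the sample size and scale are assumptions.

```python
import numpy as np

# Synthetic SERVQUAL gap scores (perception minus expectation) for ten
# hypothetical guests across the five dimensions.
rng = np.random.default_rng(0)
dimensions = ["Reliability", "Tangibles", "Responsiveness", "Assurance", "Empathy"]
gaps = rng.normal(0.0, 1.0, size=(10, 5))
# Overall satisfaction driven (in this toy data) mainly by the Empathy gap:
satisfaction = 5.0 + 0.8 * gaps[:, 4] + rng.normal(0.0, 0.1, size=10)

# Least-squares fit of satisfaction on an intercept plus the five gaps.
X = np.column_stack([np.ones(10), gaps])
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
for name, beta in zip(dimensions, coef[1:]):
    print(f"{name:>15}: {beta:+.2f}")
```

The fitted Empathy coefficient recovers the dominant effect; the others stay near zero, which is the pattern the paper reports for real guests.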


Sleep is beneficial to learning, but the underlying mechanisms remain controversial. The synaptic homeostasis hypothesis (SHY) proposes that the cognitive function of sleep is related to a generalized rescaling of synaptic weights to intermediate levels, due to a passive downregulation of plasticity mechanisms. A competing hypothesis proposes that the active upscaling and downscaling of synaptic weights during sleep embosses memories in circuits respectively activated or deactivated during prior waking experience, leading to memory changes beyond rescaling. Both theories have empirical support, but the experimental designs underlying the conflicting studies are not congruent, so a consensus has yet to be reached. To address this issue, we used real-time PCR and electrophysiological recordings to assess gene expression related to synaptic plasticity in the hippocampus and primary somatosensory cortex of rats exposed to novel objects, then kept awake (WK) for 60 min and finally killed after a 30 min period rich in WK, slow-wave sleep (SWS) or rapid-eye-movement sleep (REM). Animals similarly treated but not exposed to novel objects were used as controls. We found that the mRNA levels of Arc, Egr1, Fos, Ppp2ca and Ppp2r2d were significantly increased in the hippocampus of exposed animals allowed to enter REM, in comparison with control animals. Experience-dependent changes during sleep were not significant in the hippocampus for Bdnf, Camk4, Creb1, and Nr4a1, and no differences were detected between exposed and control SWS groups for any of the genes tested. No significant changes in gene expression were detected in the primary somatosensory cortex during sleep, in contrast with previous studies using longer post-stimulation intervals (>180 min). The experience-dependent induction of multiple plasticity-related genes in the hippocampus during early REM adds experimental support to the synaptic embossing theory.


By considering the long-wavelength limit of the regularized long wave (RLW) equation, we study its multiple-time higher-order evolution equations. As a first result, the equations of the Korteweg-de Vries hierarchy are shown to play a crucial role in providing a secularity-free perturbation theory in the specific case of a solitary-wave solution. Then, as a consequence, we show that the related perturbative series can be summed and gives exactly the solitary-wave solution of the RLW equation. Finally, some comments and considerations are made on the N-soliton solution, as well as on the limitations of applicability of the multiple-scale method in obtaining uniform perturbative series.
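For reference, in a standard normalization (the Benjamin-Bona-Mahony form, which is how the regularized long wave equation is commonly written), the equation and its one-parameter family of solitary waves read:

```latex
% RLW (BBM) equation in a common normalization
u_t + u_x + u\,u_x - u_{xxt} = 0,
% with solitary-wave solutions of speed c > 1:
u(x,t) = 3(c-1)\,\operatorname{sech}^2\!\left[\tfrac{1}{2}\sqrt{1-\tfrac{1}{c}}\,(x - ct)\right].
```

It is this solitary-wave solution that the summed perturbative series of the paper recovers exactly from the Korteweg-de Vries hierarchy.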


Recent reports on the prevalence of multiple sclerosis (MS) have described discrepancies between the rates in cities in the northeastern and southeastern regions of Brazil, representing a north-south gradient. European immigrants settled in southeastern and southern Brazil at the beginning of the twentieth century. In this study, we report the frequency of European ancestors among Brazilian MS patients in four cities in the southern and southeastern regions of Brazil. Methods: A total of 652 consecutive patients with confirmed MS diagnoses seen at four centers in Belo Horizonte, Ribeirão Preto, Londrina and Santos were asked about the origin of their ancestors, going back three generations. Results: 287 (44%) reported Italian ancestry, 211 (32%) reported that all ancestors were born in Brazil, 49 (7.5%) had Portuguese ancestry and 70 (10%) had Spanish ancestry. The patients in Belo Horizonte and Londrina reported higher proportions of Italian ancestry than the proportions estimated for the populations of their respective States. Conclusion: Brazil has a north-south gradient of 0.91/100,000 per degree of latitude, which is higher than the gradient for Latin America. Since the largest immigrant group that settled in southern and southeastern Brazil was from Italy, it is possible that Italian immigration was one of the factors that contributed to the increased prevalence of MS in these regions.


This article describes the use of Artificial Intelligence (AI) techniques applied in cells of a manufacturing system. Machine vision was used to identify the pieces, and their positions, of two different products to be assembled on the same production line. This information is given as input to an AI planner embedded in the manufacturing system. Initial and final states are thus sent automatically to the planner, which is capable of generating assembly plans for a robotic cell in real time.
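The planning step can be sketched as a tiny state-space search: the vision system supplies the initial state, the product specification supplies the goal, and the planner searches for an action sequence. The states, actions and part names below are hypothetical stand-ins, not the authors' actual system.

```python
from collections import deque

# Each action maps a name to (preconditions, effects) over a set of facts.
ACTIONS = {
    "pick(base)":    (frozenset(), frozenset({"holding_base"})),
    "place(base)":   (frozenset({"holding_base"}), frozenset({"base_placed"})),
    "pick(cover)":   (frozenset({"base_placed"}), frozenset({"holding_cover"})),
    "attach(cover)": (frozenset({"holding_cover", "base_placed"}),
                      frozenset({"assembled"})),
}

def plan(initial: frozenset, goal: frozenset):
    """Breadth-first search from the vision-derived state to the goal."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps                      # shortest plan found
        for name, (pre, add) in ACTIONS.items():
            if pre <= state:                  # action applicable?
                nxt = state | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                               # goal unreachable

print(plan(frozenset(), frozenset({"assembled"})))
```

A production planner would use a richer action language (e.g. STRIPS-style operators with delete effects), but the search skeleton is the same.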


Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
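A minimal sketch of the kind of neural field the DFT builds on is a one-dimensional Amari-style field with local excitation and surround inhibition; a localized input drives a self-stabilized activation peak. All parameters below are illustrative, not the paper's.

```python
import numpy as np

# du/dt = (-u + h + input + lateral interaction) / tau, where the
# interaction is a Mexican-hat kernel applied to the sigmoided field output.
n, h, dt, tau = 101, -5.0, 1.0, 10.0
x = np.arange(n)
u = np.full(n, h)                              # field starts at resting level

def kernel(a_exc=15.0, w_exc=3.0, a_inh=5.0, w_inh=10.0):
    """Local excitation minus broader inhibition (Mexican hat)."""
    d = np.abs(x[:, None] - x[None, :])
    return (a_exc * np.exp(-d**2 / (2 * w_exc**2))
            - a_inh * np.exp(-d**2 / (2 * w_inh**2)))

w = kernel()
stimulus = 8.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))   # localized input

for _ in range(200):                            # Euler integration
    output = 1.0 / (1.0 + np.exp(-u))           # sigmoid field output
    u += dt / tau * (-u + h + stimulus + w @ output / n)

print(round(float(u.max()), 2), int(u.argmax()))
```

The full DFT couples several such fields across spatial and memory dimensions; the developmental hypothesis in the paper corresponds to narrowing the kernel widths used here.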


This paper discusses the power-allocation problem with a fixed rate constraint in multi-carrier code division multiple access (MC-CDMA) networks, which has been solved from a game-theoretic perspective through an iterative water-filling algorithm (IWFA). The problem is analyzed under various interference-density configurations, and its reliability is studied in terms of solution existence and uniqueness. Moreover, numerical results reveal the approach's shortcomings; thus a new method combining swarm intelligence and IWFA is proposed to make game-theoretic approaches practicable in realistic MC-CDMA system scenarios. The contribution of this paper is twofold: (i) a complete analysis of the existence and uniqueness of the game solution, from simple to more realistic and complex interference scenarios; (ii) a hybrid power-allocation optimization method combining swarm intelligence, game theory and IWFA. To corroborate the effectiveness of the proposed method, an outage-probability analysis in realistic interference scenarios and a complexity comparison with the classical IWFA are presented.
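The baseline IWFA can be sketched for a toy two-user, four-subcarrier network: each user repeatedly water-fills its power budget over the subcarriers, treating the other user's current powers as fixed interference. The gains, budgets and iteration count below are assumptions, not the paper's MC-CDMA setup.

```python
import numpy as np

# G[n, i, j]: channel gain from transmitter j to receiver i on subcarrier n.
G = np.array([[[1.0, 0.2], [0.3, 1.1]],
              [[0.8, 0.4], [0.2, 0.9]],
              [[1.2, 0.1], [0.5, 1.3]],
              [[0.6, 0.3], [0.4, 0.7]]])
noise, budget = 0.1, np.array([1.0, 1.0])
P = np.zeros((4, 2))                    # P[n, i]: power of user i on subcarrier n

def waterfill(inv_gain, total):
    """Pour 'total' power above the per-channel floors inv_gain."""
    floor = np.sort(inv_gain)
    for k in range(len(floor), 0, -1):  # try using the k best channels
        level = (total + floor[:k].sum()) / k
        if level > floor[k - 1]:        # all k channels get positive power
            return np.maximum(level - inv_gain, 0.0)
    return np.zeros_like(inv_gain)

for _ in range(50):                     # iterate toward a Nash equilibrium
    for i in range(2):
        interference = noise + sum(G[:, i, j] * P[:, j]
                                   for j in range(2) if j != i)
        P[:, i] = waterfill(interference / G[:, i, i], budget[i])

print(P.round(3))
```

Each water-filling step spends the user's full budget, so the iteration preserves the power constraints; whether the loop converges to a unique equilibrium is exactly the existence/uniqueness question the paper analyzes.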


The progress of electron-device integration has followed, for more than 40 years, the well-known Moore's law, which states that the density of transistors on a chip doubles every 24 months. This trend has been made possible by the downsizing (scaling) of MOSFET dimensions; however, new issues and challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. To overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny include:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of ultra-thin-body SOI devices is a new design parameter, and it makes it possible to keep short-channel effects under control without adopting high doping levels in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, heterojunctions at the channel edge stand out; they are obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution increases the injection velocity of the carriers travelling from the source into the channel, and therefore improves the transistor's performance in terms of drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced into the Monte Carlo code to simulate conduction-band discontinuities, together with the simulations performed on simplified one-dimensional structures to validate them.
Chapter 4 presents the results of Monte Carlo simulations of double-gate SOI transistors featuring conduction-band offsets between the source and drain regions and the channel. Attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures also have consequences for power dissipation. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated-circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices and provides a brief overview of the methods proposed to model these phenomena. To understand how this problem affects the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors, as well as in FinFETs, featuring the same isothermal electrical characteristics.
In chapter 6 the same simulation approach is employed extensively to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperature reached inside the device and the thermal resistance associated with the device, as well as the dependence of SHE on the main geometrical parameters, are analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or a reduced fin height are explored. Finally, conclusions are drawn in chapter 7.
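The scale of the self-heating problem can be seen with a back-of-the-envelope estimate: treating the buried oxide as a 1-D thermal resistance gives a temperature rise of roughly DeltaT = P * t_box / (k_SiO2 * A). The device dimensions and dissipated power below are illustrative round numbers, not the thesis' simulation results.

```python
# 1-D conduction estimate of the temperature rise across the buried oxide.
k_sio2 = 1.4          # W/(m*K), thermal conductivity of SiO2
k_si = 150.0          # W/(m*K), bulk silicon, ~two orders of magnitude higher
t_box = 100e-9        # m, buried-oxide thickness (illustrative)
area = 1e-6 * 50e-9   # m^2, rough active-region footprint (1 um x 50 nm)
power = 50e-6         # W, power dissipated in the active region (illustrative)

r_box = t_box / (k_sio2 * area)   # K/W, thermal resistance of the BOX
delta_t = power * r_box           # K, temperature rise across the BOX
print(f"R_box = {r_box:.3g} K/W, DeltaT = {delta_t:.1f} K")
```

Even this crude estimate yields tens of kelvin across the oxide alone, which is why full three-dimensional electro-thermal simulation is needed for realistic device design.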


This thesis proposes a new physical equivalent-circuit model for a recently proposed semiconductor transistor, the two-drain MSET (Multiple State Electrostatically Formed Nanowire Transistor). It also presents a new software-based experimental setup developed for carrying out numerical simulations of the device and of equivalent circuits. As of 2015, the ubiquitous CMOS technology that has been at the forefront of mainstream technological advancement is approaching its scaling limits, so many researchers are exploring alternative electrical devices for logic applications, among them MSET transistors. The idea underlying MSETs is that a single multiple-terminal device could replace many traditional transistors. In particular, a two-drain MSET is akin to a silicon multiplexer: it consists of a junction FET with independent gates but with a split drain, so that a voltage-controlled conductive path can connect either of the drains to the source. The first chapter presents the theory of classical JFETs and their common equivalent-circuit models. The physical model and its derivation are presented, and the current state of equivalent circuits for the JFET is discussed. A physical model of a JFET with two independent gates, derived from previous results, is presented at the end of the chapter. A review of the characteristics of the MSET device is given in chapter 2, where the proposed physical model and its formulation are presented. A listing of the SPICE model is attached as an appendix at the end of this document. Chapter 3 concerns the results of the numerical simulations of the device: first the search for a suitable geometry is discussed, and then results from finite-element simulations are compared with equivalent-circuit runs.
Where points of challenging divergence were found between the two sets of numerical results, the relevant physical processes are discussed. The fourth chapter describes the experimental setup: the GUI-based environments that allow the user to explore the four-dimensional solution space and to analyze the physical variables inside the device. It is shown how this software project has been structured to overcome technical challenges in running multiple simulations in sequence, and to provide a flexible platform for future research in the field.
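As context for the first chapter, the classical square-law model of an n-channel JFET, the usual starting point for equivalent-circuit work, can be sketched as follows. The IDSS and VP values are illustrative, not parameters of the MSET devices studied in the thesis.

```python
# Classical square-law model of an n-channel JFET.
IDSS = 10e-3   # A, drain current at Vgs = 0 in saturation (illustrative)
VP = -4.0      # V, pinch-off voltage (illustrative)

def jfet_id(vgs: float, vds: float) -> float:
    """Drain current across the cutoff, triode and saturation regions."""
    if vgs <= VP:
        return 0.0                        # cutoff: channel fully pinched off
    vov = vgs - VP                        # overdrive voltage
    if vds < vov:                         # triode (ohmic) region
        return 2 * IDSS / VP**2 * (vov - vds / 2) * vds
    return IDSS * (1 - vgs / VP) ** 2     # saturation region

print(jfet_id(0.0, 10.0))    # Idss at Vgs = 0 in saturation
print(jfet_id(-2.0, 10.0))   # one quarter of Idss
```

A two-gate extension, as developed in the thesis, replaces the single Vgs dependence with the combined electrostatic action of both gates on the channel; the region structure stays analogous.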


An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
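As a concrete example of the kind of standard procedure such an empirical-alternative method is compared against, the Benjamini-Hochberg step-up rule can be sketched as follows; the p-values are toy data, and BH is a stand-in for whichever standard procedure the paper benchmarks.

```python
import numpy as np

def bh_reject(pvals: np.ndarray, q: float = 0.05) -> np.ndarray:
    """Benjamini-Hochberg step-up: boolean mask of rejections at FDR level q."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m      # i/m * q for rank i
    below = pvals[order] <= thresholds
    # Reject all hypotheses up to the largest rank whose p-value is under
    # its threshold (zero rejections if none are).
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

p = np.array([0.001, 0.008, 0.039, 0.041, 0.6, 0.9])
print(bh_reject(p))
```

An empirical-alternative procedure, as in the paper, would instead rank hypotheses by an estimated optimal statistic rather than by raw p-values, potentially rejecting substantially more true positives at the same level.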