897 results for: students that use drugs
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights that involve area-specific estimates of bias and variance; and (b) those that use weights that involve a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
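The weighted-combination idea can be sketched as follows. This is a minimal illustration using the standard MSE-minimizing weight under an assumed independence of the two estimators; it is not the paper's specific weighting schemes, and all numbers are hypothetical.

```python
# Composite small-area estimator: a linear combination of a direct and an
# indirect estimator. Assuming the two estimators are independent and their
# MSEs known, the weight minimizing the MSE of the combination is
# w = mse_indirect / (mse_direct + mse_indirect).
def composite_estimate(direct, indirect, mse_direct, mse_indirect):
    w = mse_indirect / (mse_direct + mse_indirect)
    return w * direct + (1.0 - w) * indirect

# Example: a noisy direct estimate combined with a biased but stable
# indirect one. Here w = 1/5, so the result is 0.2*10 + 0.8*8 = 8.4.
est = composite_estimate(direct=10.0, indirect=8.0,
                         mse_direct=4.0, mse_indirect=1.0)
```

In practice the MSEs are themselves estimated, which is exactly where the area-specific versus common-variance weighting schemes of the paper differ.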
Abstract:
Migratory phenomena have intensified in recent decades, a trend to which the globalized era we live in has contributed greatly. Globalization brought the development of worldwide networks, notably social and economic systems, providing access to faster and cheaper means of transport, and communications became easier, promoting and facilitating the movement of people and goods between countries and continents. With easier communications and travel, migration has thus become a constant. Given the number of individuals from the most diverse parts of the globe, and with the evolution of information and communication technologies (ICT), cultural barriers have been reduced, allowing a rapid diffusion of ideas and means and enabling individuals in geographically distant places to communicate regularly. This study aimed to understand, on the one hand, the impact of ICT use by immigrants of Brazilian origin as a way of keeping in contact with their country of origin and, on the other, whether having family members or friends already residing in Portugal at the time of their arrival can contribute to their integration. The study was carried out at the Family Office and the Immigrant Office of the Social Action, Health and Youth Division of the Câmara Municipal de Albufeira. It used a convenience sample of 50 individuals from Brazil residing in Albufeira. A questionnaire survey was applied that allowed the characterization of the sample and the collection of information on the impact of ICT use as a way of maintaining contact with the country of origin, and on the importance of social networks in the integration process. It consisted of 74 closed, open and semi-open questions. A quantitative and qualitative analysis of the collected data was then carried out.
The results obtained on this issue showed that the existence of a network of family and friends contributes to the integration of the immigrant family into the host society. That is, family support and an existing network of friends give immigrants the conditions to settle in the host country and feel integrated. The fact that immigrants typically communicate weekly with their family in Brazil, and that they use information and communication technologies to do so, contributes greatly to their psychological and emotional well-being. Regular contact with the family, even if only through a webcam, gives them peace of mind and reduces the physical distance, since being able to see and hear their relatives makes them feel supported and helps them to remain in Portugal. The new technologies are used to maintain the connection with the community of origin, strengthening ties with family and friends. Immigrants use the internet to communicate with individuals of the same nationality as well as with family and friends in Brazil. Access to ICT, and its use to communicate with family and friends in the country of origin, can be regarded as a supportive social network that helps immigrants integrate into the host country.
Abstract:
The past four decades have witnessed an explosive growth in the field of network-based facility-location modeling. This is not at all surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science, and ample theoretical and applied challenges are offered. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of facilities or services in discrete space or networks that are related to the public sector, such as emergency services (ambulances, fire stations, and police units), school systems and postal facilities. The paper is structured as follows: first, we focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section examines models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section examines new trends in public sector facility location modeling.
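As a minimal illustration of the P-Median formulation discussed in the second section, the following sketch brute-forces the problem on a toy network. The distance matrix and demands are hypothetical; real instances require dedicated exact or heuristic solvers, since the problem is NP-hard.

```python
from itertools import combinations

def p_median(dist, demand, p):
    """Brute-force P-Median: choose p facility sites (column indices of the
    distance matrix) minimizing the total demand-weighted distance from each
    demand point (row) to its nearest open facility."""
    n_clients = len(dist)
    sites = range(len(dist[0]))
    best_cost, best_sites = float("inf"), None
    for subset in combinations(sites, p):
        cost = sum(demand[i] * min(dist[i][j] for j in subset)
                   for i in range(n_clients))
        if cost < best_cost:
            best_cost, best_sites = cost, subset
    return best_sites, best_cost

# Tiny example: 4 demand points on a line, candidate sites co-located
# with them, unit demand everywhere.
dist = [[0, 2, 4, 6],
        [2, 0, 2, 4],
        [4, 2, 0, 2],
        [6, 4, 2, 0]]
demand = [1, 1, 1, 1]
sites, cost = p_median(dist, demand, p=2)  # total cost 4
```

Coverage models replace the summed-distance objective with a constraint that every demand point lie within a maximal service distance of some open facility.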
Abstract:
This paper tests the internal consistency of time trade-off utilities. We find significant violations of consistency in the direction predicted by loss aversion. The violations disappear for higher gauge durations. We show that loss aversion can also explain why, for short gauge durations, time trade-off utilities exceed standard gamble utilities. Our results suggest that time trade-off measurements that use relatively short gauge durations, like the widely used EuroQol algorithm (Dolan 1997), are affected by loss aversion and lead to utilities that are too high.
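For readers unfamiliar with the method: the standard time trade-off utility is the ratio of the indifference duration in full health to the gauge duration. The sketch below uses hypothetical numbers and is not the paper's estimation procedure.

```python
def tto_utility(x_full_health, t_gauge):
    """Time trade-off utility: a respondent is indifferent between t_gauge
    years in the health state and x_full_health years in full health, so the
    state's utility is the ratio x / t."""
    return x_full_health / t_gauge

# With a 10-year gauge duration, indifference at 8 years in full health
# gives a utility of 0.8. Loss aversion tends to push the indifference
# point x upward (respondents resist giving up life-years), inflating
# the measured utility, especially for short gauge durations.
u = tto_utility(8.0, 10.0)
```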
Abstract:
Ants are among the most common arthropods that colonize termite nests. The aim of this study was to identify the ant fauna associated with termite nests found in a cacao plantation in the municipality of Ilhéus, Bahia, Brazil, with emphasis on the fauna that uses the nests as a foraging and/or nesting environment. For this purpose, 34 active, decadent and abandoned nests of Nasutitermes corniger, N. ephratae and Nasutitermes sp., with different volumes and degrees of activity, were dissected. A total of 54 ant species, belonging to 23 genera and five subfamilies, was found in the nests. The active, decadent and abandoned termite nests presented, respectively, six, eight and 48 ant species. Crematogaster acuta and Ectatomma tuberculatum were the most frequent species in the active and decadent nests, respectively, while the most frequent species in the abandoned nests were Solenopsis pollux, Thaumatomyrmex contumax and Thaumatomyrmex sp. Twenty-six ant species had true colonies within the termitaria. Formicidae species richness in the nests was inversely related to the degree of termite activity. The occurrence of living, decadent or abandoned termitaria of Nasutitermes spp. in cacao plantations increases the heterogeneity of habitats available in the plantations and favors the maintenance of a high diversity of organisms that use this substrate either obligately or opportunistically.
Abstract:
Business cycles are both less volatile and more synchronized with the world cycle in rich countries than in poor ones. We develop two alternative explanations based on the idea that comparative advantage causes rich countries to specialize in industries that use new technologies operated by skilled workers, while poor countries specialize in industries that use traditional technologies operated by unskilled workers. Since new technologies are difficult to imitate, the industries of rich countries enjoy more market power and face more inelastic product demands than those of poor countries. Since skilled workers are less likely to exit employment as a result of changes in economic conditions, industries in rich countries face more inelastic labour supplies than those of poor countries. We show that either asymmetry in industry characteristics can generate cross-country differences in business cycles that resemble those we observe in the data.
Abstract:
We develop and estimate a structural model of inflation that allows for a fraction of firms that use a backward-looking rule to set prices. The model nests the purely forward-looking New Keynesian Phillips curve as a particular case. We use measures of marginal costs as the relevant determinant of inflation, as the theory suggests, instead of an ad hoc output gap. Real marginal costs are a significant and quantitatively important determinant of inflation. Backward-looking price setting, while statistically significant, is not quantitatively important. Thus, we conclude that the New Keynesian Phillips curve provides a good first approximation to the dynamics of inflation.
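The hybrid specification described above is commonly written (in the standard notation of this literature; the symbols below are generic, not taken verbatim from the paper) as:

```latex
\pi_t = \lambda \, mc_t + \gamma_f \, E_t[\pi_{t+1}] + \gamma_b \, \pi_{t-1}
```

where $mc_t$ is real marginal cost, $\gamma_f$ and $\gamma_b$ weight the forward- and backward-looking components, and setting $\gamma_b = 0$ recovers the purely forward-looking New Keynesian Phillips curve that the model nests.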
Abstract:
Spatial evaluation of Culicidae (Diptera) larvae from different breeding sites: application of a geospatial method and implications for vector control. This study investigates the spatial distribution of urban Culicidae and informs entomological monitoring of species that use artificial containers as larval habitats. Collections of mosquito larvae were conducted in the São Paulo State municipality of Santa Bárbara d'Oeste between 2004 and 2006 during house-to-house visits. A total of 1,891 samples comprising nine different species was collected. Species distribution was assessed using the kriging statistical method, extrapolating across municipal administrative divisions. The sampling method followed the norms of the municipal health services of the Ministry of Health and can thus be adopted by public health authorities in disease control and the delimitation of risk areas. Moreover, this type of survey and analysis can be employed for entomological surveillance of urban vectors that use artificial containers as larval habitat.
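A minimal ordinary-kriging sketch in the spirit of the geostatistical method used here. The exponential semivariogram, its parameters, and the sampling points are all illustrative assumptions; a real analysis fits the variogram to the data first.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=1.0):
    """Ordinary kriging at a single target point xy0, assuming an
    exponential semivariogram gamma(h) = sill * (1 - exp(-h / rng)).
    Solves the standard kriging system with a Lagrange multiplier
    enforcing that the weights sum to one (unbiasedness)."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    gamma = sill * (1.0 - np.exp(-d / rng))
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    d0 = np.linalg.norm(xy - xy0, axis=1)
    b = np.ones(n + 1)
    b[:n] = sill * (1.0 - np.exp(-d0 / rng))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)

# Hypothetical larval counts at three sampled sites, interpolated at a
# fourth, unsampled location.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([2.0, 4.0, 6.0])
z0 = ordinary_kriging(pts, vals, np.array([0.5, 0.5]))
```

Repeating this over a grid of target points produces the interpolated risk surface that can then be clipped to administrative divisions.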
Abstract:
This paper studies the equilibrating process of several implementation mechanisms using naive adaptive dynamics. We show that the dynamics converge and are stable for the canonical mechanism of implementation in Nash equilibrium. In this way we cast some doubt on the criticism of "complexity" commonly used against this mechanism. For mechanisms that use more refined equilibrium concepts, the dynamics converge but are not stable. Some papers in the literature on implementation with refined equilibrium concepts have claimed that the mechanisms they propose are "simple" and implement "everything" (in contrast with the canonical mechanism). The fact that some of these "simple" mechanisms have unstable equilibria suggests that these statements should be interpreted with some caution.
Abstract:
When can a single variable be more accurate in binary choice than multiple sources of information? We derive analytically the probability that a single variable (SV) will correctly predict one of two choices when both criterion and predictor are continuous variables. We further provide analogous derivations for multiple regression (MR) and equal weighting (EW) and specify the conditions under which the models differ in expected predictive ability. Key factors include variability in cue validities, intercorrelation between predictors, and the ratio of predictors to observations in MR. Theory and simulations are used to illustrate the differential effects of these factors. Results directly address why and when one-reason decision making can be more effective than analyses that use more information. We thus provide analytical backing to intriguing empirical results that, to date, have lacked theoretical justification. There are predictable conditions for which one should expect less to be more.
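The comparison between SV, EW and MR can be illustrated with a small simulation. The cue structure, weights, noise level and sample sizes below are assumptions chosen for illustration; they are not the authors' analytical setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a criterion driven by three positively intercorrelated cues
# plus noise (all parameter values are illustrative assumptions).
n = 500
cov = [[1.0, 0.3, 0.3],
       [0.3, 1.0, 0.3],
       [0.3, 0.3, 1.0]]
cues = rng.multivariate_normal(np.zeros(3), cov, size=n)
criterion = (0.6 * cues[:, 0] + 0.3 * cues[:, 1] + 0.1 * cues[:, 2]
             + rng.normal(0.0, 0.5, n))

# Three competing predictors: the single best variable (SV), equal
# weighting of all cues (EW), and in-sample multiple regression (MR).
sv = cues[:, 0]
ew = cues.sum(axis=1)
beta, *_ = np.linalg.lstsq(cues, criterion, rcond=None)
mr = cues @ beta

def hit_rate(pred):
    """Share of random pairs in which the predictor picks the option
    with the higher criterion value (binary-choice accuracy)."""
    i = rng.integers(0, n, 3000)
    j = rng.integers(0, n, 3000)
    mask = i != j
    hits = (pred[i] > pred[j]) == (criterion[i] > criterion[j])
    return float(hits[mask].mean())
```

Varying the cue intercorrelation, the dispersion of the cue weights, and the ratio of cues to observations in the regression reproduces the kinds of conditions under which the three models' accuracies diverge.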
Abstract:
Punishment of non-cooperators has been observed to promote cooperation. Such punishment is an evolutionary puzzle because it is costly to the punisher while beneficial to others, for example, through increased social cohesion. Recent studies have concluded that punishing strategies usually pay less than some non-punishing strategies. These findings suggest that punishment could not have directly evolved to promote cooperation. However, while it is well established that reputation plays a key role in human cooperation, the simple threat from a reputation of being a punisher may not yet have been sufficiently explored as an explanation for the evolution of costly punishment. Here, we first show analytically that punishment can lead to long-term benefits if it influences one's reputation and thereby makes the punisher more likely to receive help in future interactions. Then, in computer simulations, we incorporate up to 40 more complex strategies that use different kinds of reputations (e.g. from generous actions), or whose punitive behaviours are directed not only towards defectors but also, for example, towards cooperators. Our findings demonstrate that punishment can directly evolve through a simple reputation system. We conclude that reputation is crucial for the evolution of punishment by making a punisher more likely to receive help in future interactions, and that experiments investigating the beneficial effects of punishment in humans should include reputation as an explicit feature.
Abstract:
OBJECTIVE: Diaphragmatic navigators are frequently used in free-breathing coronary MR angiography, either to gate, to prospectively correct slice position, or both. For such approaches, a constant relationship between coronary and diaphragmatic displacement throughout the respiratory cycle is assumed. The purpose of this study was to evaluate the relationship between diaphragmatic and coronary artery motion during free breathing. SUBJECTS AND METHODS: A real-time echo-planar MR imaging sequence was used in 12 healthy volunteers to obtain 30 successive images each (one per cardiac cycle) that included the left main coronary artery and the domes of both hemidiaphragms. The coronary artery and diaphragm positions (relative to isocenter) were determined and analyzed for effective diaphragmatic gating windows of 3, 5, and 7 mm (diaphragmatic excursions of 0-3, 0-5, and 0-7 mm from the end-expiratory position, respectively). RESULTS: Although the mean slope correlating the displacement of the right diaphragm and the left main coronary artery was approximately 0.6 for all diaphragmatic gating windows, we also found great variability among individual volunteers. Linear regression slopes varied from 0.17 to 0.93, and r² values varied from 0.04 to 0.87. CONCLUSION: Wide individual variability exists in the relationship between coronary and diaphragmatic respiratory motion during free breathing. Accordingly, coronary MR angiographic approaches that use diaphragmatic navigator position for prospective slice correction may benefit from patient-specific correction factors. Alternatively, coronary MR angiography may benefit from a more direct assessment of the respiratory displacement of the heart and coronary arteries, using left ventricular navigators.
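The patient-specific correction factor suggested in the conclusion amounts to the slope of a per-subject linear regression of coronary on diaphragmatic displacement. A sketch with hypothetical displacement data (not the study's measurements):

```python
import numpy as np

# Hypothetical displacement series (mm, relative to end-expiration) for
# the right hemidiaphragm and the left main coronary artery over
# successive cardiac cycles; values are illustrative, not the study's data.
diaphragm = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
coronary = 0.6 * diaphragm + np.array([0.0, 0.1, -0.1, 0.05, -0.05, 0.0])

# Patient-specific correction factor = regression slope (analogous to the
# ~0.6 mean slope reported above), with r² indicating how well a constant
# factor describes that subject's motion.
slope, intercept = np.polyfit(diaphragm, coronary, 1)
r2 = np.corrcoef(diaphragm, coronary)[0, 1] ** 2
```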
Abstract:
The interest in solar ultraviolet (UV) radiation from the scientific community and the general population has risen significantly in recent years because of the link between increased UV levels at the Earth's surface and depletion of ozone in the stratosphere. As a consequence of recent research, UV radiation climatologies have been developed, and effects of some atmospheric constituents (such as ozone or aerosols) have been studied broadly. Correspondingly, there are well-established relationships between, for example, total ozone column and UV radiation levels at the Earth's surface. Effects of clouds, however, are not so well described, given the intrinsic difficulties in properly describing cloud characteristics. Nevertheless, the effect of clouds cannot be neglected, and the variability that clouds induce on UV radiation is particularly significant when short timescales are involved. In this review we show, summarize, and compare several works that deal with the effect of clouds on UV radiation. Specifically, works reviewed here approach the issue from the empirical point of view: Some relationship between measured UV radiation in cloudy conditions and cloud-related information is given in each work. Basically, there are two groups of methods: techniques that are based on observations of cloudiness (either from human observers or by using devices such as sky cameras) and techniques that use measurements of broadband solar radiation as a surrogate for cloud observations. Some techniques combine both types of information. Comparison of results from different works is addressed through using the cloud modification factor (CMF) defined as the ratio between measured UV radiation in a cloudy sky and calculated radiation for a cloudless sky. Typical CMF values for overcast skies range from 0.3 to 0.7, depending both on cloud type and characteristics. 
Despite this large dispersion of values corresponding to the same cloud cover, it is clear that the cloud effect on UV radiation is 15–45% lower than the cloud effect on total solar radiation. The cloud effect is usually a reducing effect, but a significant number of works report an enhancement effect (that is, increased UV radiation levels at the surface) due to the presence of clouds. The review concludes with some recommendations for future studies aimed at further analyzing the cloud effects on UV radiation.
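The cloud modification factor used for the comparison is a simple ratio; a minimal sketch (the function name is ours, not from any particular work):

```python
def cloud_modification_factor(uv_measured, uv_clear_sky_model):
    """CMF: ratio of the measured UV irradiance under the actual (cloudy)
    sky to the modeled clear-sky UV irradiance for the same time and
    place. CMF < 1 means clouds attenuate UV; CMF > 1 indicates the
    enhancement effect mentioned above."""
    return uv_measured / uv_clear_sky_model

# An overcast sky transmitting half the clear-sky UV gives CMF = 0.5,
# inside the 0.3-0.7 range typical of overcast conditions.
cmf = cloud_modification_factor(30.0, 60.0)
```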
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope and can also be applied to ground-based images.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters close to the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. The conductivity changes needed to explain the time-lapse data are, however, much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
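As a toy illustration of the random-walk Metropolis sampling underlying the probabilistic inversions above: a one-parameter caricature with a made-up linear forward model, not the thesis's pixel-based or Legendre-moment parameterizations.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(m, data, forward, sigma):
    """Gaussian log-likelihood plus a flat prior on m in [0, 4]
    (think log10 resistivity); -inf outside the prior bounds."""
    if not 0.0 <= m <= 4.0:
        return -np.inf
    resid = data - forward(m)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Toy forward model and synthetic data for a "true" parameter of 2.0.
forward = lambda m: np.array([m, 2.0 * m])
data = forward(2.0) + rng.normal(0.0, 0.1, 2)

# Random-walk Metropolis: propose a Gaussian perturbation, accept with
# probability min(1, posterior ratio).
chain, m = [], 1.0
lp = log_posterior(m, data, forward, 0.1)
for _ in range(5000):
    prop = m + rng.normal(0.0, 0.2)
    lp_prop = log_posterior(prop, data, forward, 0.1)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m)

# Posterior mean after burn-in; the spread of the chain is the model
# parameter uncertainty that the thesis quantifies in high dimensions.
posterior_mean = float(np.mean(chain[1000:]))
```

In the actual inversions the parameter vector is a 2-D/3-D model grid (or its Legendre-moment reduction) and the forward solver is a plane-wave EM simulation, which is what makes the computational burden significant.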