Abstract:
Understanding fluctuations in tropical cyclone activity along United States shores and abroad becomes increasingly important as coastal managers and planners seek to save lives, mitigate damage, and plan for resilience in the face of changing storminess and sea-level rise. Tropical cyclone activity has long been of concern to coastal areas, as these storms bring strong winds, heavy rains, and high seas. Given projections of a warming climate, current estimates suggest that tropical cyclones will increase not only in frequency but also in intensity (maximum sustained winds and minimum central pressures). An understanding of what has happened historically is an important step in identifying potential future changes in tropical cyclone frequency and intensity. The ability to detect such changes depends on a consistent and reliable global tropical cyclone dataset. Until recently, no central repository for historical tropical cyclone data existed. To fill this need, the International Best Track Archive for Climate Stewardship (IBTrACS) dataset was developed to collect all known global historical tropical cyclone data into a single source for dissemination. With this dataset, a global examination of changes in tropical cyclone frequency and intensity can be performed. Caveats apply to any historical tropical cyclone analysis, however, as the data contributed to the IBTrACS archive from various tropical cyclone warning centers are still replete with biases that may stem from operational changes, inhomogeneous monitoring programs, and time discontinuities. A detailed discussion of the difficulties in detecting trends using tropical cyclone data can be found in Landsea et al. 2006. The following sections use the IBTrACS dataset to show the global spatial variability of tropical cyclone frequency and intensity.
Analyses will show where the strongest storms typically occur, the regions with the highest number of tropical cyclones per decade, and the locations of highest average maximum wind speeds. (PDF contains 3 pages)
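The per-decade frequency and intensity summaries the abstract describes can be sketched with a few lines of code. This is a minimal illustration over a hypothetical list of storm records (year, maximum sustained wind), not the actual IBTrACS schema or access method:

```python
# Sketch: per-decade tropical cyclone counts and mean maximum sustained wind
# from a simplified list of storm records. The record layout is illustrative,
# not the actual IBTrACS file format.
from collections import defaultdict

def decadal_summary(storms):
    """storms: iterable of (year, max_wind_kt) tuples, one per storm.
    Returns {decade_start: (storm_count, mean_max_wind_kt)}."""
    counts = defaultdict(int)
    wind_sums = defaultdict(float)
    for year, max_wind in storms:
        decade = (year // 10) * 10
        counts[decade] += 1
        wind_sums[decade] += max_wind
    return {d: (counts[d], wind_sums[d] / counts[d]) for d in counts}

records = [(1991, 120), (1995, 95), (2003, 140), (2005, 150)]
print(decadal_summary(records))   # {1990: (2, 107.5), 2000: (2, 145.0)}
```

Grouping by basin instead of decade would give the spatial breakdown the sections present.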
Abstract:
According to the Millennium Ecosystem Assessment’s chapter “Coastal Systems” (Agardy and Alder 2005), 40% of the world population falls within 100 km of the coast. Agardy and Alder report that population densities in coastal regions are three times those of inland regions and demographic forecasts suggest a continued rise in coastal populations. These high population levels can be partially traced to the abundance of ecosystem services provided in the coastal zone. While populations benefit from an abundance of services, population pressure also degrades existing services and leads to increased susceptibility of property and human life to natural hazards. In the face of these challenges, environmental administrators on the coast must pursue agendas which reflect the difficult balance between private and public interests. These decisions include maintaining economic prosperity and personal freedoms, protecting or enhancing the existing flow of ecosystem services to society, and mitigating potential losses from natural hazards. (PDF contains 5 pages)
Abstract:
Rising global temperatures threaten the survival of many plant and animal species. Having already risen at an unprecedented rate in the past century, temperatures are predicted to rise between 0.3 and 7.5°C in North America over the next 100 years (Hawkes et al. 2007). Studies have documented the effects of climate warming on phenology (the timing of seasonal activities), with observations of early arrival at breeding grounds, earlier ends to the reproductive season, and delayed autumnal migrations (Pike et al. 2006). In addition, for species not suited to the physiological demands of cold winter temperatures, increasing temperatures could shift tolerable habitats to higher latitudes (Hawkes et al. 2007). More directly, climate warming will impact thermally sensitive species like sea turtles, which exhibit temperature-dependent sex determination. Temperatures in the middle third of the incubation period determine the sex of sea turtle offspring, with higher temperatures resulting in a greater abundance of female offspring. Consequently, increasing temperatures from climate warming would drastically change the offspring sex ratio (Hawkes et al. 2007). Of the seven extant species of sea turtles, three (leatherback, Kemp's ridley, and hawksbill) are critically endangered, two (olive ridley and green) are endangered, and one (loggerhead) is threatened. Considering the predicted scenarios of climate warming and the already tenuous status of sea turtle populations, it is essential that efforts are made to understand how increasing temperatures may affect sea turtle populations and how these species might adapt in the face of such changes. In this analysis, I seek to identify the impact of changing climate conditions over the next 50 years on the availability of sea turtle nesting habitat in Florida, given predicted changes in temperature and precipitation.
I predict that future conditions in Florida will be less suitable for sea turtle nesting during the historic nesting season. This may imply that sea turtles will nest at a different time of year, in more northern latitudes, to a lesser extent, or possibly not at all. It seems likely that changes in temperature and precipitation patterns will alter the distribution of sea turtle nesting locations worldwide, provided that beaches where the conditions are suitable for nesting still exist. Hijmans and Graham (2006) evaluate a range of climate envelope models in terms of their ability to predict species distributions under climate change scenarios. Their results suggested that the choice of species distribution model is dependent on the specifics of each individual study. Fuller et al. (2008) used a maximum entropy approach to model the potential distribution of 11 species in the Arctic Coastal Plain of Alaska under a series of projected climate scenarios. Recently, Pike (in press) developed Maxent models to investigate the impacts of climate change on green sea turtle nest distribution and timing. In each of these studies, a set of environmental predictor variables (including climate variables), for which ‘current’ conditions are available and ‘future’ conditions have been projected, is used in conjunction with species occurrence data to map potential species distribution under the projected conditions. In this study, I will take a similar approach in mapping the potential sea turtle nesting habitat in Florida by developing a Maxent model based on environmental and climate data and projecting the model for future climate data. (PDF contains 5 pages)
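The projection step shared by these studies can be sketched numerically: a suitability model fitted to 'current' climate covariates is re-evaluated on projected 'future' covariates. Here a logistic curve with a quadratic thermal optimum stands in for the Maxent model; the functional form, coefficients, and variable choices are all illustrative assumptions, not values from the study:

```python
# Sketch of climate-envelope projection: evaluate one fitted suitability
# function under current and projected climate covariates. All coefficients
# below are hypothetical stand-ins, not fitted Maxent output.
import math

def suitability(temp_c, precip_mm, b0=2.0, a=0.3, t_opt=27.0, b_precip=0.001):
    """Habitat-suitability score in [0, 1] with a thermal optimum at t_opt."""
    z = b0 - a * (temp_c - t_opt) ** 2 + b_precip * precip_mm
    return 1.0 / (1.0 + math.exp(-z))

current = suitability(temp_c=26.0, precip_mm=1200.0)  # 'current' conditions
future = suitability(temp_c=30.0, precip_mm=1000.0)   # warmer, drier projection
print(round(current, 3), round(future, 3))            # suitability declines
```

Applying such a function to every grid cell of current and projected climate layers yields the before/after habitat maps these studies compare.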
Abstract:
Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must follow these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, in that ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is increased. We show that this autocatalysis destabilizes the system, slows down response, and also constrains the system's performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore the evolutionary models that have previously been proposed and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria.
However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfers, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
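The hybrid growth process described above can be sketched as follows: each new node either duplicates an existing node (inheriting a random subset of its links) or attaches to random existing nodes, a stand-in for horizontal gene transfer. The mixing probability and rates here are illustrative, not fitted to the thesis's networks:

```python
# Sketch of network growth mixing duplication with random attachment
# (a stand-in for horizontal gene transfer). Parameters are illustrative.
import random

def grow_network(n_nodes, p_duplicate=0.6, keep_prob=0.5, n_transfer_links=2, seed=1):
    random.seed(seed)
    adj = {0: {1}, 1: {0}}                 # seed graph: a single edge
    for new in range(2, n_nodes):
        adj[new] = set()
        if random.random() < p_duplicate:  # duplication: copy a subset of links
            parent = random.choice(list(adj.keys() - {new}))
            for nbr in adj[parent]:
                if random.random() < keep_prob:
                    adj[new].add(nbr)
                    adj[nbr].add(new)
        else:                              # transfer: attach to random nodes
            for t in random.sample(list(adj.keys() - {new}),
                                   k=min(n_transfer_links, new)):
                adj[new].add(t)
                adj[t].add(new)
    return adj

net = grow_network(200)
degrees = sorted(len(nbrs) for nbrs in net.values())
print(degrees[-5:])   # the largest degrees in the grown network
```

Comparing the resulting degree distribution against pure-duplication and pure-attachment runs is the kind of test the abstract alludes to.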
Abstract:
The visual system is a remarkable platform that evolved to solve difficult computational problems such as detection, recognition, and classification of objects. Of great interest is the face-processing network, a sub-system buried deep in the temporal lobe, dedicated to analyzing a specific type of object (faces). In this thesis, I focus on the problem of face detection by the face-processing network. Insights obtained from years of developing computer-vision algorithms to solve this task have suggested that it may be efficiently and effectively solved by detection and integration of local contrast features. Does the brain use a similar strategy? To answer this question, I embark on a journey that takes me through the development and optimization of dedicated tools for targeting and perturbing deep brain structures. Data collected using MR-guided electrophysiology in early face-processing regions showed strong selectivity for contrast features, similar to those used by artificial systems. While individual cells were tuned for only a small subset of features, the population as a whole encoded the full spectrum of features that are predictive of the presence of a face in an image. Together with additional evidence, my results suggest a possible computational mechanism for face detection in early face-processing regions. To move from correlation to causation, I focus on adopting an emergent technology for perturbing brain activity using light: optogenetics. While this technique has the potential to overcome problems associated with the de facto standard for brain stimulation (electrical microstimulation), many open questions remain about its applicability and effectiveness for perturbing the non-human primate (NHP) brain.
In a set of experiments, I use viral vectors to deliver genetically encoded optogenetic constructs to the frontal eye field and face-selective regions in NHPs and examine their effects side-by-side with electrical microstimulation to assess their effectiveness in perturbing neural activity as well as behavior. Results suggest that cells are robustly and strongly modulated upon light delivery and that such perturbation can modulate and even initiate motor behavior, thus paving the way for future explorations that may apply these tools to study connectivity and information flow in the face-processing network.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
The second chapter characterizes a decision maker with sticky beliefs. That is, a decision maker who does not update enough in response to information, where enough means as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
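The sticky-belief representation has a direct numeric form: the updated belief is a convex combination of the prior and the Bayesian posterior, with the weight on the prior measuring stickiness. A minimal sketch (the state space and numbers are made up for illustration):

```python
# Sticky-belief updating: posterior = stickiness * prior
#                                    + (1 - stickiness) * Bayesian posterior.
# stickiness = 0 recovers Bayes' rule; stickiness = 1 means no updating at all.
def bayes_posterior(prior, likelihood):
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

def sticky_posterior(prior, likelihood, stickiness):
    post = bayes_posterior(prior, likelihood)
    return [stickiness * p + (1 - stickiness) * q
            for p, q in zip(prior, post)]

prior = [0.5, 0.5]
likelihood = [0.9, 0.1]                          # evidence favoring state 0
print(bayes_posterior(prior, likelihood))        # [0.9, 0.1]
print(sticky_posterior(prior, likelihood, 0.5))  # [0.7, 0.3]: pulled toward prior
```

The example shows why the stickiness parameter is easily elicited: comparing the observed posterior (0.7) with the prior (0.5) and the Bayesian posterior (0.9) pins it down.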
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
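One way to make the threshold rule concrete is a relative-likelihood cutoff: keep the priors whose probability of the observation is within a chosen fraction of the best-performing prior, then apply Bayes' rule to each survivor. This specific rule is my illustrative reading of the model, with made-up numbers:

```python
# Threshold updating of a set of priors (illustrative version):
# threshold = 0 keeps every prior (generalized Bayesian updating);
# threshold = 1 keeps only maximum-likelihood priors.
def update_prior_set(priors, likelihood, threshold):
    def prob_of_obs(prior):
        return sum(p * l for p, l in zip(prior, likelihood))
    best = max(prob_of_obs(p) for p in priors)
    plausible = [p for p in priors if prob_of_obs(p) >= threshold * best]
    updated = []
    for prior in plausible:                  # Bayes' rule on each survivor
        joint = [p * l for p, l in zip(prior, likelihood)]
        z = sum(joint)
        updated.append([j / z for j in joint])
    return updated

priors = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
likelihood = [0.8, 0.2]                      # observation favoring state 0
print(len(update_prior_set(priors, likelihood, 0.0)))  # 3: all priors kept
print(len(update_prior_set(priors, likelihood, 1.0)))  # 1: only the ML prior
```

Intermediate thresholds interpolate between the two named special cases, which is the generality the chapter emphasizes.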
Abstract:
This thesis addresses the relationship between violence, suffering, photography, and memory, drawing on news coverage of violence in the city of Rio de Janeiro and on the participation of relatives of victims of reported cases in movements against violence. To understand this universe, I describe and analyze the textual, visual, and emotional discourses of victims' relatives and also of photojournalists. According to Luc Boltanski, news reports of violence are a form of denunciation and of converting individual cases into collective causes. These reports are taken as a first record of the violence, one that becomes a site of memory for these events in the city. From the news reports and the stories told by interviewees, small frames of memory were constructed that recount the process lived by the relatives after the violence. This process, set in motion by an original act of violence, turns over time into individual and collective struggles. Time becomes an agent that works on relationships, emotions, and memory. It transforms the meanings of the violent experience and builds both the identity of the victim's relative and the relations among relatives, shaping emotional communities. These communities support the relatives in their emotional and social recovery and in the struggle to win the right to justice. In the face of violent death, these struggles mediate the emergence of new violence and the arrival of new victims' relatives amid individual and collective memories.
Abstract:
This study seeks to reconstruct the meaning and scope of the ne bis in idem principle by examining the reciprocal interference between criminal law and administrative sanctioning law, with emphasis on the normative overlap between these manifestations of the State's ius puniendi, its consequences, and the risks it poses to human liberty, specifically with regard to the prohibition of double or multiple punishment embodied in the principle under discussion. The text rests on three pillars: the first part addresses the more universal aspects of the ne bis in idem principle, tracing both its historical development and its international recognition; the second part examines the substantial European legal experience, analyzing the theoretical and practical landmarks related to the subject; finally, the third part reaches the core of the investigation, offering a theoretical treatment of the ne bis in idem principle in order to renew its interpretation at the national level, recalibrating the convergences between criminal law and administrative sanctioning law, the unity of the State's repressive (re)action, and the possibilities of confronting the dysfunctions of this principle in Brazilian law. It is shown that the accumulation of punitive sanctions, whether criminal and/or administrative, is barred by the protective scope of the ne bis in idem principle whenever identity of subject, facts, and grounds is present. The study further seeks to clarify whether, in situations of punitive excess with similar or converging purposes, the criminal law should always prevail. It also aims to formulate proposals for regulating conflicts in cases of normative overlap between criminal law and administrative sanctioning law. Finally, it makes clear that the main objective of the investigation is a full understanding of the ne bis in idem principle, reflecting on the illegitimacy of accumulating criminal and administrative sanctions merely on the grounds that the rules of independence between jurisdictional competence and the administration's sanctioning authority cannot be decoupled, or on the basis of supposed ontological indifference between criminal and administrative wrongs.
Abstract:
Consensus, compromise and cooperation. That was how more than 100 fishers reached agreement on how they would manage their own fishery in a small reservoir in northeastern Brazil. The long hard road that led to the agreement, the final congress in which fishers made minor history, and the lessons that others may draw from the experience are described in this article. The fishers agreed on a nonfishing period, protected areas, and a seasonal ban on certain nets in the face of a government department that told them the measures were non-binding and essentially illegal.
Abstract:
Aquatic agricultural systems (AAS) are diverse production and livelihood systems where families cultivate a range of crops, raise livestock, farm or catch fish, gather fruits and other tree crops, and harness natural resources such as timber, reeds, and wildlife. Aquatic agricultural systems occur along freshwater floodplains, coastal deltas, and inshore marine waters, and are characterized by dependence on seasonal changes in productivity, driven by seasonal variation in rainfall, river flow, and/or coastal and marine processes. Despite this natural productivity, the farming, fishing, and herding communities who live in these systems are among the poorest and most vulnerable in their countries and regions. This report provides an overview of the scale and scope of development challenges in coastal aquatic agricultural systems, their significance for poor and vulnerable communities, and the opportunities for partnership and investment that support efforts of these communities to secure resilient livelihoods in the face of multiple risks.
Abstract:
In this report we have attempted to evaluate the ecological and economic consequences of hypoxia in the northern Gulf of Mexico. Although our initial approach was to rely on published accounts, we quickly realized that the body of published literature dealing with hypoxia was limited, and we would have to conduct our own exploratory analysis of existing Gulf data, or rely on published accounts from other systems to infer possible or potential effects of hypoxia. For the economic analysis, we developed a conceptual model of how hypoxia-related impacts could affect fisheries. Our model included both supply and demand components. The supply model had two components: (1) a physical production function for fish or shrimp, and (2) the cost of fishing. If hypoxia causes the cost of a unit of fishing effort to change, then this will result in a shift in supply. The demand model considered how hypoxia might affect the quality of landed fish or shrimp. In particular, the market value per pound is lower for small shrimp than for large shrimp. Given the limitations of the ecological assessment, the shallow continental shelf area affected by hypoxia does show signs of hypoxia-related stress. While current ecological conditions are a response to a variety of stressors, the effects of hypoxia are most obvious in the benthos that experience mortality, elimination of larger long-lived species, and a shifting of productivity to nonhypoxic periods (energy pulsing). What is not known is whether hypoxia leads to higher productivity during productive periods, or simply to a reduction of productivity during oxygen-stressed periods. The economic assessment based on fisheries data, however, failed to detect effects attributable to hypoxia. Overall, fisheries landings statistics for at least the last few decades have been relatively constant. The failure to identify clear hypoxic effects in the fisheries statistics does not necessarily mean that they are absent.
There are several possibilities: (1) hypoxic effects are small relative to the overall variability in the data sets evaluated; (2) the data and the power of the analyses are not adequate; and (3) currently there are no hypoxic effects on fisheries. Lack of identified hypoxic effects in available fisheries data does not imply that effects would not occur should conditions worsen. Experience with other hypoxic zones around the globe shows that both ecological and fisheries effects become progressively more severe as hypoxia increases. Several large systems around the globe have suffered serious ecological and economic consequences from seasonal summertime hypoxia; most notable are the Kattegat and Black Sea. The consequences range from localized loss of catch and recruitment failure to complete system-wide loss of fishery species. If experiences in other systems are applicable to the Gulf of Mexico, then in the face of worsening hypoxic conditions, at some point fisheries and other species will decline, perhaps precipitously.
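The supply-side channel in the report's conceptual model can be illustrated with a toy linear market: if hypoxia raises the cost of a unit of fishing effort, the supply curve shifts up and the equilibrium quantity falls while price rises. The linear curves and every parameter value below are made up for illustration:

```python
# Toy supply/demand equilibrium: hypoxia-driven cost increases shift supply.
# Demand: p = demand_intercept - demand_slope * q
# Supply: p = supply_intercept + supply_slope * q  (intercept rises with cost)
def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    q = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    p = demand_intercept - demand_slope * q
    return q, p

q0, p0 = equilibrium(10.0, 1.0, 2.0, 1.0)  # baseline costs
q1, p1 = equilibrium(10.0, 1.0, 3.0, 1.0)  # hypoxia raises unit effort cost
print((q0, p0), (q1, p1))                  # quantity falls, price rises
```

The report's demand channel (smaller shrimp fetching a lower price per pound) would enter as a downward shift of the demand curve instead.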
Abstract:
We present a gradient-based motion capture system that robustly tracks a human hand, based on abstracted visual information - silhouettes. Despite the ambiguity in the visual data and despite the vulnerability of gradient-based methods in the face of such ambiguity, we minimise problems related to misfit by using a model of the hand's physiology, which is entirely non-visual, subject-invariant, and assumed to be known a priori. By modelling seven distinct aspects of the hand's physiology we derive prior densities which are incorporated into the tracking system within a Bayesian framework. We demonstrate how the posterior is formed, and how our formulation leads to the extraction of the maximum a posteriori estimate using a gradient-based search. Our results demonstrate an enormous improvement in tracking precision and reliability, while also achieving near real-time performance. © 2009 IEEE.
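The estimation scheme in this abstract, combining an ambiguous visual likelihood with a non-visual physiological prior and extracting the MAP estimate by gradient search, can be sketched in one dimension. Gaussians stand in for both terms, and every parameter below is a toy value, not anything from the paper:

```python
# 1-D sketch of MAP estimation by gradient ascent on the log posterior:
#   log p(x | obs) = log N(obs | x, obs_sigma) + log N(x | prior_mu, prior_sigma) + const
# The prior pulls the estimate away from a noisy/ambiguous observation.
def map_estimate(obs, obs_sigma, prior_mu, prior_sigma, x0=0.0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        # Gradient of the log posterior with respect to x:
        grad = (obs - x) / obs_sigma**2 + (prior_mu - x) / prior_sigma**2
        x += lr * grad
    return x

# Ambiguous observation at 2.0, physiological prior centered at 0.0,
# equal precisions: the MAP estimate is the halfway compromise.
est = map_estimate(obs=2.0, obs_sigma=1.0, prior_mu=0.0, prior_sigma=1.0)
print(round(est, 3))   # ≈ 1.0, the precision-weighted compromise
```

In the paper the state is a high-dimensional hand pose and the likelihood comes from silhouettes, but the structure of the search is the same.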
Abstract:
In the face of increasing demand and limited emission reduction opportunities, the steel industry will have to look beyond its process emissions to bear its share of emission reduction targets. One option is to improve material efficiency - reducing the amount of metal required to meet services. In this context, the purpose of this paper is to explore why opportunities to improve material efficiency through upstream measures such as yield improvement and lightweighting might remain underexploited by industry. Established input-output techniques are applied to the GTAP 7 multi-regional input-output model to quantify the incentives for companies in key steel-using sectors (such as property developers and automotive companies) to seek opportunities to improve material efficiency in their upstream supply chains under different short-run carbon price scenarios. Because of the underlying assumptions, the incentives are interpreted as overestimates. The principal result of the paper is that these generous estimates of the incentives for material efficiency caused by a carbon price are offset by the disincentives to material efficiency caused by labour taxes. Reliance on a carbon price alone to deliver material efficiency would therefore be misguided and additional policy interventions to support material efficiency should be considered. © 2013 Elsevier B.V.
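The input-output machinery the paper builds on has a compact core: with a technical-coefficient matrix A and final demand d, total output x solves x = Ax + d, i.e. x = (I − A)⁻¹ d, so a change in downstream steel intensity propagates to upstream steel output. The two-sector A and demand vector below are invented for illustration and bear no relation to GTAP 7 data:

```python
# Leontief input-output sketch: solve (I - A) x = d for total output x.
# A small Gauss-Jordan elimination keeps the example dependency-free.
def leontief_output(A, d):
    n = len(d)
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] + [d[i]]
         for i in range(n)]
    for col in range(n):
        pivot = M[col][col]
        M[col] = [v / pivot for v in M[col]]
        for row in range(n):
            if row != col:
                factor = M[row][col]
                M[row] = [rv - factor * cv for rv, cv in zip(M[row], M[col])]
    return [M[i][n] for i in range(n)]

# Hypothetical 2-sector economy: steel and construction.
A = [[0.1, 0.3],   # steel input per unit of steel / construction output
     [0.0, 0.1]]
x = leontief_output(A, d=[10.0, 50.0])
# Lightweighting: construction uses 20% less steel per unit of output.
x_light = leontief_output([[0.1, 0.24], [0.0, 0.1]], d=[10.0, 50.0])
print(x, x_light)   # steel output falls when downstream intensity falls
```

Tracing such intensity changes through a multi-regional table, with carbon prices and labour taxes attached to the flows, is the quantification the paper performs.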
Abstract:
Background: Bradykinesia is a cardinal feature of Parkinson's disease (PD). Despite its disabling impact, the precise cause of this symptom remains elusive. Recent thinking suggests that bradykinesia may be more than simply a manifestation of motor slowness, and may in part reflect a specific deficit in the operation of motivational vigour in the striatum. In this paper we test the hypothesis that movement time in PD can be modulated by the specific nature of the motivational salience of possible action-outcomes. Methodology/Principal Findings: We developed a novel movement time paradigm involving winnable rewards and avoidable painful electrical stimuli. The faster the subjects performed an action the more likely they were to win money (in appetitive blocks) or to avoid a painful shock (in aversive blocks). We compared PD patients when OFF dopaminergic medication with controls. Our key finding is that PD patients OFF dopaminergic medication move faster to avoid aversive outcomes (painful electric shocks) than to reap rewarding outcomes (winning money) and, unlike controls, do not speed up in the current trial having failed to win money in the previous one. We also demonstrate that sensitivity to distracting stimuli is valence specific. Conclusions/Significance: We suggest this pattern of results can be explained in terms of low dopamine levels in the Parkinsonian state leading to an insensitivity to appetitive outcomes, and thus an inability to modulate movement speed in the face of rewards. By comparison, sensitivity to aversive stimuli is relatively spared. Our findings point to a rarely described property of bradykinesia in PD, namely its selective regulation by everyday outcomes. © 2012 Shiner et al.
Abstract:
People are alarmingly susceptible to manipulations that change both their expectations and experience of the value of goods. Recent studies in behavioral economics suggest such variability reflects more than mere caprice. People commonly judge options and prices in relative terms, rather than absolutely, and display strong sensitivity to exemplar and price anchors. We propose that these findings elucidate important principles about reward processing in the brain. In particular, relative valuation may be a natural consequence of adaptive coding of neuronal firing to optimise sensitivity across large ranges of value. Furthermore, the initial apparent arbitrariness of value may reflect the brain's attempts to optimally integrate diverse sources of value-relevant information in the face of perceived uncertainty. Recent findings in neuroscience support both accounts, and implicate regions in the orbitofrontal cortex, striatum, and ventromedial prefrontal cortex in the construction of value.