950 results for Non-Linear Analysis
Abstract:
It is well known that meteorological conditions influence human comfort and health. Southern European countries, including Portugal, show the highest mortality rates during winter, but the effects of extreme cold temperatures in Portugal have never been estimated. The objective of this study was to estimate the effect of extreme cold temperatures on the risk of death in Lisbon and Oporto, aiming at the production of scientific evidence for the development of a real-time health warning system. Poisson regression models combined with distributed lag non-linear models were applied to assess the exposure-response relation and lag patterns of the association between minimum temperature and all-cause mortality, and between minimum temperature and mortality from circulatory and respiratory system diseases, from 1992 to 2012, stratified by age, for the period from November to March. The analysis was adjusted for overdispersion and population size and for the confounding effect of influenza epidemics, and controlled for long-term trend, seasonality and day of the week. Results showed that the effect of cold temperatures on mortality was not immediate, presenting a 1–2-day delay, reaching a maximum increased risk of death after 6–7 days and lasting up to 20–28 days. The overall effect was generally higher and more persistent in Lisbon than in Oporto, particularly for circulatory and respiratory mortality and for the elderly. Exposure to cold temperatures is an important public health problem for a relevant part of the Portuguese population, particularly in Lisbon.
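The modelling idea described here, lagged exposures entering a Poisson regression, can be illustrated in miniature. The sketch below builds a design matrix of lagged temperatures and fits a Poisson regression by plain IRLS on simulated counts; it is a toy illustration, not the study's DLNM (which uses spline bases for the exposure-lag surface and adjusts for confounders), and all names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag_matrix(x, max_lag):
    """Columns are x lagged by 0..max_lag; rows start at t = max_lag."""
    n = len(x)
    return np.column_stack([x[max_lag - l:n - l] for l in range(max_lag + 1)])

def poisson_irls(X, y, iters=50):
    """Poisson regression with log link, fitted by IRLS (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        XtW = X.T * mu                   # IRLS weights W = mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# simulate daily minimum temperature and mortality counts whose
# dependence on temperature is mostly delayed (lags 1 and 2)
n, max_lag = 4000, 2
temp = rng.normal(size=n)
L = lag_matrix(temp, max_lag)                     # lags 0, 1, 2
true_beta = np.array([0.5, 0.0, -0.1, -0.2])      # intercept + lag effects
y = rng.poisson(np.exp(np.column_stack([np.ones(len(L)), L]) @ true_beta))

beta_hat = poisson_irls(L, y)
```

With this much simulated data the lag coefficients are recovered closely, showing how a delayed cold effect appears as near-zero weight at lag 0 and negative weights at later lags.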
Abstract:
Maximum entropy spectral analyses and a fitting test to find the best-fitting curve for the modified time series, based on the non-linear least squares method, were performed on Td (diatom temperature) values for the Quaternary portion of DSDP Sites 579 and 580 in the western North Pacific. The sampling interval averages 13.7 kyr in the Brunhes Chron (0-780 ka) and 16.5 kyr in the later portion of the Matuyama Chron (780-1800 ka) at Site 580, but increases to 17.3 kyr and 23.2 kyr, respectively, at Site 579. Among the dominant cycles during the Brunhes Chron, those of 411.5 kyr and 126.0 kyr at Site 579 and of 467.0 kyr and 136.7 kyr at Site 580 correspond to the 413-kyr and 95-124-kyr periods of the orbital eccentricity. Minor cycles of 41.2 kyr at Site 579 and 41.7 kyr at Site 580 are close to the 41-kyr period of the obliquity (tilt). During the Matuyama Chron at Site 580, cycles of 49.7 kyr and 43.6 kyr are dominant. The surface-water temperature estimated from diatoms at the western North Pacific DSDP Sites 579 and 580 thus correlates with the Earth's fundamental orbital parameters during Quaternary time.
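The task of picking out orbital-scale cycles from an evenly sampled series can be sketched with a plain FFT periodogram (not the maximum entropy estimator used in the study); the series below is synthetic, with an eccentricity-like 413-kyr cycle sampled at the 13.7-kyr Brunhes interval quoted above. Note the coarse frequency resolution of a 780-kyr record: the dominant peak lands on the nearest Fourier bin, near but not exactly at 413 kyr.

```python
import numpy as np

# synthetic Td-like series: 413-kyr cycle plus noise, sampled every 13.7 kyr
dt = 13.7                                   # sampling interval, kyr
t = np.arange(0.0, 780.0, dt)               # Brunhes Chron, 0-780 ka
x = np.sin(2 * np.pi * t / 413.0) + 0.2 * np.random.default_rng(0).normal(size=t.size)

x = x - x.mean()                            # remove the DC component
power = np.abs(np.fft.rfft(x)) ** 2         # periodogram
freqs = np.fft.rfftfreq(t.size, d=dt)       # cycles per kyr
dominant_period = 1.0 / freqs[1:][power[1:].argmax()]   # skip the zero-frequency bin
```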
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The compelling quality of the Global Change simulation study (Altemeyer, 2003), in which high RWA (right-wing authoritarianism)/high SDO (social dominance orientation) individuals produced poor outcomes for the planet, rests on the inference that the link between high RWA/SDO scores and disaster in the simulation can be generalized to real environmental and social situations. However, we argue that studies of the Person × Situation interaction are biased to overestimate the role of individual variability. When variables are operationalized, strongly normative items are excluded because they are skewed and kurtotic. This occurs both in the measurement of predictor constructs, such as RWA, and in outcome constructs, such as prejudice and war. Analyses based on normal linear statistics highlight personality variables such as RWA, which produce variance, and overlook the role of norms, which produce invariance. Where both normative and personality forces are operating, as in intergroup contexts, the linear analysis generates statistics for the sample that disproportionately reflect the behavior of the deviant, antinormative minority and direct attention away from the baseline, normative position. The implications of these findings for the link between high RWA and disaster are discussed.
Abstract:
Fundamental principles of precaution are legal maxims that ask for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information can contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: How can society manage potentially severe, irreversible or serious environmental outcomes when variability, uncertainty, and limited causal knowledge characterize the decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to causal reasoning and decision-making under risk, inherent to precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives-defined as a choice that makes preferred consequences more likely-requires accounting for costs, benefits and the change in risks associated with each candidate action.
Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of expected value of information (VOI), to show the relevance of new information relative to the initial (and smaller) set of data on which the decision was based. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults such as the linear, non-threshold models. This increase in the number of defaults is an important improvement because most of the variants of the precautionary principle require cost-benefit balancing. Specifically, increasing the set of causal defaults accounts for beneficial effects at very low doses. We also show and conclude that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
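The value-of-information principle invoked here can be made concrete with a two-action, two-state toy decision (all numbers invented for illustration): the expected value of perfect information (EVPI) is the gap between the expected payoff of acting after uncertainty is resolved and that of the best action under the prior alone.

```python
import numpy as np

# two states of nature with prior beliefs
prior = np.array([0.7, 0.3])            # P(hazard absent), P(hazard present)

# payoff of each action in each state (rows: actions, columns: states)
payoff = np.array([[10.0, -50.0],       # "no precaution": good if absent, costly if present
                   [ 2.0,   2.0]])      # "precaution": modest payoff either way

best_now = (payoff @ prior).max()                # best expected payoff with current info
with_info = (payoff.max(axis=0) * prior).sum()   # expected payoff with perfect information
evpi = with_info - best_now                      # expected value of perfect information
```

Here `best_now` is 2.0 (take the precaution), `with_info` is 7.6, so EVPI is 5.6: the most one should pay to resolve the uncertainty before acting. EVPI is always non-negative, which is why new information is always worth considering, if not always worth buying.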
Abstract:
Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode-so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0⟩ + β|1⟩ can be prepared deterministically.
Abstract:
Defining the pharmacokinetics of drugs in overdose is complicated. Deliberate self-poisoning is generally impulsive and associated with poor accuracy in dose history. In addition, early blood samples are rarely collected to characterize the whole plasma concentration-time profile, and the effect of decontamination on the pharmacokinetics is uncertain. The aim of this study was to explore a fully Bayesian methodology for population pharmacokinetic analysis of data that arose from deliberate self-poisoning with citalopram. Prior information on the pharmacokinetic parameters was elicited from 14 published studies on citalopram taken in therapeutic doses. The data set included concentration-time data from 53 patients studied after 63 citalopram overdose events (dose range: 20-1700 mg). Activated charcoal was administered between 0.5 and 4 h after 17 overdose events. The clinical investigator graded the veracity of the patients' dosing history on a 5-point ordinal scale. Inclusion of informative priors stabilised the pharmacokinetic model, and the population mean values could be estimated well. There were no indications of non-linear clearance after excessive doses. The final model included an estimated uncertainty of the dose amount, which in a simulation study was shown not to affect the model's ability to characterise the effects of activated charcoal. The effect of activated charcoal on clearance and bioavailability was pronounced and resulted in a 72% increase and a 22% decrease, respectively. These findings suggest charcoal administration is potentially beneficial after citalopram overdose. The methodology seems promising for exploring the dose-exposure relationship in toxicological settings.
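A structural model of the kind typically underlying such a population analysis is the one-compartment model with first-order absorption; the sketch below uses that standard model with invented parameter values (not estimates from the study) and applies the reported charcoal effects (72% higher clearance, 22% lower bioavailability) to the analytic exposure AUC = F·D/CL.

```python
import numpy as np

def conc_oral_1cmt(t, dose, F, ka, CL, V):
    """One-compartment concentration-time curve, first-order absorption (assumes ka != CL/V)."""
    k = CL / V                                        # elimination rate constant
    return F * dose * ka / (V * (ka - k)) * (np.exp(-k * t) - np.exp(-ka * t))

# illustrative parameter values only
dose, F, ka, CL, V = 400.0, 0.8, 1.0, 20.0, 1000.0    # mg, -, 1/h, L/h, L

c_2h = conc_oral_1cmt(2.0, dose, F, ka, CL, V)        # concentration 2 h post-dose

# exposure with and without charcoal, per the reported effect sizes
auc_no_charcoal = F * dose / CL
auc_charcoal = (F * 0.78) * dose / (CL * 1.72)        # 22% lower F, 72% higher CL
```

Under these assumptions charcoal cuts the AUC to roughly 45% of its original value, which is the quantitative sense in which the abstract calls charcoal "potentially beneficial".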
Abstract:
In recent years, the cross-entropy method has been successfully applied to a wide range of discrete optimization tasks. In this paper we consider the cross-entropy method in the context of continuous optimization. We demonstrate the effectiveness of the cross-entropy method for solving difficult continuous multi-extremal optimization problems, including those with non-linear constraints.
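A minimal sketch of the cross-entropy method for continuous optimization, assuming a Gaussian sampling family: sample candidates, keep an elite fraction, and refit the mean and standard deviation to the elite. The objective and all parameter values below are illustrative; non-linear constraints could be handled, for example, by adding a penalty term to `f`.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_minimize(f, mu, sigma, n_samples=100, elite_frac=0.1, iters=100):
    """Cross-entropy minimization of f with a diagonal Gaussian sampling distribution."""
    mu = np.array(mu, float)
    sigma = np.array(sigma, float)
    n_elite = max(1, int(n_samples * elite_frac))
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(n_samples, len(mu)))   # sample candidates
        scores = np.array([f(x) for x in X])
        elite = X[np.argsort(scores)[:n_elite]]                # best-scoring fraction
        mu = elite.mean(axis=0)                                # refit the distribution
        sigma = elite.std(axis=0) + 1e-12
    return mu

# toy objective with known minimizer (1, 2)
best = cross_entropy_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2,
                              mu=[0.0, 0.0], sigma=[2.0, 2.0])
```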
Abstract:
We present the first experimental observation of several bifurcations in a controllable non-linear Hamiltonian system. Dynamics of cold atoms are used to test predictions of non-linear, non-dissipative Hamiltonian dynamics.
Abstract:
This thesis concerns the development and validation of new criteria for the multiaxial fatigue assessment of metallic structural components. In particular, the new criteria apply to metallic components subjected to a wide range of loading configurations: multiaxial loads varying in time, both cyclically and randomly, for high and low/medium numbers of loading cycles. These criteria are a useful tool for evaluating the fatigue strength/life of metallic structural elements, being simple to implement and requiring rather modest computation times. The first Chapter presents the issues of multiaxial fatigue, introducing theoretical aspects useful for describing the fatigue damage mechanism (crack propagation and final fracture) of metallic structural components under time-varying loads. The approaches available in the literature for the multiaxial fatigue assessment of such components are then presented, with particular attention to the critical plane approach. Finally, the engineering quantities related to the critical plane are defined, as used in fatigue design under multiaxial cyclic loads for high and low/medium numbers of loading cycles. The second Chapter is devoted to the development of a new criterion for evaluating the fatigue strength of metallic structural elements subjected to multiaxial cyclic loads and a high number of cycles. The criterion is based on the critical plane approach and is formulated in terms of stresses. Its development substantially revises an earlier formulation proposed by Carpinteri and co-workers in 2011.
The first modification concerns the determination of the orientation of the critical plane: new expressions for the angle relating the orientation of the critical plane to that of the fracture plane are implemented in the criterion's algorithm. The second modification concerns the definition of the shear stress amplitude: a new method, known as the Prismatic Hull (PH) method (by Araújo and co-workers), is implemented in the algorithm. The reliability of the criterion is then verified against numerous experimental data available in the literature. The third Chapter proposes a newly formulated criterion for evaluating the fatigue life of metallic structural elements subjected to multiaxial cyclic loads and a low/medium number of cycles. The criterion is based on the critical plane approach and is formulated in terms of strains. In particular, the proposed formulation takes its general structure from the high-cycle multiaxial fatigue criterion discussed in the second Chapter. Since significant plastic strains (such as those characterizing low/medium-cycle fatigue) require knowledge of the material's effective Poisson ratio, three different strategies are employed: the coefficient is computed analytically, computed numerically, and taken as a constant value frequently adopted in the literature. Numerous experimental data available in the literature are then used to validate the criterion's reliability, with numerical results obtained for varying values of the effective Poisson ratio. Furthermore, in order to account for the significant stress gradients that occur near geometrical discontinuities, such as notches, the criterion is also extended to notched structural components.
The criterion, reformulated by implementing the control volume concept proposed by Lazzarin and co-workers, is used to estimate the fatigue life of specimens with a severe V-notch, made of grade 5 titanium alloy. The fourth Chapter addresses the development of a new criterion for evaluating the fatigue damage of metallic structural elements subjected to multiaxial random loads and a high number of cycles. The criterion is based on the critical plane approach and is formulated in the frequency domain. Its development substantially revises an earlier formulation proposed by Carpinteri and co-workers in 2014. In particular, the modification concerns the determination of the orientation of the critical plane, and new expressions for the angle relating the orientation of the critical plane to that of the fracture plane are implemented in the criterion's algorithm. Finally, the reliability of the criterion is verified against numerous experimental data available in the literature.
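The critical plane quantities on which these criteria rest all start from resolving the stress tensor on a candidate material plane into its normal and shear components. A minimal sketch of that resolution (illustrative values only, not the criteria themselves):

```python
import numpy as np

def plane_stresses(stress, normal):
    """Normal stress (scalar) and shear stress vector on the plane with the given normal."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)              # unit normal
    t = np.asarray(stress, float) @ n      # Cauchy traction vector on the plane
    sigma_n = t @ n                        # normal stress component
    tau = t - sigma_n * n                  # shear stress vector (in-plane part)
    return sigma_n, tau

# uniaxial tension of 200 MPa along x: maximum shear occurs on a 45-degree plane
S = np.diag([200.0, 0.0, 0.0])
sigma_n, tau = plane_stresses(S, [1.0, 1.0, 0.0])
```

On the 45-degree plane this gives a normal stress of 100 MPa and a shear stress magnitude of 100 MPa, the classical result; a critical plane search repeats this resolution over candidate orientations throughout the load history.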
Abstract:
This study reflects on questions of contemporary ethics in advertising aimed at the female audience. The discussion of these questions focuses on the deontological strand (conviction). The objective of the study is to investigate how advertisements published in the magazines Claudia and Nova articulate such ethical questions. Content analysis was therefore used to verify whether the advertisements followed the principles of the Código Brasileiro de Auto-Regulamentação Publicitária. In a second stage, discourse analysis was used to investigate how the advertisements were constructed with respect to ethics and to women in today's society. The study concluded that representations of deontological ethics in advertising aimed at women occur in a non-linear and fragmented way. The non-linearity refers to the failure of some of the analysed advertisements to comply with ethical principles. The fragmentation concerns the way women are portrayed and products are promoted in the advertisements, drawing on different standards of conduct (principles) and on diverse values: the advertisements sometimes present products truthfully and sometimes not, and women appear framed sometimes by contemporary values and sometimes by traditional ones.
Abstract:
Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping, for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.
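The heart of fitting a GTM by EM is the E-step: each data point receives responsibilities over a grid of latent points that are mapped into data space through an RBF basis. A minimal sketch with random (untrained) weights; all sizes and values are invented, and a real fit would follow this with an M-step updating the weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# grid of K latent points and M RBF basis centres in a 1-D latent space
K, M, D = 20, 5, 3
latent = np.linspace(-1.0, 1.0, K)
centres = np.linspace(-1.0, 1.0, M)
Phi = np.exp(-(latent[:, None] - centres[None, :]) ** 2 / (2 * 0.3 ** 2))  # (K, M)

W = rng.normal(size=(M, D))      # mapping weights (would normally be fitted by EM)
beta = 4.0                       # inverse noise variance
Y = Phi @ W                      # images of the latent grid: Gaussian centres in data space

data = rng.normal(size=(10, D))  # toy data points

# E-step: responsibility R[n, k] of latent point k for data point n
d2 = ((data[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)   # squared distances, (N, K)
log_p = -0.5 * beta * d2
log_p -= log_p.max(axis=1, keepdims=True)                    # stabilise the softmax
R = np.exp(log_p)
R /= R.sum(axis=1, keepdims=True)
```

Each row of `R` sums to one: every data point distributes its responsibility across the latent grid, which is also what makes posterior-mean visualization of the data in latent space possible.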
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified.
In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
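The notion of tracking macroscopic population statistics can be illustrated on the simplest additive problem (onemax), where the phenotype is just the bit sum. The toy GA below (tournament selection plus bitwise mutation; all parameters are invented) records one macroscopic, the population mean fitness, each generation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_mean_fitness_trajectory(n_pop=100, n_bits=50, n_gens=30, p_mut=0.01):
    """Run a toy GA on onemax and return the mean-fitness macroscopic per generation."""
    pop = rng.integers(0, 2, size=(n_pop, n_bits))
    means = []
    for _ in range(n_gens):
        fit = pop.sum(axis=1)                     # additive phenotype (onemax)
        means.append(fit.mean())
        # size-2 tournament selection
        i, j = rng.integers(0, n_pop, size=(2, n_pop))
        winners = np.where((fit[i] >= fit[j])[:, None], pop[i], pop[j])
        # bitwise mutation
        flips = rng.random(winners.shape) < p_mut
        pop = winners ^ flips
    return means

trajectory = ga_mean_fitness_trajectory()
```

Averaging such trajectories over many runs gives exactly the kind of macroscopic curve the statistical mechanics formalism aims to predict without simulating any individual run.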
Abstract:
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different to that, the aim of maintaining an `interpretable' structure, suitable for visualizing data, may come in conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
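The magnification factor described above can be approximated numerically: map a latent point through the RBF mapping y(x) = Wφ(x), form the Jacobian J by finite differences, and take √det(JᵀJ) as the local area magnification. A minimal sketch with random (untrained) weights; the sizes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_basis(x, centres, width):
    """phi(x): Gaussian RBF activations for a single latent point x."""
    d2 = ((centres - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def magnification(W, centres, width, x, eps=1e-5):
    """sqrt(det(J^T J)) for y(x) = W phi(x), with J estimated by central differences."""
    L = len(x)
    cols = []
    for i in range(L):
        dx = np.zeros(L)
        dx[i] = eps
        dphi = (rbf_basis(x + dx, centres, width)
                - rbf_basis(x - dx, centres, width)) / (2.0 * eps)
        cols.append(W @ dphi)
    J = np.column_stack(cols)              # (data dim, latent dim)
    return np.sqrt(np.linalg.det(J.T @ J))

# 2-D latent space, 9 RBF centres on a 3x3 grid, mapping into a 5-D data space
g = np.linspace(-1.0, 1.0, 3)
centres = np.array([[a, b] for a in g for b in g])
W = rng.normal(size=(5, centres.shape[0]))

m = magnification(W, centres, width=0.5, x=np.array([0.1, -0.2]))
```

Evaluating `magnification` over a grid of latent points yields the magnification map that, in the visualization setting, shows where the latent sheet is stretched or compressed in data space.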