866 results for Uncertainty and disturbance


Relevance:

80.00%

Publisher:

Abstract:

The energy transported by ocean waves (wave energy) is framed within the so-called ocean energies. Its use to generate electric energy (or to desalinate ocean water, etc.) is an idea first expressed in a patent more than two centuries ago (1799). Ever since, and especially since the 1970s, this energy has attracted the interest of R&D institutions and of companies in the energy and technology sectors, mainly because of the magnitude of the available resource. Nowadays the sector can be considered to be in a pre-commercial stage, with a wide range of devices and technologies at different degrees of development, none of which stands out over the others or has demonstrated its economic viability, and with no apparent tendency to converge towards a single device (or a reduced number of devices). The resource to be exploited shares its non-controllability with other renewable energy sources such as wind and solar, but wave energy presents an additional variability: different locations may offer resources of similar energy content yet waves of very different characteristics in terms of wave heights and periods, and in the statistical dispersion of these values. This variability makes it especially important that wave energy converters (WECs) fit closely the characteristics of their location in order to improve their economic viability. It seems reasonable to assume that, in the future, the process of designing a wave power plant will involve a re-design (based on a well-known technology) for each implementation project at a new location.
The objective of this PhD thesis is to propose a dimensioning method for a specific wave-energy-harnessing technology: point absorbers. The design methodology is formulated as a mathematical optimization problem, solved using a bio-inspired optimization algorithm: differential evolution. This approach allows the preliminary dimensioning stage to be automated by implementing the methodology in code. The design of a WEC is a complex engineering problem, so a complete design is not considered feasible through a single mathematical optimization procedure. Instead, the design process is organised in stages: the methodology developed in this thesis is used to obtain the basic dimensions of a reference WEC solution, which then serves as the starting point for the later stages of the design process. The preliminary dimensioning methodology starts from previously defined design boundary conditions, such as the location, the characteristics of the power take-off (PTO) system, the energy extraction strategy, and the specific WEC concept. A multi-objective differential evolution algorithm then produces a set of solutions that are feasible (according to certain technical and dimensional constraints) and optimal (according to a set of pseudo-cost and pseudo-benefit objective functions). This set of solutions, or WEC dimensions, is used as the reference case in the subsequent design stages.
The thesis presents two versions of this methodology, with two different models for evaluating candidate solutions. The first is a frequency-domain model that makes significant simplifications in the treatment of the wave resource. It implies a lower computational load but greater uncertainty in the results, which may translate into additional work in the later stages of the design process; it is nevertheless convenient for preliminary parametric analyses of the boundary conditions, such as the selected location. The second uses models in the stochastic domain, which increases the computational load but yields results with less uncertainty, along with statistical information that is very useful for the design process; it is therefore more suitable for a complete WEC dimensioning process. The methodology developed in this thesis has been used in an industrial project for the preliminary energy assessment of a wave power plant. In that assessment the preliminary dimensioning method was applied in a first stage to obtain a set of feasible solutions according to a set of basic technical constraints; the geometry of the proposed WEC solution was then selected and refined (by other project participants) using a detailed time-domain model and an economic evaluation model of the device. Using this methodology can help to reduce the number of manual design iterations and to improve the results obtained in the final stages of the project.
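To make the optimization framing concrete, here is a minimal sketch of a multi-objective differential evolution loop for sizing a point absorber. Everything specific in it is invented for illustration: the design variables (buoy radius and draft), the pseudo-cost and pseudo-benefit proxies, and the bounds; the thesis's actual hydrodynamic and PTO evaluation models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design variables: buoy radius and draft, in metres.
BOUNDS = np.array([[2.0, 10.0],   # radius (m)
                   [1.0, 8.0]])   # draft (m)

def objectives(x):
    """Toy pseudo-cost and pseudo-benefit, both to be minimised.
    These are invented proxies, not the thesis's evaluation models."""
    radius, draft = x
    cost = np.pi * radius ** 2 * draft            # displaced volume as a cost proxy
    benefit = radius ** 1.5 * np.tanh(draft / 3)  # toy energy-capture proxy
    return np.array([cost, -benefit])             # minimise cost, maximise benefit

def dominates(f, g):
    """Pareto dominance for minimisation."""
    return bool(np.all(f <= g) and np.any(f < g))

def de_generation(pop, F=0.7, CR=0.9):
    """One DE/rand/1/bin generation with dominance-based replacement."""
    n, d = pop.shape
    out = pop.copy()
    for i in range(n):
        idx = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        a, b, c = pop[idx]
        mutant = np.clip(a + F * (b - c), BOUNDS[:, 0], BOUNDS[:, 1])
        mask = rng.random(d) < CR
        mask[rng.integers(d)] = True   # guarantee at least one gene crosses over
        trial = np.where(mask, mutant, pop[i])
        if dominates(objectives(trial), objectives(pop[i])):
            out[i] = trial
    return out

pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(30, 2))
for _ in range(100):
    pop = de_generation(pop)

# The surviving non-dominated set plays the role of the reference dimensions.
objs = np.array([objectives(x) for x in pop])
front = [pop[i] for i in range(len(pop))
         if not any(dominates(objs[j], objs[i]) for j in range(len(pop)) if j != i)]
print(f"{len(front)} non-dominated candidate designs")
```

The dominance-based replacement rule (a trial vector replaces its parent only if it Pareto-dominates it) is one simple way to adapt differential evolution to multiple objectives; the algorithm used in the thesis may differ in its selection and diversity-preservation details.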

Relevance:

80.00%

Publisher:

Abstract:

Root elongation, hematoxylin staining, and changes in the ultrastructure of root-tip cells of an Al-tolerant maize variety (Zea mays L. C 525 M) exposed to nutrient solutions with 20 μM Al (2.1 μM Al3+ activity) for 0, 4, and 24 h were investigated in relation to the subcellular distribution of Al, using scanning transmission electron microscopy and energy-dispersive x-ray microanalysis on samples fixed by different methods. Inhibition of root-elongation rates, hematoxylin staining, cell wall thickening, and disturbance of the distribution of pyroantimoniate-stainable cations, mainly Ca, were observed only after 4 h and not after 24 h of exposure to Al. The occurrence of these transient toxic Al effects on root elongation and in cell walls was accompanied by the presence of solid Al-P deposits in the walls. Whereas no Al was detectable in cell walls after 24 h, an increase in vacuolar Al was observed after 4 h of exposure. After 24 h, a greater amount of electron-dense deposits containing Al and P or Si was observed in the vacuoles. These results indicate that in this tropical maize variety, tolerance mechanisms that cause a change in apoplastic Al must be active. Our data support the hypothesis that in Al-tolerant plants Al can rapidly cross the plasma membrane; these data clearly contradict earlier conclusions that Al mainly accumulates in the apoplast and enters the symplast only after severe cell damage has occurred.

Relevance:

80.00%

Publisher:

Abstract:

Master's thesis, Clinical Nutrition, Faculdade de Medicina, Universidade de Lisboa, 2014

Relevance:

80.00%

Publisher:

Abstract:

Final project of the Integrated Master's Degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014

Relevance:

80.00%

Publisher:

Abstract:

Current arrangements for multinational company taxation in the EU are plagued by severe conceptual and administrative problems, leading to high compliance costs, considerable uncertainty and ample room for abuse. Integration is amplifying these difficulties. There are two possible approaches to designing an efficient trans-border corporate tax system for the European Union. The first is to consolidate the EU-wide operations of multinational enterprises (MNEs), using an agreed common base as the reference variable, and then to apportion this total tax base using presumptive indicators of activity in each tax jurisdiction (hence, implicitly, of the likely benefits stemming from each location). The apportionment formula should respect the requirements of neutrality between productive factors and between forms of corporate financing. A radically different approach is also available that offers considerable advantages in terms of efficiency, simplicity and decentralisation, including full administrative autonomy for national tax authorities. It entails abandoning corporate income as the relevant tax base and taxing at a moderate rate some agreed measure of business activity such as company value added, sales or employment. These are the variables usually considered in formula apportionment, but they would apply directly, without first having to go through the complications of EU-wide consolidation based on a common-base definition. Reference to a broad base, with no exemptions or deductions, would make it possible to set low statutory rates.
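As a toy illustration of the first approach (consolidate, then apportion), the sketch below distributes an invented EU-wide base across three countries using equally weighted sales, employment and asset shares. All figures, weights and the tax rate are hypothetical, not taken from any actual proposal.

```python
# Hypothetical MNE with a consolidated EU-wide tax base of 100m EUR,
# apportioned by a weighted formula over sales, employment and assets.
consolidated_base = 100.0  # million EUR

# Invented per-country shares of each apportionment factor.
factors = {
    "sales":      {"DE": 0.50, "FR": 0.30, "IT": 0.20},
    "employment": {"DE": 0.40, "FR": 0.40, "IT": 0.20},
    "assets":     {"DE": 0.60, "FR": 0.25, "IT": 0.15},
}
weights = {"sales": 1 / 3, "employment": 1 / 3, "assets": 1 / 3}
rates = {"DE": 0.15, "FR": 0.15, "IT": 0.15}  # illustrative moderate rate

for country in ("DE", "FR", "IT"):
    share = sum(weights[f] * factors[f][country] for f in factors)
    base = consolidated_base * share
    print(f"{country}: base {base:.1f}m EUR, tax {base * rates[country]:.2f}m EUR")
```

The second approach described in the abstract would skip the consolidation step entirely and tax each country's own measure of value added, sales or employment directly at its chosen rate.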

Relevance:

80.00%

Publisher:

Abstract:

"The addresses ... were given at various times and upon various occasions ... between 1927 and 1936."

Relevance:

80.00%

Publisher:

Abstract:

In the early 20th century, authors increasingly experimented with literary techniques striving towards two common aims: to illumine the inner life of their protagonists and to diverge from conventional forms of literary representation of reality. This shared endeavour was sparked by changes in society: industrialisation, developments in psychology, and the gradual decay of empires such as the Victorian (1837–1901) and the Austro-Hungarian (1867–1918). These developments yielded a sense of uncertainty and disorientation, which led to a so-called "turn [inwards]" in the arts (Micale 2). In this context, this essay examines Virginia Woolf's (1882–1941) development of her literary technique by comparing To the Lighthouse (1927), written in free indirect discourse, with Arthur Schnitzler's (1862–1932) Fräulein Else (1924), written in interior monologue. Instead of applying Freud's theories of consciousness, I demonstrate how empiricist psychology informed and partly helped shape the two narrative techniques, referring to Ernst Mach's (1838–1916) idea of the unstable self and William James' (1842–1910) concept of the stream of consciousness. Furthermore, I show that there is a continuous progression of literary ideas from Schnitzler's Viennese fin-de-siècle, connected to impressionism, towards Woolf's Bloomsbury aesthetics, connected to Paul Cézanne's post-impressionist logic of sensations. In addition, I address how the women's movement, beginning at the end of the 19th century, inspired Woolf and Schnitzler to use their techniques as a means of revealing women's restricted position in society. Methodologically, I analyse the two novels' narrative techniques through close reading, and thereby point out their differences and similarities in connection with the above-mentioned theories as well as the two authors' literary approaches. I argue that this comparison demonstrates that modernist literary techniques for representing interiority evolved from interior monologue towards free indirect discourse. This progression also implies that modernism can be seen as a continuum reaching back to the fin-de-siècle and culminating in the 1920s.

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

80.00%

Publisher:

Abstract:

Fundamental principles of precaution are legal maxims that call for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information can contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: how can society manage potentially severe, irreversible or serious environmental outcomes when variability, uncertainty, and limited causal knowledge characterize decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to causal reasoning and decision-making under risk, inherent to precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives, defined as a choice that makes preferred consequences more likely, requires accounting for the costs, benefits and change in risks associated with each candidate action. Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of the expected value of information (VOI), which shows the relevance of new information relative to the initial (and smaller) set of data on which the decision was based. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults, such as the linear, non-threshold models. This increase in the number of defaults is an important improvement because most variants of the precautionary principle require cost-benefit balancing. Specifically, enlarging the set of default causal models accounts for beneficial effects at very low doses. We also show and conclude that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
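The value-of-information link can be made concrete with a toy two-action example. The states, prior probabilities and losses below are invented solely to show the mechanics of expected value of perfect information (EVPI) under a prior belief; they are not the BSE figures from the paper.

```python
# Toy Bayesian decision: choose "restrict" or "allow" for an activity whose
# harm probability p is uncertain. All states, priors and losses are invented.
p_states = [0.1, 0.5]           # possible harm probabilities
prior = {0.1: 0.7, 0.5: 0.3}    # prior belief over the states

loss = {                         # expected loss for each (action, state) pair
    ("restrict", 0.1): 10.0, ("restrict", 0.5): 10.0,        # fixed restriction cost
    ("allow", 0.1): 0.1 * 40.0, ("allow", 0.5): 0.5 * 40.0,  # p * damage
}

def expected_loss(action, belief):
    return sum(belief[p] * loss[(action, p)] for p in p_states)

# Best act-now decision under the prior:
actions = ["restrict", "allow"]
best_now = min(actions, key=lambda a: expected_loss(a, prior))
loss_now = expected_loss(best_now, prior)

# Expected loss under perfect information: choose the per-state best action.
loss_perfect = sum(prior[p] * min(loss[(a, p)] for a in actions) for p in p_states)

evpi = loss_now - loss_perfect   # upper bound on what further research is worth
print(f"best action now: {best_now}; expected loss {loss_now:.2f}; EVPI {evpi:.2f}")
```

A positive EVPI (3.0 in this toy case) is the decision-analytic expression of the precautionary logic: an interim choice can be rational now while further information retains value, and the same machinery re-ranks the actions once the belief is updated.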

Relevance:

80.00%

Publisher:

Abstract:

Objective: Recent data from Education Queensland have identified rising numbers of children receiving diagnoses of autistic spectrum disorder (ASD). Faced with funding and diagnostic pressures, in clinical situations that are complex and inherently uncertain, it is possible that specialists err on the side of a positive diagnosis. This study examines the extent to which overinclusion of ASD diagnoses may occur in the presence of uncertainty, and the factors potentially related to this practice in Queensland. Methods: Using an anonymous self-report survey, all Queensland child psychiatrists and paediatricians who see paediatric patients with developmental/behavioural problems were asked whether they had ever specified an ASD diagnosis in the presence of diagnostic uncertainty. Using logistic regression, responses to the diagnostic uncertainty questions were related to other clinical and practice-related characteristics. Results: Overall, 58% of surveyed psychiatrists and paediatricians indicated that, in the face of diagnostic uncertainty, they had erred on the side of providing an ASD diagnosis for educational ascertainment, and 36% of clinicians had provided an autism diagnosis for the Carer's Allowance when Centrelink diagnostic specifications had not been met. Conclusion: In the absence of definitive biological markers, ASD remains a behavioural diagnosis that is often complex and uncertain. In response to systems that demand a categorical diagnostic response, specialists are providing ASD diagnoses even when uncertain. The motivation for this practice appears to be a clinical risk/benefit analysis of what will achieve the best outcomes for children. These practices are likely to continue unless systems change eligibility for funding to be based on functional impairment rather than medical diagnostic categories.
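The abstract describes the regression only at a high level; a sketch of the general shape such an analysis might take follows, with entirely invented predictor names and simulated data standing in for the survey responses.

```python
# Hypothetical re-creation of the analysis: relate a yes/no report of
# "provided an ASD diagnosis despite uncertainty" to clinician characteristics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
years_practice = rng.uniform(1, 30, n)     # invented covariate
is_paediatrician = rng.integers(0, 2, n)   # invented covariate (vs psychiatrist)

# Invented data-generating process, for illustration only.
logit = -1.0 + 0.03 * years_practice + 0.8 * is_paediatrician
erred_inclusive = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([years_practice, is_paediatrician]))
model = sm.Logit(erred_inclusive.astype(float), X).fit(disp=False)
print(model.params)  # log-odds for intercept, years of practice, specialty
```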

Relevance:

80.00%

Publisher:

Abstract:

Objectives: To validate verbal autopsy (VA) procedures for use in sample vital registration. Verbal autopsy is an important method for deriving cause-specific mortality estimates where disease burdens are greatest and routine cause-specific mortality data do not exist. Methods: Verbal autopsies and medical records (MR) were collected for 3123 deaths in the perinatal/neonatal period, the post-neonatal under-5 age group, and ages 5 years and over in Tanzania. Causes of death were assigned by physician panels using the International Classification of Diseases, 10th revision. Validity was measured by cause-specific mortality fractions (CSMF), sensitivity, specificity and positive predictive value. Medical record diagnoses were scored for degree of uncertainty, and sensitivity and specificity were adjusted accordingly. Criteria for evaluating VA performance in generating true proportional mortality were applied. Results: Verbal autopsy produced accurate CSMFs for nine causes across the age groups: birth asphyxia; intrauterine complications; pneumonia; HIV/AIDS; malaria (adults); tuberculosis; cerebrovascular diseases; injuries; and direct maternal causes. Results for 20 other causes approached the threshold for good performance. Conclusions: Verbal autopsy reliably estimated CSMFs for diseases of public health importance in all age groups. Further validation is needed to assess the reasons for the lack of positive results for some conditions.
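The validity measures listed are standard and easy to make concrete. The sketch below computes them for a single cause of death from an invented VA-versus-medical-record cross-tabulation; the counts are hypothetical.

```python
# Invented 2x2 counts for one cause: VA diagnosis vs. medical record (MR).
tp, fp, fn, tn = 80, 30, 20, 870  # hypothetical

sensitivity = tp / (tp + fn)      # VA finds the cause when the MR has it
specificity = tn / (tn + fp)      # VA excludes it when the MR excludes it
ppv = tp / (tp + fp)              # VA positives that are truly positive

total_deaths = tp + fp + fn + tn
csmf_va = (tp + fp) / total_deaths  # proportional mortality according to VA
csmf_mr = (tp + fn) / total_deaths  # proportional mortality according to MR

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} CSMF(VA)={csmf_va:.3f} vs CSMF(MR)={csmf_mr:.3f}")
```

Note that the VA-based CSMF can match the reference CSMF even when sensitivity is modest, provided false positives roughly balance false negatives; this is one reason CSMF accuracy and sensitivity are assessed separately.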

Relevance:

80.00%

Publisher:

Abstract:

Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and to make predictions. Such simulators are often computationally demanding and are thus not directly amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator, accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Using the lower-dimensional set of active variables simplifies the subsequent emulation analysis. For random-output, or stochastic, simulators, the output dispersion, and thus the variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations, and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are handled efficiently in both the design and the model inference stages. Through a series of simulation experiments on both synthetic and real-world simulators, emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling, replicate-free designs in terms of both model parameter uncertainty and predictive variance.
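As a rough sketch of the heteroscedastic emulation idea, and not the thesis's specific representations, the code below uses replicated runs of a toy stochastic simulator to estimate per-input noise, smooths the log variance with one Gaussian process, and feeds the resulting noise levels into a second Gaussian process for the mean. Kernel choices and hyperparameters are fixed by hand rather than learned.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.3, var=1.0):
    """Squared-exponential kernel between 1-D input vectors a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_train, y_train, x_test, noise_var):
    """Standard GP regression mean with per-point noise variances."""
    K = rbf(x_train, x_train) + np.diag(noise_var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf(x_train, x_test).T @ alpha

# Toy stochastic simulator: mean sin(2*pi*x), noise sd growing with x.
x = np.repeat(np.linspace(0, 1, 12), 10)          # 12 design points, 10 replicates
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05 + 0.4 * x)

# Replicates let each design point be summarised by a sample mean/variance.
xu = np.unique(x)
y_mean = np.array([y[x == xi].mean() for xi in xu])
log_s2 = np.array([np.log(y[x == xi].var(ddof=1)) for xi in xu])

# GP 1: smooth the log noise variance over the input space.
x_new = np.linspace(0, 1, 5)
log_s2_pred = gp_predict(xu, log_s2 - log_s2.mean(), x_new,
                         0.1 * np.ones_like(xu)) + log_s2.mean()

# GP 2: predict the simulator mean, using the estimated noise levels
# (variance of a sample mean of 10 replicates is s2 / 10).
mean_pred = gp_predict(xu, y_mean, x_new, np.exp(log_s2) / 10)
print(np.c_[x_new, mean_pred, np.exp(log_s2_pred)])
```

Modelling the log variance keeps the predicted noise positive, and averaging replicates before inference is what makes replicated designs cheap to handle, which is the property the thesis exploits in its design criterion.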

Relevance:

80.00%

Publisher:

Abstract:

Mathematical methods in systematic conservation planning (SCP) represent a significant step toward cost-effective, transparent allocation of resources for biodiversity conservation. However, research demonstrates important consequences of uncertainties in SCP. Current research often relies on simplified case studies with unknown forms and amounts of uncertainty and low statistical power for generalizing results. Consequently, conservation managers have little evidence for the true performance of conservation planning methods in their own complex, uncertain applications. SCP needs to build evidence for predictive models of error and robustness to multiple, simultaneous uncertainties across a wide range of problems of known complexity. Only then can we determine true performance rather than how a method appears to perform on data with unknown uncertainty.

Relevance:

80.00%

Publisher:

Abstract:

In recent years there has been a great effort to combine the technologies and techniques of GIS and process models. This project examines the issues of linking a standard current-generation 2½D GIS with several existing model codes. The focus of the project has been the Shropshire Groundwater Scheme, which is being developed to augment flow in the River Severn during drought periods by pumping water from the Shropshire Aquifer. Previous authors have demonstrated that under certain circumstances pumping could reduce the soil moisture available for crops. This project follows earlier work at Aston in which the effects of drawdown were delineated and quantified through the development of a software package implementing a technique that brought together the significant spatially varying parameters. This technique is repeated here, but using a standard GIS called GRASS. The GIS proved adequate for the task, and the added functionality provided by the general-purpose GIS, namely its data capture, manipulation and visualisation facilities, was of great benefit. The bulk of the project is concerned with the issues of linking a GIS to environmental process models. To this end, a groundwater model (Modflow) and a soil moisture model (SWMS2D) were linked to the GIS, and a crop model was implemented within the GIS. A loose-linked approach was adopted, and secondary and surrogate data were used wherever possible. The issues examined include: the justification of a loose-linked versus a closely integrated approach; how, technically, to achieve the linkage; how to reconcile the different data models used by the GIS and the process models; control of the movement of data between models of environmental subsystems in order to model the total system; the advantages and disadvantages of using a current-generation GIS as a medium for linking environmental process models; the generation of input data, including the use of geostatistics, stochastic simulation, remote sensing, regression equations and mapped data; issues of accuracy and uncertainty, and of simply providing adequate data for the complex models; and how such a modelling system fits into an organisational framework.
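To make the loose-linked pattern concrete, the sketch below shows a generic file-based hand-off between a GIS and an external process model. The file formats are placeholders and the external model is simulated by a stand-in function rather than a real GRASS, Modflow or SWMS2D interface, none of whose actual commands are reproduced here.

```python
# Minimal loose-coupling pattern: the GIS and the process model exchange
# data through files rather than sharing memory or a common code base.
import numpy as np

def export_grid(grid, path):
    """GIS side: write a model-input layer as plain text (format illustrative)."""
    np.savetxt(path, grid, fmt="%.5f")

def import_grid(path):
    """GIS side: read the process model's output back for analysis/display."""
    return np.loadtxt(path)

def run_external_model(in_path, out_path):
    """Stand-in for invoking the unmodified process model as a separate
    program (in practice, e.g., a subprocess call to the model executable).
    Here it applies a dummy transformation so the sketch actually runs."""
    recharge = np.loadtxt(in_path)
    heads = 100.0 + 50.0 * recharge   # dummy 'groundwater response'
    np.savetxt(out_path, heads, fmt="%.3f")

# 1. Derive an input layer in the GIS (here a uniform dummy recharge grid).
export_grid(np.full((50, 50), 0.002), "recharge.txt")

# 2. Hand off to the external model via files, leaving its code untouched.
run_external_model("recharge.txt", "heads.txt")

# 3. Pull the results back into the GIS for visualisation and further analysis.
heads = import_grid("heads.txt")
print("mean simulated head:", heads.mean())
```

The appeal of this arrangement, as the abstract notes, is that each model keeps its own data model and can be swapped out independently; the price is explicit reconciliation of formats and coordinate conventions at every hand-off.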

Relevance:

80.00%

Publisher:

Abstract:

Aerial photography was used to determine the land use in a test area of the Nigerian savanna in 1950 and 1972. Changes in land use were determined and correlated with accessibility, with appropriate low-technology methods used so that the investigation could easily be extended to other areas without incurring great expense. A test area of 750 sq km was chosen, located in Kaduna State, Nigeria. The geography of the area is summarised, together with the local knowledge that is essential for accurate photo-interpretation. A land use classification was devised and tested for use with medium-scale aerial photography of the savanna. The two sets of aerial photography, at 1:25 000 scale, were sampled using systematic dot grids. A dot density of 8.5 dots per sq km was calculated to give an acceptable estimate of land use. Problems of interpretation included gradation between categories, sample position uncertainty and personal bias. The results showed that in 22 years the amount of cultivated land in the test area had doubled, while there had been a corresponding decrease in the amount of uncultivated land, particularly woodland. The intensity of land use had generally increased. The distribution of land use changes was analysed and correlated with accessibility. Highly significant correlations were found for 1972 that had not existed in 1950. Changes in land use could also be correlated with accessibility. It was concluded that over the 22-year test period there had been an intensification of land use, a movement of human activity towards the main road, and a decrease in natural vegetation, particularly close to the road. The classification of land use and the dot-grid method of survey were shown to be applicable to a savanna test area.
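The dot-grid estimate has a simple quantitative form: the proportion of a land-use category is the fraction of dots falling on it. The sketch below uses invented counts, and it treats the systematic grid as if it were a random sample in order to attach an approximate binomial standard error, which is a simplification.

```python
import math

# Invented dot counts for the 750 sq km test area at ~8.5 dots per sq km.
n_dots = 6375            # approx. 8.5 * 750
cultivated_dots = 2295   # hypothetical count of dots on cultivated land

p = cultivated_dots / n_dots            # estimated proportion cultivated
se = math.sqrt(p * (1 - p) / n_dots)    # approximate binomial standard error
area = 750 * p                          # estimated cultivated area in sq km

print(f"cultivated: {p:.1%} +/- {1.96 * se:.1%} (95% CI), approx {area:.0f} sq km")
```

Repeating the same count on the 1950 and 1972 photography, as the study did, turns the comparison of the two proportions into the measure of land-use change.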