878 results for Uncertainty in generation
Abstract:
One of the new challenges in aeronautics is combining and accounting for multiple disciplines while considering uncertainties or variability in the design parameters or operating conditions. This paper describes a methodology for robust multidisciplinary design optimisation when there is uncertainty in the operating conditions. The methodology, which is based on canonical evolutionary algorithms, is enhanced by coupling it with an uncertainty analysis technique. The paper illustrates the use of this methodology on two practical test cases related to Unmanned Aerial Systems (UAS), which are ideal candidates because of the multi-physics involved and the variability of the missions to be performed. Results obtained from the optimisation show that the method is effective in finding useful Pareto non-dominated solutions and demonstrate the value of robust design techniques.
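As a rough illustration of the kind of approach this abstract describes (robust objectives estimated over sampled operating conditions and candidates ranked by Pareto non-dominance), the sketch below is not the authors' method: the two design variables, the drag- and weight-like objectives, and the distribution of the uncertain cruise speed are all invented for the example.

```python
# Minimal robust multi-objective evolutionary loop (illustrative only):
# each candidate is scored by mean + std of its objectives over sampled
# operating conditions, and survivors are the Pareto non-dominated set.
import numpy as np

rng = np.random.default_rng(0)

def objectives(design, speed):
    """Hypothetical drag- and weight-like objectives of a 2-parameter design."""
    span, thickness = design
    drag = 1.0 / (span + 0.1) + 0.05 * thickness * speed**2
    weight = span**2 + 5.0 * thickness
    return np.array([drag, weight])

def robust_objectives(design, n_samples=50):
    """Mean plus one standard deviation over an uncertain cruise speed."""
    speeds = rng.normal(1.0, 0.1, n_samples)
    vals = np.array([objectives(design, s) for s in speeds])
    return vals.mean(axis=0) + vals.std(axis=0)

def non_dominated(points):
    """Boolean mask of Pareto non-dominated rows (minimisation)."""
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated_by = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        mask[i] = not dominated_by.any()
    return mask

pop = rng.uniform(0.1, 2.0, size=(30, 2))           # initial designs
for _ in range(40):                                  # (mu + lambda) evolution
    children = np.clip(pop + rng.normal(0, 0.1, pop.shape), 0.1, 2.0)
    both = np.vstack([pop, children])
    scores = np.array([robust_objectives(d) for d in both])
    front = both[non_dominated(scores)]
    pop = front[rng.integers(0, len(front), 30)]     # refill from the current front

print("approximate robust Pareto designs:\n", np.unique(np.round(pop, 2), axis=0))
```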
Abstract:
Generally speaking, psychologists have suggested three traditional views of how people cope with uncertainty. They are the certainty maximiser, the intuitive statistician-economist and the knowledge seeker (Smithson, 2008). In times of uncertainty, such as the recent global financial crisis, these coping methods often result in innovation in industry. Richards (2003) identifies innovation as different from creativity in that innovation aims to transform and implement rather than simply explore and invent. An examination of the work of iconic fashion designers, through case study and situational analysis, reveals that coping with uncertainty manifests itself in ways that have resulted in innovations in design, marketing methods, production and consumption. In relation to contemporary fashion, where many garments look the same in style, colour, cut and fit (Finn, 2008), the concept of innovation is an important one. This paper explores the role of uncertainty as a driver of innovation in fashion design. A key aspect of seeking knowledge, as a mechanism to cope with this uncertainty, is a return to basics. This is a problem for contemporary fashion designers who are no longer necessarily makers and therefore do not engage with the basic materials and methods of garment construction. In many cases design in fashion has become digital, communicated to an unseen, unknown production team via scanned image and specification alone. The disconnection between the design and the making of garments, as a result of decades of off-shore manufacturing, has limited the opportunity for this return to basics. The authors argue that the role of the fashion designer has become about the final product and as a result there is a lack of innovation in the process of making: in the form, fit and function of fashion garments. They propose that ‘knowledge seeking’ as a result of uncertainty in the fashion industry, in particular through re-examination of the methods of making, could hold the key to a new era of innovation in fashion design.
Abstract:
Children’s literature has conventionally and historically been concerned with identity and the often tortuous journey to becoming a subject who is generally older and wiser, a journey typically characterised by mishap, adventure, and detours. Narrative closure in children’s and young adult novels and films typically provides a point of self-realisation or self-actualisation, whereby the struggles of finding one’s “true” identity have been overcome. In this familiar coming-of-age narrative, there is often an underlying premise of an essential self that will emerge or be uncovered. This kind of narrative resolution provides readers with a reassurance that things will work for the best in the end, which is an enduring feature of children’s literature, and part of liberal-humanism’s project of harmonious individuality. However, uncertainty is a constant that has always characterised the ways lives are lived, regardless of best-laid plans. Children’s literature provides a field of narrative knowledge whereby readers gain impressions of childhood and adolescence, or more specifically, knowledge of ways of being at a time in life, which is marked by uncertainty. Despite the prevalence of children’s texts which continue to offer normative ways of being, in particular, normative forms of gender behaviour, there are texts which resist the pull for characters to be “like everyone else” by exploring alternative subjectivities. Fiction, however, cannot be regarded as a source of evidence about the material realities of life, as its strength lies in its affective and imaginative dimensions, which nevertheless can offer readers moments of reflection, recognition, or, in some cases, reality lessons. As a form of cultural production, contemporary children’s literature is highly responsive to social change and political debates, and is crucially implicated in shaping the values, attitudes and behaviours of children and young people.
Abstract:
We develop a stochastic endogenous growth model to explain the diversity in growth and inequality patterns and the non-convergence of incomes in transitional economies, where an underdeveloped financial sector imposes an implicit, fixed cost on the diversification of idiosyncratic risk. In the model, endogenous growth occurs through physical and human capital deepening, with the latter being the more dominant element. We interpret the fixed cost as a ‘learning by doing’ cost for entrepreneurs who undertake risk in the absence of well-developed financial markets and institutions that help diversify such risk. As such, this cost may be interpreted as the implicit returns foregone due to the lack of diversification opportunities that would otherwise have been available had such institutions been present. The analytical and numerical results of the model suggest three growth outcomes, depending on the productivity differences between the projects and the fixed cost associated with the more productive project. We label these outcomes the poverty trap, the dual economy and balanced growth. Further analysis of these three outcomes highlights the existence of a diversity within diversity: within the ‘poverty trap’ and ‘dual economy’ scenarios, growth and inequality patterns differ depending on the initial conditions. This additional diversity allows the model to capture a richer range of outcomes that are consistent with the empirical experience of several transitional economies.
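Purely as a toy numerical illustration of the qualitative mechanism described (a fixed cost that makes the more productive, risky technology worthwhile only above a wealth threshold), the following sketch is not the authors' model; the returns, the fixed cost and the consumption rule are invented numbers.

```python
# Toy wealth dynamics: below the fixed cost, households use the safe,
# low-return technology; above it, they use the riskier, more productive one.
import numpy as np

rng = np.random.default_rng(1)
T = 100                              # periods
fixed_cost = 0.5                     # implicit 'learning-by-doing' cost (assumed)
safe_return = 1.02                   # safe, low-productivity technology (assumed)
risky_mean, risky_sd = 1.15, 0.10    # risky, high-productivity technology (assumed)
retained = 0.98                      # wealth retained after consumption (assumed)

def simulate(w0):
    w = w0
    for _ in range(T):
        if w > fixed_cost:
            gross = max(rng.normal(risky_mean, risky_sd), 0.0)
            w = (w - fixed_cost) * gross
        else:
            w = w * safe_return
        w *= retained
    return w

for w0 in (0.2, 2.0, 8.0):
    finals = [simulate(w0) for _ in range(500)]
    print(f"initial wealth {w0:>4}: median terminal wealth {np.median(finals):10.2f}")
```

With these invented numbers, low initial wealth stagnates on the safe path, intermediate wealth is eroded by the fixed cost until it falls back to the safe technology, and high initial wealth grows, mimicking the dependence on initial conditions noted in the abstract.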
Abstract:
Eutrophication of the Baltic Sea is a serious problem. This thesis estimates the benefit to Finns from reduced eutrophication in the Gulf of Finland, the most eutrophied part of the Baltic Sea, by applying the choice experiment method, which belongs to the family of stated preference methods. Because stated preference methods have been subject to criticism, e.g. due to their hypothetical survey context, this thesis contributes to the discussion by studying two anomalies that may lead to biased welfare estimates: respondent uncertainty and preference discontinuity. The former refers to the difficulty of stating one's preferences for an environmental good in a hypothetical context. The latter implies a departure from the continuity assumption of conventional consumer theory, which forms the basis for the method and the analysis. In the three essays of the thesis, discrete choice data are analyzed with the multinomial logit and mixed logit models. On average, Finns are willing to contribute to the water quality improvement. The probability of willingness increases with residential or recreational contact with the gulf, higher-than-average income, younger-than-average age, and the absence of dependent children in the household. On average, the most important water quality characteristic for Finns is water clarity, followed by the desire for fewer occurrences of blue-green algae. For future nutrient reduction scenarios, the annual mean household willingness-to-pay estimates range from 271 to 448 euros, and the aggregate welfare estimates for Finns range from 28 billion to 54 billion euros, depending on the model and the intensity of the reduction. Of the respondents (N=726), 72.1% state in a follow-up question that they are either "Certain" or "Quite certain" about their answer when choosing the preferred alternative in the experiment. Based on the analysis of other follow-up questions and another sample (N=307), 10.4% of the respondents are identified as potentially having discontinuous preferences. In relation to both anomalies, respondent- and questionnaire-specific variables are found among the underlying causes, and a departure from standard analysis may improve the model fit and the efficiency of estimates, depending on the chosen modeling approach. Introducing uncertainty about the future state of the Gulf increases the acceptance of the valuation scenario, which may indicate increased credibility of the proposed scenario. In conclusion, modeling preference heterogeneity is an essential part of the analysis of discrete choice data. The results regarding uncertainty in stating one's preferences and non-standard choice behavior are promising: accounting for these anomalies in the analysis may improve the precision of the estimates of the benefit from reduced eutrophication in the Gulf of Finland.
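As an illustration of the econometric machinery mentioned (a multinomial/conditional logit fitted to discrete choice data, with willingness to pay recovered as a coefficient ratio), the following sketch uses simulated choices, invented attribute names and assumed coefficients; it is not the thesis data or code.

```python
# Fit a conditional logit to simulated stated-choice data and derive
# willingness to pay as -(attribute coefficient) / (cost coefficient).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_resp, n_alt = 500, 3
true_beta = np.array([1.2, 0.8, -0.02])   # clarity, algae reduction, cost (assumed)

# alternative attributes: water clarity level, algae reduction level, cost (euros/yr)
X = rng.uniform([0.0, 0.0, 0.0], [2.0, 2.0, 300.0], size=(n_resp, n_alt, 3))
utility = X @ true_beta + rng.gumbel(size=(n_resp, n_alt))
choice = utility.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                                     # deterministic utilities
    v -= v.max(axis=1, keepdims=True)                # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_resp), choice].sum()

fit = minimize(neg_loglik, np.zeros(3), method="BFGS")
b = fit.x
print("estimated coefficients:", np.round(b, 4))
print("implied WTP for one unit of clarity: %.1f euros/year" % (-b[0] / b[2]))
```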
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, due to their geomorphological importance as the reference surface for gravitation-driven material flow and to their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis, and may also indirectly influence data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, global characterisation of DEM error is a gross generalisation of reality, because the areas within which the assumption of stationarity is not violated are small in extent. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged, because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution to generate realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
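A minimal sketch of simulation-based error propagation of the kind described, assuming a synthetic terrain and invented error parameters (it is not the thesis code): spatially correlated DEM error realisations are produced by process convolution, i.e. smoothing white noise with a Gaussian kernel, and each realisation is propagated through a slope computation.

```python
# Monte Carlo error propagation: DEM + correlated error realisation -> slope.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
cell = 10.0                                               # assumed 10 m grid spacing
col, row = np.meshgrid(np.arange(200), np.arange(200))
dem = 50 * np.sin(col / 30.0) + 30 * np.cos(row / 40.0)   # synthetic terrain

def slope_deg(z):
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

def error_field(sigma_z=1.0, corr_scale=5.0):
    """Process convolution: white noise smoothed by a Gaussian kernel and
    rescaled to the target vertical error standard deviation."""
    e = gaussian_filter(rng.normal(size=dem.shape), corr_scale)
    return e * (sigma_z / e.std())

runs = np.array([slope_deg(dem + error_field()) for _ in range(100)])
print("mean per-cell slope standard deviation: %.3f degrees" % runs.std(axis=0).mean())
```

Varying `corr_scale` (including setting it to zero for spatially uncorrelated error) is the kind of experiment the abstract refers to when comparing correlated and uncorrelated error models.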
Abstract:
Many knowledge-based systems (KBS) transform situation information into an appropriate decision using a built-in knowledge base. As the knowledge in real-world situations is often uncertain, the degree of truth of a proposition provides a measure of the uncertainty in the underlying knowledge. This uncertainty can be evaluated by collecting `evidence' about the truth or falsehood of the proposition from multiple sources. In this paper we propose a simple framework for representing uncertainty using the notion of an evidence space.
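The paper's evidence-space formalism is not reproduced here; as a generic, hedged illustration of measuring the degree of truth of a proposition from multiple sources of evidence, the sketch below summarises weighted supporting and refuting evidence as an interval (the class names, sources and weights are invented).

```python
# Summarise evidence for/against a proposition as a degree-of-truth interval.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str
    supports: bool     # True: evidence for the proposition, False: against it
    weight: float      # reliability/strength of the source, in [0, 1]

def truth_interval(evidence):
    """Lower bound: committed mass supporting the proposition; upper bound:
    one minus the committed mass refuting it (uncommitted mass stays open)."""
    total = sum(e.weight for e in evidence)
    scale = max(total, 1.0)                 # keep total committed mass <= 1
    support = sum(e.weight for e in evidence if e.supports) / scale
    refute = sum(e.weight for e in evidence if not e.supports) / scale
    return support, 1.0 - refute

observations = [Evidence("sensor A", True, 0.5),
                Evidence("sensor B", True, 0.2),
                Evidence("operator report", False, 0.1)]
print("degree-of-truth interval:", truth_interval(observations))   # (0.7, 0.9)
```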
Abstract:
A two-stage methodology is developed to obtain future projections of daily relative humidity (RH) in a river basin for climate change scenarios. In the first stage, Support Vector Machine (SVM) models are developed to downscale nine sets of predictor variables (large-scale atmospheric variables) for the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (SRES) A1B, A2, B1 and COMMIT to RH in the river basin at the monthly scale. Uncertainty in the future projections of RH is studied for combinations of SRES scenarios and the predictors selected. Subsequently, in the second stage, the monthly sequences of RH are disaggregated to the daily scale using the k-nearest neighbor method. The effectiveness of the developed methodology is demonstrated through application to the catchment of the Malaprabha reservoir in India. For downscaling, the probable predictor variables are extracted from (1) the National Centers for Environmental Prediction reanalysis data set for the period 1978-2000 and (2) simulations of the third-generation Canadian Coupled Global Climate Model for the period 1978-2100. The performance of the downscaling and disaggregation models is evaluated by split-sample validation. Results show that, among the SVM models, the model developed using predictors pertaining only to the land location performed better. RH is projected to increase in the future for the A1B and A2 scenarios, while no trend is discerned for B1 and COMMIT.
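A compact sketch of the two-stage idea, assuming synthetic stand-ins for the predictors, the monthly relative humidity and the observed daily patterns (it is not the study's code or data): an SVM regression maps large-scale monthly predictors to monthly RH, and a k-nearest neighbour step disaggregates each downscaled month by borrowing the daily anomaly pattern of the most similar observed months.

```python
# Stage 1: SVM downscaling to monthly RH; Stage 2: k-NN disaggregation to daily.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# synthetic stand-ins: 240 months of 5 large-scale predictors, monthly RH,
# and the observed daily anomaly pattern (30 days) of each month
predictors = rng.normal(size=(240, 5))
rh_monthly = 60 + 10 * predictors[:, 0] - 5 * predictors[:, 1] + rng.normal(0, 2, 240)
daily_patterns = rng.normal(0, 3, size=(240, 30))

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(predictors[:180], rh_monthly[:180])          # split-sample calibration
rh_downscaled = model.predict(predictors[180:])        # validation months

def disaggregate(rh_month, k=3):
    """Average the daily anomaly patterns of the k most similar calibration
    months and add them to the downscaled monthly value."""
    nearest = np.argsort(np.abs(rh_monthly[:180] - rh_month))[:k]
    return rh_month + daily_patterns[nearest].mean(axis=0)

daily = np.array([disaggregate(v) for v in rh_downscaled])
print("downscaled daily RH, first validation month:", np.round(daily[0, :5], 1))
```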
Abstract:
Wavelet coefficients based on spatial wavelets are used as damage indicators to identify the location as well as the size of damage in a laminated composite beam with localized matrix cracks. A finite element model of the composite beam is used in conjunction with a matrix-crack-based damage model to simulate the damaged composite beam structure. The modes of vibration of the beam are analyzed using the wavelet transform in order to identify the location and extent of the damage by sensing the local perturbations at the damage locations. The location of the damage is identified by a sudden change in the spatial distribution of the wavelet coefficients. Monte Carlo Simulations (MCS) are used to investigate the effect of ply-level uncertainty in composite material properties, such as ply longitudinal stiffness, transverse stiffness, shear modulus and Poisson's ratio, on the damage detection parameter, the wavelet coefficient. In this study, numerical simulations are done for single and multiple damage cases. It is observed that spatial wavelets can be used as a reliable damage detection tool for composite beams with localized matrix cracks, which can result from low-velocity impact damage.
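To make the wavelet-based damage index concrete, the hedged sketch below (not the paper's finite element model) analyses a synthetic mode shape with a small localized dip using a spatial Ricker (Mexican-hat) wavelet, and wraps it in a Monte Carlo loop over an assumed ply-level stiffness scatter.

```python
# Spatial wavelet coefficients of a mode shape as a damage localisation index.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 400)                 # normalised beam axis
damage_loc, damage_depth = 0.62, 0.02          # assumed defect position and severity

def mode_shape(stiffness_scatter):
    """First bending-like mode with a small local dip standing in for the
    stiffness loss caused by localized matrix cracks."""
    healthy = np.sin(np.pi * x) * (1.0 + stiffness_scatter)
    dip = damage_depth * np.exp(-((x - damage_loc) / 0.01) ** 2)
    return healthy - dip + rng.normal(0.0, 5e-4, x.size)   # plus measurement noise

def ricker(n, a):
    t = np.arange(n) - (n - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def spatial_wavelet_coeff(signal, a=4.0, n=41):
    return np.convolve(signal, ricker(n, a), mode="same")

hits = []
for _ in range(200):                           # Monte Carlo over ply-level scatter
    scatter = rng.normal(0.0, 0.02)            # e.g. longitudinal stiffness variation
    c = np.abs(spatial_wavelet_coeff(mode_shape(scatter)))
    c[:20] = c[-20:] = 0.0                     # suppress boundary effects
    hits.append(x[np.argmax(c)])

print("estimated damage location: %.3f +/- %.3f" % (np.mean(hits), np.std(hits)))
```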
Abstract:
In this paper we consider the problem of guided wave scattering from delamination in laminated composites, and further the problem of estimating the delamination size and layer-wise location from guided wave measurements. The damage location and region/size can be estimated from the time of flight and the spread of the wave packet, whereas depth information can be obtained from the wavenumber modulation in the carrier packet. The key challenge is that this information is highly sensitive to various uncertainties. The variation in reflected and transmitted wave amplitude in a bar due to boundary/interface uncertainty is studied to illustrate such effects. The effect of uncertainty in material parameters on the time of flight is estimated for longitudinal wave propagation. To evaluate the effect of uncertainty on delamination detection, we employ a time domain spectral finite element (tSFEM) scheme in which wave propagation is modeled using higher-order interpolation with shape functions that have spectral convergence properties. A laminated composite beam with layer-wise placement of delamination is considered in the simulation. Scattering due to the presence of delamination is analyzed. For a single delamination, two identical waveforms are created at the two fronts of the delamination, whereas the waves in the two sub-laminates create two independent waveforms with different wavelengths. Scattering due to multiple delaminations in a composite beam is also studied.
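As a small, hedged illustration of the time-of-flight part of the problem (not the tSFEM model), the sketch below estimates the arrival time of a reflected tone burst from the envelope of a cross-correlation and converts it to a distance using an assumed, uncertain group velocity.

```python
# Time-of-flight estimation of a reflected wave packet by cross-correlation.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(6)
fs, f0 = 1.0e6, 50e3                 # sampling rate and tone-burst frequency (assumed)
t = np.arange(0.0, 2e-3, 1 / fs)

def tone_burst(t0, cycles=5):
    """Hann-windowed tone burst starting at time t0."""
    tt = t - t0
    env = np.where((tt >= 0) & (tt <= cycles / f0),
                   0.5 * (1 - np.cos(2 * np.pi * f0 * tt / cycles)), 0.0)
    return env * np.sin(2 * np.pi * f0 * tt)

excitation = tone_burst(0.0)
true_tof = 4.0e-4                                           # reflection arrival (s)
signal = 0.3 * tone_burst(true_tof) + rng.normal(0, 0.01, t.size)

xcorr = np.correlate(signal, excitation, mode="full")[len(t) - 1:]  # lags >= 0
tof_est = t[np.argmax(np.abs(hilbert(xcorr)))]              # envelope peak

group_velocity = rng.normal(5200.0, 100.0, 1000)            # assumed, uncertain (m/s)
distance = group_velocity * tof_est / 2                     # pulse-echo path
print("ToF %.1f us -> reflector at %.0f +/- %.0f mm"
      % (tof_est * 1e6, distance.mean() * 1e3, distance.std() * 1e3))
```

The spread of the estimated distance directly reflects the assumed uncertainty in group velocity, which is the kind of sensitivity the abstract highlights.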
Abstract:
Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must still obey these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins; when ribosomes have higher protein content, the autocatalysis is stronger. We show that this autocatalysis destabilizes the system, slows down its response, and constrains its performance. On a larger scale, the transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore previously proposed evolutionary models and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems new nodes arise through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
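As an illustration of the last point (not the paper's analysis), the sketch below grows a network by gene duplication with divergence, mixed with random attachment standing in for horizontal gene transfer, and then inspects the degree distribution; the mixing and retention probabilities are invented.

```python
# Grow a network by duplication-divergence plus random 'horizontal transfer' links.
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)

def grow(n_nodes=2000, p_dup=0.7, p_keep=0.4, m_hgt=2):
    adj = {0: {1}, 1: {0}}                         # seed: two connected nodes
    for new in range(2, n_nodes):
        adj[new] = set()
        if rng.random() < p_dup:                   # duplication-divergence step
            parent = int(rng.integers(0, new))
            for nb in adj[parent]:
                if rng.random() < p_keep:          # divergence: keep each parental link
                    adj[new].add(nb)
                    adj[nb].add(new)
        else:                                      # horizontal-transfer-like step
            for nb in rng.choice(new, size=min(m_hgt, new), replace=False):
                adj[new].add(int(nb))
                adj[int(nb)].add(new)
        if not adj[new]:                           # avoid isolated nodes
            nb = int(rng.integers(0, new))
            adj[new].add(nb)
            adj[nb].add(new)
    return adj

degrees = [len(neighbours) for neighbours in grow().values()]
print("max degree:", max(degrees))
print("degree counts (low end):", sorted(Counter(degrees).items())[:8])
```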
Abstract:
At medium to high frequencies the dynamic response of a built-up engineering system, such as an automobile, can be sensitive to small random manufacturing imperfections. Ideally the statistics of the system response in the presence of these uncertainties should be computed at the design stage, but in practice this is an extremely difficult task. In this paper a brief review of the methods available for the analysis of systems with uncertainty is presented, and attention is then focused on two particular "non-parametric" methods: statistical energy analysis (SEA) and the hybrid method. The main governing equations are presented, and a number of example applications are considered, ranging from academic benchmark studies to industrial design studies.
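For context on the SEA part of the review, here is a sketch of the standard power-balance relation (not taken from the paper; the loss factors and input power are invented): the subsystem energies follow from solving P = omega L E, where L collects the damping and coupling loss factors.

```python
# Statistical energy analysis: solve the power balance for subsystem energies.
import numpy as np

omega = 2 * np.pi * 1000.0                    # analysis band centre frequency (rad/s)
eta_d = np.array([0.02, 0.01, 0.015])         # damping loss factors (assumed)
# coupling loss factors eta[i, j]: power flow from subsystem i to subsystem j (assumed)
eta = np.array([[0.0,   0.004, 0.0  ],
                [0.002, 0.0,   0.003],
                [0.0,   0.001, 0.0  ]])

L = np.diag(eta_d + eta.sum(axis=1)) - eta.T  # SEA loss-factor matrix
P = np.array([1.0, 0.0, 0.0])                 # 1 W injected into subsystem 1
E = np.linalg.solve(omega * L, P)             # time-averaged subsystem energies (J)
print("subsystem energies [J]:", np.round(E, 6))
```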