139 results for Deterministic walkers
Abstract:
The participatory turn, fuelled by discourses and rhetoric surrounding social media in the aftermath of the dot-com crash of the early 2000s, enrols to some extent an idea of being able to deploy networks to achieve institutional aims. The arts and cultural sector in the UK, in the face of funding cuts, has been keen to engage with such ideas in order to demonstrate value for money: improving the efficiency of its operations, improving the audience experience and ultimately increasing audience size and engagement. Drawing on a case study compiled via a collaborative research project with a UK-based symphony orchestra (UKSO), we interrogate the potential of social media engagement for audience development work through participatory media and networked publics. We argue that the literature on mobile phones and applications ('apps') has focused primarily on marketing for engagement where institutional contexts are concerned. In contrast, our analysis elucidates the broader potentials and limitations of social-media-enabled apps for audience development and engagement beyond a marketing paradigm. In the case of UKSO, it appears that the technologically deterministic discourses often associated with institutional enrolment of participatory media and networked publics may not necessarily apply, owing to classical music culture. More generally, this work highlights the contradictory nature of networked publics and argues for increased critical engagement with the concept.
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant techniques for developing emulators have been priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
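The state-space backbone of such an emulator can be made concrete with a small sketch. The following is a minimal illustration, assuming a scalar state, known linear dynamics, and Gaussian noise; the toy model, function names, and settings are hypothetical, and the Gaussian-process modelling of the innovation terms as functions of parameters and inputs is omitted for brevity. Conditioning on the design data then reduces to a standard Kalman filter followed by a Rauch-Tung-Striebel (RTS) smoothing pass.

import numpy as np

def kalman_rts_smoother(y, A, Q, H, R, x0, P0):
    """Kalman filter + RTS smoother for the scalar linear state-space
    model x_t = A x_{t-1} + w_t, y_t = H x_t + v_t. Returns smoothed
    state means, i.e. the linear backbone conditioned on outputs y."""
    T = len(y)
    xf = np.zeros(T); Pf = np.zeros(T)     # filtered means / variances
    xp = np.zeros(T); Pp = np.zeros(T)     # one-step predictions
    x, P = x0, P0
    for t in range(T):
        xp[t], Pp[t] = A * x, A * P * A + Q            # predict
        K = Pp[t] * H / (H * Pp[t] * H + R)            # Kalman gain
        x = xp[t] + K * (y[t] - H * xp[t])             # update
        P = (1.0 - K * H) * Pp[t]
        xf[t], Pf[t] = x, P
    xs = xf.copy()                                     # RTS backward pass
    for t in range(T - 2, -1, -1):
        G = Pf[t] * A / Pp[t + 1]
        xs[t] = xf[t] + G * (xs[t + 1] - xp[t + 1])
    return xs

# Hypothetical usage: condition the linear backbone on one noisy design run
rng = np.random.default_rng(1)
y = np.sin(np.linspace(0.0, 6.0, 60)) + 0.1 * rng.standard_normal(60)
xs = kalman_rts_smoother(y, A=0.98, Q=0.01, H=1.0, R=0.01, x0=0.0, P0=1.0)

In the full approach the innovation variance would not be fixed as it is here; the innovation terms would themselves be modelled by Gaussian processes over the model parameters and/or inputs.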
Abstract:
In the context of modern western psychologised, techno-social hybrid realities, where individuals are incited constantly to work on themselves and perform their self-development in public, the use of online social networking sites (SNSs) can be conceptualised as what Foucault has described as a ‘technique of self’. This article explores examples of status updates on Facebook to reveal that writing on Facebook is a tool for self-formation with historical roots. Exploring examples of self-writing from the past, and considering some of the continuities and discontinuities between these age-old practices and their modern translations, provides a non-technologically deterministic and historically aware way of thinking about the use of new media technologies in modern societies that understands them to be more than mere tools for communication.
Abstract:
Standard Monte Carlo (sMC) simulation models have been widely used in AEC industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This phenomenon becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small. In such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation to Markov Chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique constitutes a more complex simulation method (relative to sMC), wherein a structured sampling algorithm is employed in place of completely randomized sampling. Consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
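The computational contrast between the two estimators can be sketched on a toy rare-event problem: estimating the probability that a standard normal variate exceeds 4 (about 3.2e-5). The limit-state function, level probability p0, and chain settings below are illustrative assumptions, not the paper's case studies; the subset estimator factors the failure probability into a product of larger conditional probabilities, P(F) = P(F_1) * P(F_2 | F_1) * ..., each estimated with a modified Metropolis sampler.

import numpy as np

rng = np.random.default_rng(0)

def g(x):
    return x   # hypothetical scalar limit-state function; "failure" when g(x) > b_fail

b_fail, p0, N = 4.0, 0.1, 2000

# Standard Monte Carlo: most of the 10^6 samples fall far from the failure region
p_smc = np.mean(g(rng.standard_normal(10**6)) > b_fail)

# Subset simulation: adaptive intermediate levels, each hit with probability ~p0
x = rng.standard_normal(N)
p_ss = 1.0
for _ in range(20):                              # safety cap on the number of levels
    y = g(x)
    b = np.quantile(y, 1.0 - p0)                 # intermediate threshold
    if b >= b_fail:                              # final level reached
        p_ss *= np.mean(y > b_fail)
        break
    p_ss *= p0
    seeds = x[y > b]                             # samples already conditional on this level
    steps = int(np.ceil(N / seeds.size))
    chain = []
    for s in seeds:                              # modified Metropolis within the level
        cur = s
        for _ in range(steps):
            cand = cur + rng.normal()
            ratio = np.exp(0.5 * (cur**2 - cand**2))   # standard-normal density ratio
            if rng.random() < ratio and g(cand) > b:
                cur = cand
            chain.append(cur)
    x = np.asarray(chain[:N])

print(p_smc, p_ss)   # both estimates should land near 3.2e-5

The structured sampler reaches the small failure region with a few thousand evaluations per level, whereas the randomized sMC estimator needs on the order of 10^6 evaluations for a comparable answer.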
Abstract:
An important responsibility of the Environment Protection Authority, Victoria, is to set objectives for levels of environmental contaminants. To support the development of environmental objectives for water quality, a need has been identified to understand the dual impacts of the concentration and duration of a contaminant on biota in freshwater streams. For suspended solids contamination, information reported in the Newcombe and Jensen [North American Journal of Fisheries Management, 16(4):693--727, 1996] study of freshwater fish and the daily suspended solids data from the United States Geological Survey stream monitoring network are utilised. The study group was asked to examine the utility of both the Newcombe and Jensen results and the USA data, as well as to formulate a procedure for use by the Environment Protection Authority Victoria that takes the concentration and duration of harmful episodes into account when assessing water quality. The extent to which the impact of a toxic event on fish health could be modelled deterministically was also considered. It was found that concentration and exposure duration were the main compounding factors in the severity of the effects of suspended solids on freshwater fish. A protocol for assessing the cumulative effect on fish health and a simple deterministic model, based on the biology of gill harm and recovery, were proposed (a sketch of the underlying concentration-duration severity relation follows the references).

References
D. W. T. Au, C. A. Pollino, R. S. S. Wu, P. K. S. Shin, S. T. F. Lau, and J. Y. M. Tang. Chronic effects of suspended solids on gill structure, osmoregulation, growth, and triiodothyronine in juvenile green grouper Epinephelus coioides. Marine Ecology Progress Series, 266:255--264, 2004.
J. C. Bezdek, S. K. Chuah, and D. Leep. Generalized k-nearest neighbor rules. Fuzzy Sets and Systems, 18:237--256, 1986.
E. T. Champagne, K. L. Bett-Garber, A. M. McClung, and C. Bergman. Sensory characteristics of diverse rice cultivars as influenced by genetic and environmental factors. Cereal Chem., 81:237--243, 2004.
S. G. Cheung and P. K. S. Shin. Size effects of suspended particles on gill damage in green-lipped mussel Perna viridis. Marine Pollution Bulletin, 51(8--12):801--810, 2005.
D. H. Evans. The fish gill: site of action and model for toxic effects of environmental pollutants. Environmental Health Perspectives, 71:44--58, 1987.
G. C. Grigg. The failure of oxygen transport in a fish at low levels of ambient oxygen. Comp. Biochem. Physiol., 29:1253--1257, 1969.
G. Holmes, A. Donkin, and I. H. Witten. Weka: A machine learning workbench. In Proceedings of the Second Australia and New Zealand Conference on Intelligent Information Systems, volume 24, pages 357--361, Brisbane, Australia, 1994. IEEE Computer Society.
D. D. Macdonald and C. P. Newcombe. Utility of the stress index for predicting suspended sediment effects: response to comments. North American Journal of Fisheries Management, 13:873--876, 1993.
C. P. Newcombe. Suspended sediment in aquatic ecosystems: ill effects as a function of concentration and duration of exposure. Technical report, British Columbia Ministry of Environment, Lands and Parks, Habitat Protection Branch, Victoria, 1994.
C. P. Newcombe and J. O. T. Jensen. Channel suspended sediment and fisheries: a synthesis for quantitative assessment of risk and impact. North American Journal of Fisheries Management, 16(4):693--727, 1996.
C. P. Newcombe and D. D. Macdonald. Effects of suspended sediments on aquatic ecosystems. North American Journal of Fisheries Management, 11(1):72--82, 1991.
K. Schmidt-Nielsen. Scaling: Why is Animal Size so Important? Cambridge University Press, NY, 1984.
J. S. Schwartz, A. Simon, and L. Klimetz. Use of fish functional traits to associate in-stream suspended sediment transport metrics with biological impairment. Environmental Monitoring and Assessment, 179(1--4):347--369, 2011.
E. A. Shaw and J. S. Richardson. Direct and indirect effects of sediment pulse duration on stream invertebrate assemblages and rainbow trout (Oncorhynchus mykiss) growth and survival. Canadian Journal of Fisheries and Aquatic Sciences, 58:2213--2221, 2001.
P. Tiwari and H. Hasegawa. Demand for housing in Tokyo: a discrete choice analysis. Regional Studies, 38:27--42, 2004.
Y. Tramblay, A. Saint-Hilaire, T. B. M. J. Ouarda, F. Moatar, and B. Hecht. Estimation of local extreme suspended sediment concentrations in California rivers. Science of the Total Environment, 408:4221--
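As referenced above, the kind of concentration-duration dose-response surface proposed by Newcombe and Jensen (1996) has severity of ill effect growing with the natural logarithms of both exposure duration and suspended-solids concentration. The sketch below shows the general form only; the coefficient values are illustrative placeholders, not the fitted values from the paper or from the study group's gill harm-and-recovery model.

import numpy as np

def severity(conc_mg_per_L, duration_h, a=1.0, b=0.6, c=0.7):
    """Newcombe-Jensen-style severity-of-ill-effect score:
    SEV = a + b*ln(duration) + c*ln(concentration).
    Coefficients a, b, c are hypothetical placeholders; the published
    model fits them separately per species group and life stage."""
    return a + b * np.log(duration_h) + c * np.log(conc_mg_per_L)

# Illustrative use: a 24 h episode at 500 mg/L
print(severity(500.0, 24.0))   # ~ a + b*3.18 + c*6.21 with the placeholder coefficients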
Abstract:
These two lecture notes describe in detail recent developments in evolutionary multi-objective optimisation (MO) techniques, together with their advantages and drawbacks compared with traditional deterministic optimisers. The role of Game Strategies (GS), such as Pareto, Nash or Stackelberg games, as companions or pre-conditioners of multi-objective optimisers is presented and discussed on simple mathematical functions in Part I, as well as their implementation on simple aeronautical model optimisation problems using a friendly design framework in Part II. Real-life (robust) design applications dealing with UAV systems or civil aircraft, combining the EAs and Game Strategies material of Parts I and II, are solved and discussed in Part III, providing the designer with new compromise solutions useful for digital aircraft design and manufacturing. Many details related to lecture notes Parts I, II and III can be found by the reader in [68].
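As a minimal illustration of the Pareto game underlying such optimisers, the sketch below extracts the non-dominated set from a sampled bi-objective problem. The helper name, test functions, and sampling are assumptions for illustration, not the lecture notes' aeronautical test cases.

import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of an (n_points, n_objectives)
    objective matrix F, assuming every objective is minimised."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # j dominates i if F[j] <= F[i] everywhere and F[j] < F[i] somewhere
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominators.any()
    return np.flatnonzero(keep)

# Hypothetical bi-objective toy problem: f1(x) = x^2, f2(x) = (x - 2)^2
x = np.linspace(-1.0, 3.0, 200)
F = np.column_stack([x**2, (x - 2.0)**2])
front = pareto_front(F)   # the trade-off set lies in 0 <= x <= 2

Nash and Stackelberg games replace this single cooperative front with equilibria between players that each control a subset of the design variables, which is what makes them useful as pre-conditioners for the evolutionary search.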
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion of observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
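A minimal sketch of a certainty equivalence receding horizon loop for a scalar constrained linear system follows: a Kalman filter supplies the state estimate, which is then treated as the true state by a deterministic finite-horizon optimisation with a bound on the input, and only the first input of the optimised sequence is applied. The plant model, weights, bounds, and horizon are illustrative assumptions, not the book's examples.

import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar plant: x+ = a x + b u + w,  y = x + v
a, b, Q, R, u_max, N = 0.95, 0.5, 1.0, 0.1, 1.0, 5
q_w, r_v = 0.05, 0.1                       # process / measurement noise variances

def ce_rhc_input(x_hat):
    """Deterministic finite-horizon cost evaluated at the state estimate
    (certainty equivalence), minimised over a bounded input sequence."""
    def cost(u):
        x, J = x_hat, 0.0
        for uk in u:
            J += Q * x**2 + R * uk**2
            x = a * x + b * uk             # nominal (disturbance-free) model
        return J + Q * x**2                # terminal state penalty
    res = minimize(cost, np.zeros(N), bounds=[(-u_max, u_max)] * N)
    return res.x[0]                        # apply only the first input

# Closed loop: Kalman filter provides x_hat, CE-RHC provides u
rng = np.random.default_rng(0)
x, x_hat, P = 3.0, 0.0, 1.0
for t in range(30):
    u = ce_rhc_input(x_hat)
    x = a * x + b * u + np.sqrt(q_w) * rng.standard_normal()
    y = x + np.sqrt(r_v) * rng.standard_normal()
    x_p, P_p = a * x_hat + b * u, a * P * a + q_w      # predict
    K = P_p / (P_p + r_v)                              # Kalman gain
    x_hat, P = x_p + K * (y - x_p), (1 - K) * P_p      # update

The chapter's point is precisely that this separation of estimation and deterministic optimisation is not optimal in general once constraints are present, but, as the example near-optimality result suggests, it is often good enough to justify in practice.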
Abstract:
In this paper the renormalization group (RG) method [Phys. Rev. E 54, 376 (1996)] is related to the well-known Rytov and Born approximations used in wave propagation in deterministic and random media. Certain problems in linear and nonlinear media are examined from the viewpoint of RG and compared with the literature on the Born and Rytov approximations. It is found that the Rytov approximation forms a special case of the asymptotic expansion generated by the RG, and as such it gives a superior approximation to the exact solution compared with its Born counterpart. Analogous conclusions are reached for nonlinear equations with an intensity-dependent index of refraction, where the RG recovers the exact solution. © 2008 Optical Society of America.
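The relation the paper builds on can be summarised in the textbook forms of the two expansions (a sketch of the standard definitions, not the paper's RG derivation): the Born series perturbs the field additively, the Rytov method perturbs the complex phase, and to first order the Rytov field exponentiates the Born correction.

u = u_0 + u_1 + u_2 + \cdots                            (Born: additive expansion of the field)
u = e^{\psi}, \quad \psi = \psi_0 + \psi_1 + \cdots     (Rytov: expansion of the complex phase)
u_{\mathrm{Rytov}} \approx u_0 \, e^{u_1 / u_0}         (first order: Rytov exponentiates the Born term)

Because the exponential resums an infinite subset of the additive series, the first-order Rytov field typically remains accurate over longer propagation paths than the first-order Born field, which is consistent with the paper's finding that Rytov emerges as a special case of the RG asymptotic expansion.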
Abstract:
Vertical graphene nanosheets (VGNS) hold great promise for high-performance supercapacitors owing to their excellent electrical transport properties, large surface area and, in particular, an inherent three-dimensional, open network structure. However, it remains challenging to realise VGNS-based supercapacitors because of poor specific capacitance, high-temperature processing, poor binding to electrode support materials, uncontrollable microstructure, and costly fabrication. Here we use a single-step, fast, scalable, and environmentally benign plasma-enabled method to fabricate VGNS from butter, a cheap and spreadable natural fatty precursor, and demonstrate control over the degree of graphitization and the density of VGNS edge planes. Our VGNS, employed as binder-free supercapacitor electrodes, exhibit a high specific capacitance of up to 230 F g−1 at a scan rate of 10 mV s−1 and >99% capacitance retention after 1,500 charge-discharge cycles at a high current density, when the optimum combination of graphitic structure and edge-plane effects is utilised. The energy storage performance can be further enhanced by forming stable hybrid MnO2/VGNS nano-architectures which synergistically combine the advantages of both VGNS and MnO2. This deterministic, plasma-unique way of fabricating VGNS may open a new avenue for producing functional nanomaterials for advanced energy storage devices.
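For reference, a gravimetric figure like the quoted 230 F g^-1 is conventionally extracted from a cyclic voltammogram by integrating the current over one full cycle. The sketch below uses the standard integration formula; the function name and sample arguments are assumptions, not the authors' analysis code.

import numpy as np

def specific_capacitance(current_A, time_s, mass_g, window_V):
    """Gravimetric specific capacitance from one full CV cycle:
    C = (integral of |I| dt) / (2 * m * delta_V), where the factor 2
    accounts for charge passed in both anodic and cathodic sweeps."""
    return np.trapz(np.abs(current_A), time_s) / (2.0 * mass_g * window_V)

# Illustrative use: a 1 V window swept at 10 mV/s gives a 200 s cycle
t = np.linspace(0.0, 200.0, 2001)
i = 0.002 * np.ones_like(t)               # hypothetical 2 mA rectangular CV response
print(specific_capacitance(i, t, mass_g=0.001, window_V=1.0))   # -> 200 F/g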
Abstract:
An approach to controlling the elementary processes of plasma-surface interactions in order to direct the fluxes of energy and matter at nano- and subnanometer scales is introduced. This ability is related to the solution of the grand challenge of directing energy and matter at the nanoscale and is critical for the renewable-energy and energy-efficient technologies needed for sustainable future development. Examples of the deterministic synthesis of self-organized arrays of metastable nanostructures in a size range beyond the reach of present-day nanofabrication are considered to illustrate this possibility. By using precisely controlled and kinetically fast nanoscale transfer of energy and matter under nonequilibrium conditions, and by harnessing numerous plasma-specific controls of species creation, delivery to the surface, nucleation, and large-scale self-organization of nuclei and nanostructures, arrays of metastable nanostructures can be created, arranged, stabilized, and further processed to meet the specific requirements of the envisaged applications.
Abstract:
Deterministic synthesis of self-organized quantum dot arrays for renewable energy, biomedical, and optoelectronic applications requires control over adatom capture zones, which are presently mapped using unphysical geometric tessellation. In contrast, the proposed kinetic mapping is based on simulated two-dimensional adatom fluxes in the array and includes the effects of nucleation, dissolution, coalescence, and process parameters such as surface temperature and deposition rate. This approach is generic and can be used to control the nanoarray development in various practical applications. © 2009 American Institute of Physics.
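The idea behind kinetic (flux-based) capture-zone mapping, as opposed to a purely geometric nearest-island assignment, can be sketched with a simple walker model: adatoms are released uniformly and random-walk until captured, and each island's zone is tallied from the walkers it actually captures. The island positions, walk step size, and capture radius below are illustrative assumptions; a real simulation would also include nucleation, dissolution, coalescence, and the temperature and deposition-rate dependences mentioned in the abstract.

import numpy as np

rng = np.random.default_rng(0)
L, r_cap, n_islands, n_adatoms = 100.0, 2.0, 12, 500
islands = rng.uniform(0.0, L, size=(n_islands, 2))     # hypothetical island array

counts_kinetic = np.zeros(n_islands)
counts_geometric = np.zeros(n_islands)
for _ in range(n_adatoms):
    pos = rng.uniform(0.0, L, size=2)                  # random deposition site
    d = np.linalg.norm(islands - pos, axis=1)
    counts_geometric[d.argmin()] += 1                  # geometric (tessellation) rule
    while True:                                        # kinetic rule: 2-D random walk
        pos = (pos + rng.normal(scale=1.0, size=2)) % L    # periodic box (distances kept simple)
        d = np.linalg.norm(islands - pos, axis=1)
        if d.min() < r_cap:
            counts_kinetic[d.argmin()] += 1            # credited to the island actually reached
            break

# The two tallies generally differ; that discrepancy is what the
# kinetic mapping is meant to expose and correct.
print(counts_geometric, counts_kinetic)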
Abstract:
The possibility of independent control of the surface fluxes of energy and hydrogen-containing radicals, thus enabling selective control of the nanostructure heating and passivation, is demonstrated. In situ energy flux measurements reveal that even a small addition of H2 to low-pressure Ar plasmas leads to a dramatic increase in the energy deposition through H recombination on the surface. The heat release is quenched by a sequential addition of a hydrocarbon precursor while the surface passivation remains effective. Such selective control offers an effective mechanism for deterministic control of the growth shape, crystallinity, and density of nanostructures in plasma-aided nanofabrication. © 2010 American Institute of Physics.
Abstract:
This feature article introduces a deterministic approach for the rapid, single-step, direct synthesis of metal oxide nanowires. The approach is based on the exposure of thin metal samples to reactive oxygen plasmas and does not require any intervening processing or external substrate heating. The critical roles of the reactive oxygen plasmas, surface processes, and plasma-surface interactions that enable this growth are critically examined from a deterministic viewpoint. The essentials of the experimental procedures and reactor design are presented and related to the key process requirements. The nucleation and growth kinetics are discussed for the typical solid-liquid-solid and vapor-solid-solid mechanisms related to the synthesis of oxide nanowires of metals with low (Ga, Cd) and high (Fe) melting points, respectively. Numerical simulations focus on the possibility of predicting nanowire nucleation points through the interaction of plasma radicals and ions with nanoscale morphological features on the surface, as well as of controlling the localized 'hot spots' that in turn determine nanowire size and shape. This generic approach can be applied to virtually any oxide nanoscale system and further confirms the applicability of plasma nanoscience approaches to deterministic nanoscale synthesis and processing.
Abstract:
The unique properties of graphene and carbon nanotubes have made them the most promising nanomaterials, attracting enormous attention owing to the prospects for applications in various nanodevices, from nanoelectronics to sensors and energy conversion devices. Here we report a novel deterministic, single-step approach to the simultaneous production and magnetic separation of graphene flakes and carbon nanotubes in an arc discharge, achieved by splitting the high-temperature growth and low-temperature separation zones using a non-uniform magnetic field and a tailor-designed catalyst alloy, and by depositing the nanotubes and graphene in different areas. Our results are highly relevant to the development of commercially viable, single-step production of bulk amounts of high-quality graphene.
Abstract:
This article introduces a deterministic approach to using low-temperature, thermally non-equilibrium plasmas to synthesize delicate low-dimensional nanostructures of a small number of atoms on plasma exposed surfaces. This approach is based on a set of plasma-related strategies to control elementary surface processes, an area traditionally covered by surface science. Major issues related to balanced delivery and consumption of building units, appropriate choice of process conditions, and account of plasma-related electric fields, electric charges and polarization effects are identified and discussed in the quantum dot nanoarray context. Examples of a suitable plasma-aided nanofabrication facility and specific effects of a plasma-based environment on self-organized growth of size- and position-uniform nanodot arrays are shown. These results suggest a very positive outlook for using low-temperature plasma-based nanotools in high-precision nanofabrication of self-assembled nanostructures and elements of nanodevices, one of the areas of continuously rising demand from academia and industry.