914 results for Deterministic nanofabrication
Abstract:
The Taguchi method is applied for the first time to optimize the synthesis of graphene films by copper-catalyzed decomposition of ethanol. In order to find the most appropriate experimental conditions for the realization of thin high-grade films, six experiments were suitably designed and performed. The influence of temperature (1000–1070 °C), synthesis duration (1–30 min), and hydrogen flow (0–100 sccm) on the number of graphene layers and the defect density in the graphitic lattice was ranked by monitoring the intensity of the 2D- and D-bands relative to the G-band in the Raman spectra. After critical examination and adjustment of the conditions predicted to give optimal results, a continuous film consisting of 2–4 nearly defect-free graphene layers was obtained.
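As a minimal sketch of how a Taguchi-style analysis ranks factor influence, the snippet below computes "larger-is-better" signal-to-noise ratios per run and the spread of level means per factor. All design levels and responses are illustrative placeholders, not the experiments or Raman data of the study.

```python
# Hedged sketch (not the authors' code): ranking factor influence in a
# Taguchi-style design.  Factor levels and responses are placeholders.
import numpy as np

# Six runs; columns are level indices for temperature, duration, H2 flow.
design = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [0, 2, 2],
    [1, 0, 1],
    [1, 1, 2],
    [1, 2, 0],
])
# Placeholder response, e.g. an I(2D)/I(G)-type ratio for each run.
response = np.array([0.8, 1.4, 1.1, 1.9, 2.2, 1.6])

# "Larger-is-better" signal-to-noise ratio per run.
sn = -10 * np.log10(1.0 / response**2)

# The factor with the largest spread between level means is ranked most influential.
for j, name in enumerate(["temperature", "duration", "H2 flow"]):
    level_means = [sn[design[:, j] == lvl].mean() for lvl in np.unique(design[:, j])]
    print(name, "delta S/N =", max(level_means) - min(level_means))
```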
Abstract:
Diagnostics of rolling element bearings involves a combination of different signal enhancement and analysis techniques. The most common procedure consists of a first step of order tracking and synchronous averaging, which removes from the signal the undesired components synchronous with the shaft harmonics, and a final step of envelope analysis to obtain the squared envelope spectrum. This indicator has been studied thoroughly, and statistically based criteria have been derived for identifying damaged bearings. The statistical thresholds are valid only if all the deterministic components in the signal have been removed. Unfortunately, in various industrial applications characterized by heterogeneous vibration sources, the first step of synchronous averaging is not sufficient to eliminate the deterministic components completely, and an additional pre-whitening step is needed before the envelope analysis. Different techniques have been proposed in the past with this aim: the most widespread are linear prediction filters and spectral kurtosis. Recently, a new pre-whitening technique based on cepstral analysis has been proposed: the so-called cepstrum pre-whitening. Owing to its low computational requirements and its simplicity, it is a good candidate for the intermediate pre-whitening step in an automatic damage recognition algorithm. In this paper, the effectiveness of the new technique is tested on data measured on a full-scale industrial bearing test-rig able to reproduce harsh operating conditions. A benchmark comparison with the traditional pre-whitening techniques is made as a final step to verify the potential of cepstrum pre-whitening.
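A minimal sketch of the processing chain as it is commonly described in the literature (not the authors' implementation): cepstrum pre-whitening amounts to forcing the amplitude spectrum to unity while keeping the phase, after which the squared envelope spectrum is inspected for fault frequencies.

```python
# Hedged sketch: cepstrum pre-whitening followed by the squared envelope spectrum.
import numpy as np
from scipy.signal import hilbert

def cepstrum_prewhiten(x):
    """Whiten by setting the amplitude spectrum to one (phase preserved)."""
    X = np.fft.fft(x)
    eps = 1e-12                      # guard against division by zero
    return np.real(np.fft.ifft(X / (np.abs(X) + eps)))

def squared_envelope_spectrum(x, fs):
    env2 = np.abs(hilbert(x)) ** 2   # squared envelope
    ses = np.abs(np.fft.rfft(env2 - env2.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, ses

# Usage: whiten first, then look for bearing fault frequencies.
# freqs, ses = squared_envelope_spectrum(cepstrum_prewhiten(signal), fs)
```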
Abstract:
Due to knowledge gaps in relation to urban stormwater quality processes, an in-depth understanding of model uncertainty can enhance decision making. Uncertainty in stormwater quality models can originate from a range of sources, such as the complexity of urban rainfall-runoff-stormwater pollutant processes and the paucity of observed data. Unfortunately, studies relating to epistemic uncertainty, which arises from the simplification of reality, are limited, and such uncertainty is often deemed largely unquantifiable. This paper presents a statistical modelling framework for ascertaining epistemic uncertainty associated with pollutant wash-off under a regression modelling paradigm, using Ordinary Least Squares Regression (OLSR) and Weighted Least Squares Regression (WLSR) methods with a Bayesian/Gibbs sampling statistical approach. The study results confirmed that WLSR, assuming probability-distributed data, provides more realistic uncertainty estimates of the observed and predicted wash-off values than OLSR modelling. It was also noted that the Bayesian/Gibbs sampling approach is superior to the classical statistical and deterministic approaches most commonly used in water quality modelling. The study outcomes confirmed that the prediction error associated with wash-off replication is relatively high due to limited data availability. The uncertainty analysis also highlighted the variability of the wash-off modelling coefficient k as a function of complex physical processes, which is primarily influenced by surface characteristics and rainfall intensity.
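For illustration only, the snippet below sketches a Gibbs sampler for Bayesian weighted least squares regression with known observation weights; the priors, weights, and function name are assumptions, not the study's actual wash-off model.

```python
# Hedged sketch: minimal Gibbs sampler for Bayesian WLSR, illustrating how
# posterior uncertainty in regression coefficients can be sampled rather than
# treated deterministically.  Not the paper's implementation.
import numpy as np

def gibbs_wlsr(X, y, w, n_iter=5000, a0=1e-3, b0=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = np.diag(w)
    XtWX = X.T @ W @ X
    XtWy = X.T @ W @ y
    beta_hat = np.linalg.solve(XtWX, XtWy)          # WLS point estimate
    sigma2 = 1.0
    samples = []
    for _ in range(n_iter):
        # beta | sigma2, y  ~  N(beta_hat, sigma2 * (X'WX)^-1)
        beta = rng.multivariate_normal(beta_hat, sigma2 * np.linalg.inv(XtWX))
        # sigma2 | beta, y  ~  Inverse-Gamma(a0 + n/2, b0 + 0.5 * r'Wr)
        r = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * r @ W @ r))
        samples.append((beta, sigma2))
    return samples
```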
Abstract:
The participatory turn, fuelled by discourses and rhetoric regarding social media in the aftermath of the dot-com crash of the early 2000s, enrols to some extent an idea of being able to deploy networks to achieve institutional aims. The arts and cultural sector in the UK, in the face of funding cuts, has been keen to engage with such ideas in order to demonstrate value for money: by improving the efficiency of their operations, improving the audience experience, and ultimately increasing audience size and engagement. Drawing on a case study compiled via a collaborative research project with a UK-based symphony orchestra (UKSO), we interrogate the potential of social media engagement for audience development work through participatory media and networked publics. We argue that the literature related to mobile phones and applications ('apps') has focused primarily on marketing for engagement where institutional contexts are concerned. In contrast, our analysis elucidates the broader potentials and limitations of social-media-enabled apps for audience development and engagement beyond a marketing paradigm. In the case of UKSO, it appears that the technologically deterministic discourses often associated with institutional enrolment of participatory media and networked publics may not necessarily apply, owing to classical music culture. More generally, this work highlights the contradictory nature of networked publics and argues for increased critical engagement with the concept.
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) that are conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases where the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by an application to a simple hydrological model.
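The conditioning step referred to above relies on a linear-Gaussian smoother. As a minimal, hedged sketch (not the paper's emulator), the Rauch-Tung-Striebel smoother below shows how a prior state space model is conditioned on data; all matrices are generic placeholders.

```python
# Hedged sketch: Kalman filter + RTS smoother for x_t = A x_{t-1} + w_t,
# y_t = H x_t + v_t.  Illustrates the conditioning-by-smoothing idea only.
import numpy as np

def kalman_rts(y, A, H, Q, R, m0, P0):
    T, n = len(y), len(m0)
    mf = np.zeros((T, n)); Pf = np.zeros((T, n, n))     # filtered
    mp = np.zeros((T, n)); Pp = np.zeros((T, n, n))     # predicted
    m, P = m0, P0
    for t in range(T):
        m, P = A @ m, A @ P @ A.T + Q                   # predict
        mp[t], Pp[t] = m, P
        S = H @ P @ H.T + R                             # update
        K = P @ H.T @ np.linalg.inv(S)
        m = m + K @ (y[t] - H @ m)
        P = P - K @ S @ K.T
        mf[t], Pf[t] = m, P
    ms, Ps = mf.copy(), Pf.copy()                       # backward pass
    for t in range(T - 2, -1, -1):
        G = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        ms[t] = mf[t] + G @ (ms[t + 1] - mp[t + 1])
        Ps[t] = Pf[t] + G @ (Ps[t + 1] - Pp[t + 1]) @ G.T
    return ms, Ps
```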
Abstract:
In the context of modern western psychologised, techno-social hybrid realities, where individuals are incited constantly to work on themselves and perform their self-development in public, the use of online social networking sites (SNSs) can be conceptualised as what Foucault has described as a ‘technique of self’. This article explores examples of status updates on Facebook to reveal that writing on Facebook is a tool for self-formation with historical roots. Exploring examples of self-writing from the past, and considering some of the continuities and discontinuities between these age-old practices and their modern translations, provides a non-technologically deterministic and historically aware way of thinking about the use of new media technologies in modern societies that understands them to be more than mere tools for communication.
Abstract:
Standard Monte Carlo (sMC) simulation models have been widely used in AEC industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This phenomenon becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small. In such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation to Markov Chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique constitutes a more complex simulation method (relative to sMC), wherein a structured sampling algorithm is employed in place of completely randomized sampling. Consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
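A hedged illustration (not from the paper) of why standard Monte Carlo becomes impractical when the region of interest is small: the estimator's coefficient of variation is roughly sqrt((1 - p) / (N p)), so the required sample size grows like 1/p.

```python
# Hedged sketch: sMC estimate of a small failure probability and its
# coefficient of variation.  The limit state and sampler are illustrative.
import numpy as np

def smc_failure_probability(limit_state, sampler, n_samples, seed=None):
    """Estimate P[limit_state(x) <= 0] by standard Monte Carlo."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n_samples)              # draw inputs from their joint distribution
    fails = limit_state(x) <= 0.0
    p_hat = fails.mean()
    cov = np.sqrt((1 - p_hat) / (n_samples * p_hat)) if p_hat > 0 else np.inf
    return p_hat, cov

# Failure when a standard normal exceeds 3.5 (p ~ 2e-4): even 200,000 runs
# give a noisy estimate, which is what motivates MCMC/subset simulation.
p, cov = smc_failure_probability(
    limit_state=lambda x: 3.5 - x,
    sampler=lambda rng, n: rng.standard_normal(n),
    n_samples=200_000,
)
print(p, cov)
```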
Abstract:
An important responsibility of the Environment Protection Authority Victoria is to set objectives for levels of environmental contaminants. To support the development of environmental objectives for water quality, a need has been identified to understand the dual impacts of the concentration and duration of a contaminant on biota in freshwater streams. For suspended solids contamination, information reported in the Newcombe and Jensen [North American Journal of Fisheries Management, 16(4):693--727, 1996] study of freshwater fish, together with daily suspended solids data from the United States Geological Survey stream monitoring network, is utilised. The study group was requested to examine the utility of the Newcombe and Jensen data and the USGS data, as well as to formulate a procedure for use by the Environment Protection Authority Victoria that takes the concentration and duration of harmful episodes into account when assessing water quality. The extent to which the impact of a toxic event on fish health could be modelled deterministically was also considered. It was found that concentration and exposure duration were the main factors compounding the severity of the effects of suspended solids on freshwater fish. A protocol for assessing the cumulative effect on fish health and a simple deterministic model, based on the biology of gill harm and recovery, were proposed (see the sketch after the reference list below).
References
D. W. T. Au, C. A. Pollino, R. S. S. Wu, P. K. S. Shin, S. T. F. Lau, and J. Y. M. Tang. Chronic effects of suspended solids on gill structure, osmoregulation, growth, and triiodothyronine in juvenile green grouper Epinephelus coioides. Marine Ecology Progress Series, 266:255--264, 2004.
J. C. Bezdek, S. K. Chuah, and D. Leep. Generalized k-nearest neighbor rules. Fuzzy Sets and Systems, 18:237--26, 1986.
E. T. Champagne, K. L. Bett-Garber, A. M. McClung, and C. Bergman. Sensory characteristics of diverse rice cultivars as influenced by genetic and environmental factors. Cereal Chem., 81:237--243, 2004.
S. G. Cheung and P. K. S. Shin. Size effects of suspended particles on gill damage in green-lipped mussel Perna viridis. Marine Pollution Bulletin, 51(8--12):801--810, 2005.
D. H. Evans. The fish gill: site of action and model for toxic effects of environmental pollutants. Environmental Health Perspectives, 71:44--58, 1987.
G. C. Grigg. The failure of oxygen transport in a fish at low levels of ambient oxygen. Comp. Biochem. Physiol., 29:1253--1257, 1969.
G. Holmes, A. Donkin, and I. H. Witten. Weka: A machine learning workbench. In Proceedings of the Second Australia and New Zealand Conference on Intelligent Information Systems, volume 24, pages 357--361, Brisbane, Australia, 1994. IEEE Computer Society.
D. D. Macdonald and C. P. Newcombe. Utility of the stress index for predicting suspended sediment effects: response to comments. North American Journal of Fisheries Management, 13:873--876, 1993.
C. P. Newcombe. Suspended sediment in aquatic ecosystems: ill effects as a function of concentration and duration of exposure. Technical report, British Columbia Ministry of Environment, Lands and Parks, Habitat Protection Branch, Victoria, 1994.
C. P. Newcombe and J. O. T. Jensen. Channel suspended sediment and fisheries: a synthesis for quantitative assessment of risk and impact. North American Journal of Fisheries Management, 16(4):693--727, 1996.
C. P. Newcombe and D. D. Macdonald. Effects of suspended sediments on aquatic ecosystems. North American Journal of Fisheries Management, 11(1):72--82, 1991.
K. Schmidt-Nielsen. Scaling: Why Is Animal Size So Important? Cambridge University Press, NY, 1984.
J. S. Schwartz, A. Simon, and L. Klimetz. Use of fish functional traits to associate in-stream suspended sediment transport metrics with biological impairment. Environmental Monitoring and Assessment, 179(1--4):347--369, 2011.
E. Al Shaw and J. S. Richardson. Direct and indirect effects of sediment pulse duration on stream invertebrate assemblages and rainbow trout (Oncorhynchus mykiss) growth and survival. Canadian Journal of Fisheries and Aquatic Sciences, 58:2213--2221, 2001.
P. Tiwari and H. Hasegawa. Demand for housing in Tokyo: a discrete choice analysis. Regional Studies, 38:27--42, 2004.
Y. Tramblay, A. Saint-Hilaire, T. B. M. J. Ouarda, F. Moatar, and B. Hecht. Estimation of local extreme suspended sediment concentrations in California rivers. Science of the Total Environment, 408:4221--
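Returning to the deterministic concentration-duration model referred to in the abstract above: Newcombe and Jensen's severity-of-ill-effect score is commonly written as SEV = a + b·ln(duration) + c·ln(concentration). The sketch below is illustrative only; the coefficient values and the function name are placeholders, not the fitted values reported for any particular species group.

```python
# Hedged sketch of a Newcombe-and-Jensen-style severity score.
# Coefficients a, b, c are illustrative placeholders.
import math

def severity_of_effect(duration_h, concentration_mg_l, a=1.0, b=0.6, c=0.7):
    """Severity-of-ill-effect score from exposure duration (h) and
    suspended solids concentration (mg/L)."""
    return a + b * math.log(duration_h) + c * math.log(concentration_mg_l)

# Example: 24 h exposure at 100 mg/L of suspended solids.
print(severity_of_effect(24, 100))
```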
Abstract:
Two lecture notes describe in detail recent developments in evolutionary multi-objective optimisation (MO) techniques, along with their advantages and drawbacks compared to traditional deterministic optimisers. The role of Game Strategies (GS), such as Pareto, Nash or Stackelberg games, as companions or pre-conditioners of multi-objective optimisers is presented and discussed on simple mathematical functions in Part I, as well as their implementation on simple aeronautical model optimisation problems using a user-friendly design framework in Part II. Real-life (robust) design applications dealing with UAV systems or civil aircraft, combining the EA and Game Strategy material of Parts I and II, are solved and discussed in Part III, providing the designer with new compromise solutions useful for digital aircraft design and manufacturing. Many details related to lecture notes Parts I, II and III can be found by the reader in [68].
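As a minimal, hedged sketch (not taken from the lecture notes), the snippet below shows the Pareto dominance test and non-dominated filtering that underlie most evolutionary multi-objective optimisers, applied to a simple two-objective test function.

```python
# Hedged sketch: Pareto dominance and non-dominated filtering (minimisation).
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors."""
    points = np.asarray(points)
    keep = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            keep.append(p)
    return np.array(keep)

# Simple bi-objective test problem: f1 = x^2, f2 = (x - 2)^2.
x = np.linspace(-2, 4, 61)
objs = np.column_stack([x**2, (x - 2) ** 2])
print(pareto_front(objs))
```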
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near optimal performance. We thus advocate this approach in real applications.
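A hedged schematic of the certainty equivalence idea described above: estimate the state, then apply the control law designed for the deterministic problem to the estimate. The deterministic law used here is a deliberately simple placeholder (a saturated linear state feedback), not the constrained receding-horizon controller analysed in the chapter; a single-input, single-output system is assumed.

```python
# Hedged sketch of the CE strategy (not the book's algorithm).
import numpy as np

def kalman_step(m, P, u, y, A, B, c, Q, r):
    """One predict/update step for x+ = A x + B u + w, y = c.x + v,
    with c a 1-D output vector and r the scalar measurement noise variance."""
    m, P = A @ m + B * u, A @ P @ A.T + Q          # time update
    s = c @ P @ c + r                               # innovation variance
    K = P @ c / s                                   # Kalman gain
    m = m + K * (y - c @ m)                         # measurement update
    P = P - np.outer(K, c @ P)
    return m, P

def ce_control(m_hat, k_fb, u_max):
    """Certainty equivalence: apply the deterministic feedback law to the
    state estimate, saturated to respect the input constraint |u| <= u_max."""
    return float(np.clip(-k_fb @ m_hat, -u_max, u_max))
```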
Abstract:
One-dimensional single crystals incorporating functional nanoparticles of other materials could be an interesting platform for various applications. We studied the encapsulation of nanoparticles into single-crystal ZnO nanorods by exploiting the crystal growth of ZnO in aqueous solution. Two types of nanodiamonds, with mean diameters of 10 nm and 40 nm, respectively, and polymer nanobeads with a size of 200 nm were used to study the encapsulation process. It was found that by regrowing ZnO nanorods with nanoparticles attached to their surfaces, full encapsulation of the nanoparticles into the nanorods can be achieved. We demonstrate that our low-temperature aqueous solution growth of ZnO nanorods does not affect or degrade the nanoparticles, whether inorganic or organic. This new growth method opens the way to a plethora of applications combining the properties of the single-crystal host and the encapsulated nanoparticles. We performed micro-photoluminescence measurements on a single ZnO nanorod containing luminescent nanodiamonds; the spectrum has a different shape from that of bare nanodiamonds, revealing the cavity effect of the ZnO nanorod.
Abstract:
In this paper the method of renormalization group (RG) [Phys. Rev. E 54, 376 (1996)] is related to the well-known approximations of Rytov and Born used in wave propagation in deterministic and random media. Certain problems in linear and nonlinear media are examined from the viewpoint of RG and compared with the literature on Born and Rytov approximations. It is found that the Rytov approximation forms a special case of the asymptotic expansion generated by the RG, and as such it gives a superior approximation to the exact solution compared with its Born counterpart. Analogous conclusions are reached for nonlinear equations with an intensity-dependent index of refraction where the RG recovers the exact solution. © 2008 Optical Society of America.
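For orientation, the two classical approximations compared here take the following textbook forms, written for a total field u in terms of the incident field u_0 and the first-order scattered contribution u_1 (standard notation, not necessarily that of the paper):

```latex
\begin{align}
  u_{\mathrm{Born}}  &\approx u_0 + u_1, \\
  u_{\mathrm{Rytov}} &\approx u_0 \exp(\psi_1),
  \qquad \psi_1 = \frac{u_1}{u_0}.
\end{align}
```

Expanding the Rytov exponential to first order recovers the Born expression, which is one standard way of seeing why the Rytov form can outperform its Born counterpart.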
Abstract:
Vertical graphene nanosheets (VGNS) hold great promise for high-performance supercapacitors owing to their excellent electrical transport properties, large surface area and, in particular, an inherent three-dimensional, open network structure. However, it remains challenging to realise VGNS-based supercapacitors because of poor specific capacitance, high-temperature processing, poor binding to electrode support materials, uncontrollable microstructure, and costly fabrication. Here we use a single-step, fast, scalable, and environmentally benign plasma-enabled method to fabricate VGNS from butter, a cheap and spreadable natural fatty precursor, and demonstrate control over the degree of graphitization and the density of VGNS edge planes. Our VGNS, employed as binder-free supercapacitor electrodes, exhibit a high specific capacitance of up to 230 F g−1 at a scan rate of 10 mV s−1 and >99% capacitance retention after 1,500 charge-discharge cycles at high current density when the optimum combination of graphitic structure and edge-plane effects is utilised. The energy storage performance can be further enhanced by forming stable hybrid MnO2/VGNS nano-architectures which synergistically combine the advantages of both VGNS and MnO2. This deterministic and plasma-unique way of fabricating VGNS may open a new avenue for producing functional nanomaterials for advanced energy storage devices.
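As a hedged sketch of how a figure like 230 F g−1 is typically derived (not the authors' analysis code), gravimetric specific capacitance can be estimated from one branch of a cyclic voltammogram as C = (∫|I| dV) / (m · v · ΔV), with scan rate v. The arrays below are illustrative placeholders, not measured data.

```python
# Hedged sketch: specific capacitance from a single CV branch.
import numpy as np

def specific_capacitance(voltage, current, mass_g, scan_rate_V_s):
    """Specific capacitance (F/g) from one CV branch."""
    integral = np.sum(0.5 * (np.abs(current[1:]) + np.abs(current[:-1]))
                      * np.diff(voltage))            # trapezoidal integral of |I| dV
    window = voltage.max() - voltage.min()
    return integral / (mass_g * scan_rate_V_s * window)

# Placeholder branch: ~2.3 mA over a 0.8 V window, 1 mg electrode, 10 mV/s,
# which works out to roughly the 230 F/g quoted in the abstract.
V = np.linspace(0.0, 0.8, 200)
I = 2.3e-3 * np.ones_like(V)
print(specific_capacitance(V, I, mass_g=1e-3, scan_rate_V_s=0.01))
```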
Abstract:
Simple, rapid, catalyst-free synthesis of complex patterns of long, vertically aligned multiwalled carbon nanotubes, strictly confined within mechanically written features on a Si(1 0 0) surface, is reported. It is shown that dense arrays of nanotubes can nucleate and fully fill the features when the low-temperature microwave plasma is in direct contact with the surface. This eliminates additional nanofabrication steps and the contact losses inevitably associated with carbon nanotube patterns in applications. Using a metal catalyst has long been considered essential for the nucleation and growth of surface-supported carbon nanotubes (CNTs) [1] and [2]. Only very recently has the possibility of CNT growth using non-metallic catalysts (e.g., oxides [3] and SiC [4]) or artificially created carbon-enriched surface layers [5] been demonstrated. However, successful integration of carbon nanostructures into Si-based nanodevice platforms requires catalyst-free growth, as the catalyst nanoparticles introduce contact losses, and their catalytic activity is very difficult to control during growth [6]. Furthermore, in many applications in microfluidics, biological and molecular filters, and electronic, sensor, and energy conversion nanodevices, the CNTs need to be arranged in specific complex patterns [7] and [8]. These patterns need to contain basic features (e.g., lines and dots) written using simple procedures and fully filled with dense arrays of high-quality, straight, yet separated nanotubes. In this paper, we report on a completely metal- and oxide-catalyst-free plasma-based approach for the direct and rapid growth of dense arrays of long, vertically aligned multi-walled carbon nanotubes arranged into complex patterns made of various combinations of basic features on a Si(1 0 0) surface, written using simple mechanical techniques. The process was conducted in a plasma environment [9] and [10] produced by a microwave discharge, which typically generates low-temperature plasmas at discharge powers below 1 kW [11]. Our process starts by mechanically writing (scribing) a pattern of arbitrary features on pre-treated Si(1 0 0) wafers. Before and after the mechanical feature writing, the Si(1 0 0) substrates were cleaned in an aqueous solution of hydrofluoric acid for 2 min to remove any possible contamination (such as oil traces, which could decompose to free carbon at elevated temperatures) from the substrate surface. A piece of another silicon wafer cleaned in the same way as the substrate, or a diamond scriber, was used to produce the growth patterns by simple arbitrary mechanical writing, i.e., by making linear scratches or dot punctures on the Si wafer surface. The results were the same in both cases, i.e., whether the surface was scratched with Si or with the diamond scriber. The substrate preparation procedure did not involve any possibility of external metallic contamination of the substrate surface. After preparation, the substrates were loaded into an ASTeX model 5200 chemical vapour deposition (CVD) reactor, which was very carefully conditioned to remove any residual contamination. The samples were heated to at least 800 °C to remove any oxide that could have formed during sample loading [12]. After loading the substrates into the reactor chamber, N2 gas was supplied into the chamber at a pressure of 7 Torr to ignite and sustain the discharge at a total power of 200 W.
Then, a mixture of CH4 and N2 (60%) was supplied at 20 Torr, and the discharge power was increased to 700 W (a power density of approximately 1.49 W/cm3). During the process, the microwave plasma was in direct contact with the substrate. During the plasma exposure, no external heating source was used, and the substrate temperature (∼850 °C) was maintained by plasma heating alone. The features were exposed to the microwave plasma for 3–5 min. A photograph of the reactor and the plasma discharge is shown in Fig. 1a and b.
Abstract:
The possibility of effectively controlling the morphology and electrical properties of self-organized graphene structures on plasma-exposed Si surfaces is demonstrated. The structures are vertically standing nanosheets and can be grown without any catalyst or external heating upon direct contact with high-density inductively coupled plasmas at surface temperatures not exceeding 673–723 K. A study of the nucleation and growth dynamics revealed the possibility of switching between the two most common (turnstile- and maze-like) morphologies on the same substrates by a simple change of the plasma parameters. This change leads to a continuous or discontinuous native oxide layer that supports self-organized patterns of small carbon nanoparticles on which the structures nucleate. It is shown that by tailoring the nanoparticle arrangement one can create various three-dimensional architectures and networks of graphene nanosheet structures. We also demonstrate effective control of the degree of graphitization of the graphene nanosheet structures from the initial through the final growth stages. This makes it possible to tune the electrical resistivity of the produced three-dimensional patterns/networks from strongly dielectric to semiconducting. Our results contribute to enabling direct integration of graphene structures into the presently dominant Si-based nanofabrication platform for next-generation nanoelectronic, sensor, biomedical, and optoelectronic components and nanodevices.