958 results for Surface renewal theory
Abstract:
A wire drive pulse echo method of measuring the spectrum of solid bodies is described. Using an 's' plane representation, a general analysis of the transient response of such solids has been carried out. This was used for the study of the stepped amplitude transient of high order modes of disks and for the case where there are two adjacent resonant frequencies. The techniques developed have been applied to the measurement of the elasticities of refractory materials at high temperatures. In the experimental study of the high order in-plane resonances of thin disks it was found that the energy travelled at the edge of the disk, and this initiated the work on one dimensional Rayleigh waves. Their properties were established for the straight edge condition by following an analysis similar to that of the two dimensional case. Experiments were then carried out on the velocity dispersion of various circuits including the disk and a hole in a large plate - the negative curvature condition. Theoretical analysis established the phase and group velocities for these cases, and experimental tests on aluminium and glass gave good agreement with theory. At high frequencies all velocities approach that of the one dimensional Rayleigh waves. When applied to crack detection it was observed that a signal burst travelling round a disk showed an anomalous amplitude effect. In certain cases the signal which travelled the greater distance had the greater amplitude. An experiment was designed to investigate the phenomenon and it was established that the energy travelled in two modes with different velocities. It was found by analysis that, as well as the Rayleigh surface wave on the edge, a second mode travelling at about the shear velocity was excited, and the calculated results gave reasonable agreement with the experiments.
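As background for the edge-wave analysis summarized above, the speed of the classical two-dimensional Rayleigh wave that the one-dimensional case is compared against can be approximated by Viktorov's formula (a standard textbook relation, not taken from the thesis itself; ν is Poisson's ratio and c_s the shear velocity):

```latex
c_R \approx c_s \, \frac{0.862 + 1.14\,\nu}{1 + \nu}
```

For typical metals (ν ≈ 0.3), this puts the Rayleigh speed at roughly 93% of the shear velocity, which is consistent with the abstract's observation of a second mode travelling at about the shear velocity alongside the edge Rayleigh wave.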
Abstract:
OBJECTIVES: Persistent contamination of surfaces by spores of Clostridium difficile is a major factor influencing the spread of C. difficile-associated diarrhoea (CDAD) in the clinical setting. In recent years, the antimicrobial efficacy of metal surfaces has been investigated against microorganisms including methicillin-resistant Staphylococcus aureus. This study compared the survival of C. difficile on stainless steel, a metal contact surface widely used in hospitals, and copper surfaces. METHODS: Antimicrobial efficacy was assessed using a carrier test method against dormant spores, germinating spores and vegetative cells of C. difficile (NCTC 11204 and ribotype 027) over a 3 h period in the presence and absence of organic matter. RESULTS: Copper metal eliminated all vegetative cells of C. difficile within 30 min, compared with stainless steel, which demonstrated no antimicrobial activity (P < 0.05). Copper significantly reduced the viability of spores of C. difficile exposed to the germinant (sodium taurocholate) in aerobic conditions within 60 min (P < 0.05) while achieving a ≥2.5 log reduction (99.8% reduction) at 3 h. Organic material did not reduce the antimicrobial efficacy of the copper surface (P > 0.05).
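For readers unfamiliar with log-reduction reporting, the figure quoted above (a 99.8% reduction corresponding to roughly 2.5-2.7 log) follows from a simple base-10 calculation. The counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def log_reduction(n_initial, n_final):
    """Base-10 log reduction in viable count, as reported in disinfection studies."""
    return math.log10(n_initial / n_final)

# Hypothetical counts: a 99.8% kill (1,000,000 -> 2,000 CFU) is a ~2.7 log reduction
print(round(log_reduction(1_000_000, 2_000), 2))  # 2.7
```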
Abstract:
It is generally assumed when using Bayesian inference methods for neural networks that the input data contains no noise. For real-world (errors in variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework which accounts for input noise provided that a model of the noise process exists. In the limit where the noise process is small and symmetric it is shown, using the Laplace approximation, that this method adds an extra term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable, and sampling this jointly with the network's weights using a Markov chain Monte Carlo method, it is demonstrated that it is possible to infer the regression over the noiseless input. This leads to the possibility of training an accurate model of a system using less accurate, or more uncertain, data. This is demonstrated both on a synthetic noisy sine wave problem and on a real problem of inferring the forward model for a satellite radar backscatter system used to predict sea surface wind vectors.
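A minimal sketch of the joint-sampling idea on the noisy sine wave problem, using a toy two-basis regression in place of a real neural network and a plain random-walk Metropolis sampler. Every setting here (noise levels, priors, step sizes) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic errors-in-variables data: we observe x_obs = x_true + input noise
n = 30
x_true = rng.uniform(0.0, 2.0 * np.pi, n)
y = np.sin(x_true) + rng.normal(0.0, 0.1, n)   # output noise, sd = 0.1
x_obs = x_true + rng.normal(0.0, 0.3, n)       # input noise, sd = 0.3

def log_post(w, x_lat):
    """Joint log-posterior over weights w and latent (noiseless) inputs x_lat."""
    pred = w[0] * np.sin(x_lat) + w[1] * np.cos(x_lat)     # toy 2-basis "network"
    ll_y = -0.5 * np.sum((y - pred) ** 2) / 0.1 ** 2       # output likelihood
    ll_x = -0.5 * np.sum((x_obs - x_lat) ** 2) / 0.3 ** 2  # input-noise model
    log_prior = -0.5 * np.sum(w ** 2)                      # Gaussian prior on w
    return ll_y + ll_x + log_prior

# Random-walk Metropolis over (w, x_lat) jointly
w, x_lat = np.zeros(2), x_obs.copy()
lp = log_post(w, x_lat)
samples = []
for _ in range(5000):
    w_prop = w + rng.normal(0.0, 0.05, 2)
    x_prop = x_lat + rng.normal(0.0, 0.05, n)
    lp_prop = log_post(w_prop, x_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        w, x_lat, lp = w_prop, x_prop, lp_prop
    samples.append(w.copy())

w_mean = np.mean(samples[2000:], axis=0)  # posterior mean after burn-in
# The sin-basis weight w_mean[0] should drift toward 1 as the chain mixes,
# recovering the regression over the noiseless input.
```

The key line is the `ll_x` term: it is the assumed model of the input noise process, and sampling `x_lat` under it is what lets the sampler recover the regression over the noiseless inputs.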
Abstract:
Time after time… and aspect and mood. Over the last twenty-five years, the study of time, aspect and - to a lesser extent - mood acquisition has enjoyed increasing popularity and a constant widening of its scope. In such a teeming field, what can be the contribution of this book? We believe that it is unique in several respects. First, this volume encompasses studies from different theoretical frameworks: functionalism vs generativism, or function-based vs form-based approaches. It also brings together various sub-fields (first and second language acquisition, child and adult acquisition, bilingualism) that tend to evolve in parallel rather than learn from each other. A further originality is that it focuses on a wide range of typologically different languages, and features less studied languages such as Korean and Bulgarian. Finally, the book gathers some well-established scholars, young researchers, and even research students in a rich inter-generational exchange that ensures not only the survival but also the renewal and refreshment of the discipline. The book at a glance: the first part of the volume is devoted to the study of child language acquisition in monolingual, impaired and bilingual settings, while the second part focuses on adult learners. In this section, we provide an overview of each chapter. The first study, by Aviya Hacohen, explores the acquisition of compositional telicity in Hebrew L1. Her psycholinguistic approach contributes valuable data to refine theoretical accounts. Through an innovative methodology, she gathers information from adults and children on the influence of definiteness, number, and the mass vs count distinction on the constitution of a telic interpretation of the verb phrase. She notices that the notion of definiteness is mastered by children as young as 10, while the mass/count distinction does not appear before 10;7. However, this does not entail an adult-like use of telicity.
She therefore concludes that beyond definiteness and noun type, pragmatics may play an important role in the derivation of Hebrew compositional telicity. For the second chapter we move from a Semitic language to a Slavic one. Milena Kuehnast focuses on the acquisition of negative imperatives in Bulgarian, a form that presents the specificity of being grammatical only with the imperfective form of the verb. The study examines how 40 Bulgarian children distributed into two age groups (15 between 2;11 and 3;11, and 25 between 4;00 and 5;00) develop with respect to the acquisition of imperfective viewpoints and the use of imperfective morphology. It shows an evolution in the recourse to expression of force in the use of negative imperatives, as well as the influence of morphological complexity on the successful production of forms. With Yi-An Lin's study, we turn to another type of informant and another framework. He studies the production of children suffering from Specific Language Impairment (SLI), a developmental language disorder whose causes exclude cognitive impairment, psycho-emotional disturbance, and motor-articulatory disorders. Using the Leonard corpus in CLAN, Lin aims to test two competing accounts of SLI (the Agreement and Tense Omission Model [ATOM] and his own Phonetic Form Deficit Model [PFDM]) that conflict on the role attributed to spellout in the impairment. Spellout is the point at which the Computational System for Human Language (CHL) passes over the most recently derived part of the derivation to the interface components, Phonetic Form (PF) and Logical Form (LF). ATOM claims that SLI sufferers have a deficit in their syntactic representation, while PFDM suggests that the problem only occurs at the spellout level. After studying the corpus from the point of view of tense/agreement marking, case marking, argument movement and auxiliary inversion, Lin finds further support for his model.
Olga Gupol, Susan Rohstein and Sharon Armon-Lotem's chapter offers a welcome bridge between child language acquisition and multilingualism. Their study explores the influence of intensive exposure to L2 Hebrew on the development of L1 Russian tense and aspect morphology through an elicited narrative. Their informants are 40 Russian-Hebrew sequential bilingual children distributed into two age groups, 4;0-4;11 and 7;0-8;0. They come to the conclusion that bilingual children anchor their narratives in the perfective, like monolinguals. However, while aware of grammatical aspect, bilinguals lack the full form-function mapping and tend to overgeneralize the imperfective on the principles of simplicity (as imperfectives are the least morphologically marked forms), universality (as the imperfective covers more functions) and interference. Rafael Salaberry opens the second section, on foreign language learners. In his contribution, he reflects on the difficulty L2 learners of Spanish encounter when it comes to distinguishing between iterativity (conveyed with the use of the preterite) and habituality (expressed through the imperfect). He examines in turn the theoretical views that see, on the one hand, habituality as part of grammatical knowledge and iterativity as pragmatic knowledge, and, on the other hand, both habituality and iterativity as grammatical knowledge. He comes to the conclusion that the use of the preterite as a default past tense marker may explain the impoverished system of aspectual distinctions, not only at beginner but also at advanced levels, which may indicate that the system is differentially represented among L1 and L2 speakers. Acquiring the vast array of functions conveyed by a form is therefore no mean feat, as confirmed by the next study. Based on prototype theory, Kathleen Bardovi-Harlig's chapter focuses on the development of the progressive in L2 English. It opens with an overview of the functions of the progressive in English.
Then, a review of acquisition research on the progressive in English and other languages is provided. The bulk of the chapter reports on a longitudinal study of 16 learners of L2 English and shows how their use of the progressive expands from the prototypical uses of process and continuousness to the less prototypical uses of repetition and future. The study concludes that the progressive spreads in interlanguage in accordance with prototype accounts. However, it suggests additional stages, not predicted by the Aspect Hypothesis, in the development from activities and accomplishments, at least for the meaning of repeatedness. A similar theoretical framework is adopted in the following chapter, but it deals with a lesser studied language. Hyun-Jin Kim revisits the claims of the Aspect Hypothesis in relation to the acquisition of L2 Korean by two L1 English learners. Inspired by studies on L2 Japanese, she focuses on the emergence and spread of the past/perfective marker -ess- and the progressive -ko iss- in the interlanguage of her informants throughout their third and fourth semesters of study. The data collected through six sessions of conversational interviews and picture description tasks seem to support the Aspect Hypothesis. Indeed, learners show a strong association between past tense and accomplishments/achievements at the start and a gradual extension to other types; a limited use of the past/perfective marker with states; and an affinity of the progressive with activities/accomplishments and, later, achievements. In addition, -ko iss- moves from progressive to resultative in the specific category of Korean verbs meaning wear/carry. While the previous contributions focus on function, Evgeniya Sergeeva and Jean-Pierre Chevrot's chapter is interested in form. The authors explore the acquisition of verbal morphology in L2 French by 30 instructed native speakers of Russian distributed across low and high proficiency levels.
They use an elicitation task for verbs with different models of stem alternation and study how token frequency and base forms influence stem selection. The analysis shows that frequency affects correct production, especially among learners with high proficiency. As for substitution errors, it appears that forms with a simple structure are systematically more frequent than the target forms they replace. When a complex form serves as a substitute, it is more frequent only when it is replacing another complex form. As regards the use of base forms, the 3rd person singular of the present - and to some extent the infinitive - play this role in the corpus. The authors therefore conclude that the processing of surface forms can be influenced positively or negatively by the frequency of the target forms and of other competing stems, and by the proximity of the target stem to a base form. Finally, Martin Howard's contribution takes up the challenge of focusing on the poor relation of the TAM system. On the basis of L2 French data obtained through sociolinguistic interviews, he studies the expression of futurity, the conditional and the subjunctive in three groups of university learners with classroom teaching only (two or three years of university teaching) or with a mixture of classroom teaching and naturalistic exposure (two years at university plus one year abroad). An analysis of relative frequencies leads him to suggest a continuum of use going from the futurate present to the conditional with past hypothetical conditional clauses in si, which needs to be confirmed by further studies. Acknowledgements: The present volume was inspired by the conference Acquisition of Tense - Aspect - Mood in First and Second Language, held on 9th and 10th February 2008 at Aston University (Birmingham, UK), where over 40 delegates from four continents and over a dozen countries met for lively and enjoyable discussions.
This collection of papers was double peer-reviewed by an international scientific committee made up of Kathleen Bardovi-Harlig (Indiana University), Christine Bozier (Lund Universitet), Alex Housen (Vrije Universiteit Brussel), Martin Howard (University College Cork), Florence Myles (Newcastle University), Urszula Paprocka (Catholic University of Lublin), †Clive Perdue (Université Paris 8), Michel Pierrard (Vrije Universiteit Brussel), Rafael Salaberry (University of Texas at Austin), Suzanne Schlyter (Lund Universitet), Richard Towell (Salford University), and Daniel Véronique (Université d'Aix-en-Provence). We are very much indebted to that scientific committee for their insightful input at each step of the project. We are also thankful for the financial support of the Association for French Language Studies through its workshop grant, and to the Aston Modern Languages Research Foundation for funding the proofreading of the manuscript.
Abstract:
A fluid mechanical and electrostatic model for the transport of solute molecules across the vascular endothelial surface glycocalyx layer (EGL) was developed to study the charge effect on the diffusive and convective transport of the solutes. The solute was assumed to be a spherical particle with a constant surface charge density, and the EGL was represented as an array of periodically arranged circular cylinders of like charge, also with a constant surface charge density. By combining the fluid mechanical analyses for the flow around a solute suspended in an electrolyte solution with the electrostatic analyses for the free energy of the interaction between the solute and cylinders, based on a mean field theory, we estimated the transport coefficients of the solute across the EGL. Both diffusive and convective transport are reduced compared with those for an uncharged system, owing to the stronger exclusion of the solute that results from the repulsive electrostatic interaction. The model prediction for the reflection coefficient for serum albumin agreed well with experimental observations when the charge density in the EGL ranges from approximately -10 to -30 mEq/l.
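For context, in classical fiber-matrix membrane theory the osmotic reflection coefficient of an uncharged layer is often related to the solute's equilibrium partition coefficient Φ by a simple closed form (a textbook relation for the uncharged baseline, not necessarily the expression used in this paper; the charge effect described above further reduces Φ and thus raises σ):

```latex
\sigma = (1 - \Phi)^2
```

Since stronger electrostatic exclusion lowers Φ toward zero, it pushes σ toward unity, which is the direction of the charge effect the model captures.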
Abstract:
The extremely surface sensitive technique of metastable de-excitation spectroscopy (MDS) has been utilized to probe the bonding and reactivity of crotyl alcohol over Pd(111) and provide insight into the selective oxidation pathway to crotonaldehyde. Auger de-excitation (AD) of metastable He (2³S) atoms reveals distinct features associated with the molecular orbitals of the adsorbed alcohol, corresponding to emission from the hydrocarbon skeleton, the O n nonbonding, and C═C π states. The O n and C═C π states of the alcohol are reversed when compared to those of the aldehyde. Density functional theory (DFT) calculations of the alcohol show that an adsorption mode with both C═C and O bonds aligned somewhat parallel to the surface is energetically favored at a substrate temperature below 200 K. Density of states calculations for such configurations are in excellent agreement with experimental MDS measurements. MDS revealed oxidative dehydrogenation of crotyl alcohol to crotonaldehyde between 200 and 250 K, resulting in small peak shifts to higher binding energy. Intramolecular changes lead to the opposite assignment of the first two MOs in the alcohol versus the aldehyde, in accordance with DFT and UPS studies of the free molecules. Subsequent crotonaldehyde decarbonylation and associated propylidyne formation above 260 K could also be identified by MDS and complementary theoretical calculations as the origin of deactivation and selectivity loss. Combining MDS and DFT in this way represents a novel approach to elucidating surface catalyzed reaction pathways associated with a "real-world" practical chemical transformation, namely the selective oxidation of alcohols to aldehydes.
Abstract:
It is shown that an asymmetric nanometer-high bump at the fiber surface causes strong localization of whispering gallery modes. Our theory explains and describes the experimentally observed nanobump microresonators in Surface Nanoscale Axial Photonics. © OSA 2015.
Abstract:
2010 Mathematics Subject Classification: 53A07, 53A35, 53A10.
Abstract:
2010 Mathematics Subject Classification: Primary 35J70; Secondary 35J15, 35D05.
Abstract:
Scanning tunneling microscopy, temperature-programmed reaction, near-edge X-ray absorption fine structure spectroscopy, and density functional theory calculations were used to study the adsorption and reactions of phenylacetylene and chlorobenzene on Ag(100). In the absence of solvent molecules and additives, these molecules underwent homocoupling and Sonogashira cross-coupling in an unambiguously heterogeneous mode. Of particular interest is the use of silver, previously unexplored, and chlorobenzene—normally regarded as relatively inert in such reactions. Both molecules adopt an essentially flat-lying conformation for which the observed and calculated adsorption energies are in reasonable agreement. Their magnitudes indicate that in both cases adsorption is predominantly due to dispersion forces for which interaction nevertheless leads to chemical activation and reaction. Both adsorbates exhibited pronounced island formation, thought to limit chemical activity under the conditions used and posited to occur at island boundaries, as was indeed observed in the case of phenylacetylene. The implications of these findings for the development of practical catalytic systems are considered.
Abstract:
Tunable photonic elements at the surface of an optical fiber with piezoelectric core are proposed and analyzed theoretically. These elements are based on whispering gallery modes whose propagation along the fiber is fully controlled by nanoscale variation of the effective fiber radius, which can be tuned by means of a piezoelectric actuator embedded into the core. The developed theory allows one to express the introduced effective radius variation through the shape of the actuator and the voltage applied to it. In particular, the designs of a miniature tunable optical delay line and a miniature tunable dispersion compensator are presented. The potential application of the suggested model to the design of a miniature optical buffer is also discussed.
Abstract:
Purpose – The paper aims to explore the gap between theory and practice in foresight and to give some suggestions on how to reduce it. Design/methodology/approach – Analysis of practical foresight activities and suggestions are based on a literature review, the author's own research and practice in the field of foresight and futures studies, and her participation in the work of a European project (COST A22). Findings – Two different types of practical foresight activities have developed. One of them, the practice of foresight of critical futures studies (FCFS) is an application of a theory of futures studies. The other, termed here as praxis foresight (PF), has no theoretical basis and responds directly to practical needs. At present a gap can be perceived between theory and practice. PF distinguishes itself from the practice and theory of FCFS and narrows the construction space of futures. Neither FCFS nor PF deals with content issues of the outer world. Reducing the gap depends on renewal of joint discourses and research about experience of different practical foresight activities and manageability of complex dynamics in foresight. Production and feedback of self-reflective and reflective foresight knowledge could improve theory and practice. Originality/value – Contemporary practical foresight activities are analysed and suggestions to reduce the gap are developed in the context of the linkage between theory and practice. This paper is thought provoking for futurists, foresight managers and university researchers.
Abstract:
Numerical optimization is a technique where a computer is used to explore design parameter combinations to find extremes in performance factors. In multi-objective optimization several performance factors can be optimized simultaneously. The solution to multi-objective optimization problems is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation development and automatically switches amongst those constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach number 2.5. Two objectives were simultaneously optimized: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time the problem was solved by using Modified Newton Impact Theory (MNIT) to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent and present in the optimization. In modern optimization problems, a single objective function evaluation may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function.
Once enough data about the behavior of the objective function has been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to make response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions. These functions involve from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
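The Pareto-dominance test that underlies such a frontier can be sketched in a few lines. The design points below are hypothetical (drag, negative volume) pairs, not data from the study; maximizing volume is cast as minimizing its negative so that both objectives are minimized together:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated points, with every objective minimized.

    A point dominates another if it is no worse in every objective and
    strictly better in at least one.
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical (drag, -volume) designs: three trade-off points survive
designs = [(1.0, -2.0), (0.8, -1.5), (1.2, -2.5), (0.9, -2.2), (1.1, -1.0)]
front = pareto_front(designs)
```

This brute-force filter is O(n²) in the number of candidate designs; evolutionary optimizers such as MOHO's constituents maintain an approximation of this set incrementally rather than recomputing it from scratch.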
Abstract:
The last glacial millennial climatic events (i.e. Dansgaard-Oeschger and Heinrich events) constitute outstanding case studies of coupled atmosphere-ocean-cryosphere interactions. Here, we investigate the evolution of sea-surface and subsurface conditions, in terms of temperature, salinity and sea ice cover, at very high resolution (mean resolution between 55 and 155 years, depending on the proxy) during the 35-41 ka cal BP interval, covering three Dansgaard-Oeschger cycles and including Heinrich event 4, in a new, unpublished marine record, the MD99-2285 core (62.69°N; -3.57°E). We use a large panel of complementary tools, which notably includes dinocyst-derived quantifications of sea-ice cover duration. The high temporal resolution and multiproxy approach of this work allow us to identify the sequence of processes and to assess ocean-cryosphere interactions occurring during these periodic ice-sheet collapse events. Our results reveal a paradoxical hydrological scheme in which (i) Greenland interstadials are marked by a homogeneous and cold upper water column, with intensive winter sea ice formation and summer sea ice melting, and (ii) Greenland and Heinrich stadials are characterized by a very warm, low-salinity surface layer with iceberg calving and reduced sea ice formation, separated by a strong halocline from a less warm and saltier subsurface layer. Our work also suggests that this stadial surface/subsurface warming started before the massive iceberg release, in relation with warm Atlantic water advection. These findings thus support the theory that upper-ocean warming might have triggered European ice-sheet destabilization. Moreover, previous paleoceanographic studies conducted along the Atlantic inflow pathways close to the edge of the European ice sheets suggest that such a feature might have occurred across this whole area. Nonetheless, additional high-resolution paleoreconstructions are required to confirm such a regional scheme.
Abstract:
In this paper, we use density functional theory corrected for on-site Coulomb interactions (DFT + U) and hybrid DFT (HSE06 functional) to study the defects formed when the ceria (110) surface is doped with a series of trivalent dopants, namely, Al3+, Sc3+, Y3+, and In3+. Using the hybrid DFT HSE06 exchange-correlation functional as a benchmark, we show that doping the (110) surface with a single trivalent ion leads to formation of a localized M_Ce′ + O_O• (M = the 3+ dopant) O− hole state, confirming the description found with DFT + U. We use DFT + U to investigate the energetics of dopant compensation through formation of the 2M_Ce′ + V_O•• defect, that is, compensation of two dopants with an oxygen vacancy. In conjunction with earlier work on La-doped CeO2, we find that the stability of the compensating anion vacancy depends on the dopant ionic radius. For Al3+, which has the smallest ionic radius, and Sc3+ and In3+, with intermediate ionic radii, formation of a compensating oxygen vacancy is stable. On the other hand, the Y3+ dopant, with an ionic radius close to that of Ce4+, shows a positive anion vacancy formation energy, as does La3+, which is larger than Ce4+ (J. Phys.: Condens. Matter 2010, 20, 135004). When considering the resulting electronic structure, in Al3+ doping, oxygen hole compensation is found. However, Sc3+, In3+, and Y3+ show the formation of a reduced Ce3+ cation and an uncompensated oxygen hole, similar to La3+. These results suggest that the ionic radius of trivalent dopants strongly influences the final defect formed when doping ceria with 3+ cations. In light of these findings, experimental investigations of these systems will be welcome.
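In Kröger-Vink notation, the vacancy-compensation mechanism discussed above corresponds to the standard sesquioxide doping reaction, written here for a generic trivalent oxide M₂O₃ dissolving into the CeO₂ lattice (a textbook form, not quoted from the paper):

```latex
\mathrm{M_2O_3} \;\xrightarrow{\;\mathrm{CeO_2}\;}\; 2\,\mathrm{M}_{\mathrm{Ce}}' \;+\; \mathrm{V}_{\mathrm{O}}^{\bullet\bullet} \;+\; 3\,\mathrm{O}_{\mathrm{O}}^{\times}
```

Two singly charged substitutional defects (M_Ce′) are charge-balanced by one doubly charged oxygen vacancy (V_O••); the paper's finding is that whether this reaction is favorable over hole compensation depends on the dopant's ionic radius relative to Ce4+.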