950 results for Scaling Strategies
Abstract:
Critical exponents that describe a transition from unlimited to limited diffusion for a ratchet system are obtained analytically and numerically. The system is described by a two-dimensional nonlinear mapping with three relevant control parameters: two of them control the non-linearity, while the third controls the intensity of the dissipation. Chaotic attractors, which appear in the phase space due to the dissipation at large non-linearity, are characterised by the use of Lyapunov exponents. The critical exponents are used to overlap different curves of average momentum (the dynamical variable) onto a single plot, confirming a scale invariance. The formalism is general and the procedure can be extended to different systems.
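The curve-overlap procedure described above can be sketched generically. Assuming a scaling hypothesis of the form V(n, ε) = ε^α f(n ε^z) — the exponent names, their toy values, and the universal function below are illustrative, not taken from the paper — rescaling both axes collapses curves measured at different dissipation strengths:

```python
import math

def collapse_point(n, v, eps, alpha, z):
    """Rescale one point of an average-momentum curve: with the right
    critical exponents, (n * eps**z, v / eps**alpha) falls onto a single
    universal curve for every dissipation strength eps."""
    return n * eps**z, v / eps**alpha

# Toy check with illustrative exponents alpha = 1/2, z = -1 and an assumed
# universal function f(x) = sqrt(x), so that v(n) = eps**alpha * f(n * eps**z).
alpha, z = 0.5, -1.0

def v_model(n, eps):
    return eps**alpha * math.sqrt(n * eps**z)

p1 = collapse_point(4, v_model(4, 0.01), 0.01, alpha, z)
p2 = collapse_point(8, v_model(8, 0.02), 0.02, alpha, z)
# Both points land at approximately (400.0, 20.0): after rescaling, the
# two curves overlap, which is the signature of scale invariance.
```

In practice the exponents are not assumed but fitted: they are the values for which the measured curves, rescaled this way, superpose best.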
Abstract:
The aim of this study was to evaluate the effects of different power parameters of an erbium, chromium: yttrium, scandium, gallium, garnet laser (Er,Cr:YSGG laser) on the morphology, attachment of blood components (ABC), roughness, and wear of irradiated root surfaces. Sixty-five bovine incisor teeth were used in this study, 35 of which were used for the analysis of root surface morphology and ABC. The remaining 30 teeth were used for roughness and root wear analysis. The samples were randomly allocated into seven groups: G1: Er,Cr:YSGG laser, 0.5 W; G2: Er,Cr:YSGG laser, 1.0 W; G3: Er,Cr:YSGG laser, 1.5 W; G4: Er,Cr:YSGG laser, 2.0 W; G5: Er,Cr:YSGG laser, 2.5 W; G6: Er,Cr:YSGG laser, 3.0 W; G7: scaling and root planing (SRP) with manual curettes. The root surfaces irradiated by the Er,Cr:YSGG laser at 1.0 W and those scaled with manual curettes presented the highest degrees of ABC. The samples irradiated by the Er,Cr:YSGG laser were rougher than the samples treated with the manual curette, and increasing the laser power caused more root wear and greater roughness on the root surface. The Er,Cr:YSGG laser is safe to use for periodontal treatment, but irradiation greater than 1.0 W is not appropriate for this purpose. Microsc. Res. Tech. 78:529–535, 2015. © 2015 Wiley Periodicals, Inc.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Spreadsheets are widely used but often contain faults. Thus, in prior work we presented a data-flow testing methodology for use with spreadsheets, which studies have shown can be used cost-effectively by end-user programmers. To date, however, the methodology has been investigated across a limited set of spreadsheet language features. Commercial spreadsheet environments are multiparadigm languages, utilizing features not accommodated by our prior approaches. In addition, most spreadsheets contain large numbers of replicated formulas that severely limit the efficiency of data-flow testing approaches. We show how to handle these two issues with a new data-flow adequacy criterion and automated detection of areas of replicated formulas, and report results of a controlled experiment investigating the feasibility of our approach.
Abstract:
Using theoretical arguments, a simple scaling law for the size of the intrinsic rotation observed in tokamaks in the absence of momentum injection is found: the velocity generated in the core of a tokamak must be proportional to the ion temperature difference in the core divided by the plasma current, independent of the size of the device. The constant of proportionality is of the order of 10 km·s⁻¹·MA·keV⁻¹. When the intrinsic rotation profile is hollow, i.e., countercurrent in the core of the tokamak and cocurrent at the edge, the scaling law presented in this Letter fits the data remarkably well for several tokamaks of vastly different size heated by different mechanisms.
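The quoted scaling law lends itself to a one-line estimate. A minimal sketch — the function name and the sample inputs are illustrative; only the form v ∝ ΔT_i/I_p and the constant C ≈ 10 km·s⁻¹·MA·keV⁻¹ come from the abstract:

```python
def intrinsic_rotation_kms(delta_ti_kev, plasma_current_ma, c=10.0):
    """Core intrinsic rotation (km/s) predicted by the scaling law
    v ~ C * dT_i / I_p, with C ~ 10 km/s * MA / keV, independent of
    device size."""
    return c * delta_ti_kev / plasma_current_ma

# Example: a 2 keV core ion temperature difference at 1 MA plasma current.
velocity = intrinsic_rotation_kms(2.0, 1.0)  # 20.0 km/s
```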
Abstract:
In this paper we investigate the quantum phase transition from magnetic Bose glass to magnetic Bose–Einstein condensation induced by a magnetic field in NiCl₂·4SC(NH₂)₂ (dichloro-tetrakis-thiourea-nickel, or DTN) doped with Br (Br-DTN) or site diluted. Quantum Monte Carlo simulations for the quantum phase transition of the model Hamiltonian for Br-DTN, as well as for site-diluted DTN, are consistent with conventional scaling at the quantum critical point and with a critical exponent z verifying the prediction z = d; moreover, the correlation length exponent is found to be ν = 0.75(10), and the order parameter exponent β = 0.95(10). We investigate the low-temperature thermodynamics at the quantum critical field of Br-DTN both numerically and experimentally, and extract the power-law behavior of the magnetization and of the specific heat. Our results for the exponents of the power laws, as well as previous results for the scaling of the critical temperature to magnetic ordering with the applied field, are incompatible with the conventional crossover-scaling Ansatz proposed by Fisher et al. [Phys. Rev. B 40, 546 (1989)]. However, they can all be reconciled within a phenomenological Ansatz in the presence of a dangerously irrelevant operator.
Abstract:
Mitochondria must grow with the growing cell to ensure proper cellular physiology and inheritance upon division. We measured the physical size of mitochondrial networks in budding yeast and found that mitochondrial network size increased with increasing cell size and that this scaling relation occurred primarily in the bud. The mitochondria-to-cell size ratio continually decreased in aging mothers over successive generations. However, regardless of the mother's age or mitochondrial content, all buds attained the same average ratio. Thus, yeast populations achieve a stable scaling relation between mitochondrial content and cell size despite asymmetry in inheritance.
Abstract:
The escape dynamics of a classical light ray inside a corrugated waveguide is characterised by the use of scaling arguments. The model is described via a two-dimensional, nonlinear, area-preserving mapping. The phase space of the mapping contains a set of periodic islands surrounded by a large chaotic sea that is confined by a set of invariant tori. When a hole is introduced in the chaotic sea, letting the ray escape, the histogram of escape frequency exhibits rapid growth, reaching a maximum value at n_p and later decaying asymptotically to zero. The behaviour of this histogram is characterised using scaling arguments. The scaling formalism is widely applicable to critical phenomena and is useful in the characterisation of phase transitions, including transitions from limited to unlimited energy growth in two-dimensional time-varying billiard problems. © 2011 Elsevier B.V. All rights reserved.
Abstract:
Measurements of the anisotropy parameter v_2 of identified hadrons (pions, kaons, and protons) as a function of centrality, transverse momentum p_T, and transverse kinetic energy KE_T at midrapidity (|η| < 0.35) in Au + Au collisions at √s_NN = 200 GeV are presented. Pions and protons are identified up to p_T = 6 GeV/c, and kaons up to p_T = 4 GeV/c, by combining information from time-of-flight and aerogel Cherenkov detectors in the PHENIX experiment. The scaling of v_2 with the number of valence quarks (n_q) has been studied in different centrality bins as a function of transverse momentum and transverse kinetic energy. A deviation from previously observed quark-number scaling is observed at large values of KE_T/n_q in noncentral Au + Au collisions (20–60%), but this scaling remains valid in central collisions (0–10%).
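The quark-number scaling test described above amounts to rescaling both axes by n_q and checking whether all hadron species collapse onto one curve. A minimal sketch — the numerical values are illustrative, not PHENIX data:

```python
def nq_scale(ket_gev, v2, n_q):
    """Quark-number scaling: map a point (KE_T, v2) to (KE_T/n_q, v2/n_q).
    n_q = 2 for mesons (pions, kaons) and 3 for baryons (protons); the
    scaling holds if all species fall on one curve after this mapping."""
    return ket_gev / n_q, v2 / n_q

# Illustrative (not measured) points: a proton and a pion whose scaled
# coordinates coincide, as expected when quark-number scaling holds.
x_p, y_p = nq_scale(1.5, 0.15, 3)    # proton -> ~(0.5, 0.05)
x_pi, y_pi = nq_scale(1.0, 0.10, 2)  # pion   -> ~(0.5, 0.05)
```

The deviation reported in noncentral collisions corresponds to species-dependent curves surviving this mapping at large KE_T/n_q.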
Abstract:
Precipitation and desert dust event occurrence time series measured in the Canary Islands region are examined with the primary intention of exploring their scaling characteristics as well as their spatial variability in terms of the islands' topography and geographical orientation. In particular, the desert dust intrusion regime in the islands is studied in terms of its relationship with visibility. Analysis of dust and rainfall events over the archipelago exhibits distributions in time that obey power laws. Results show that the rain process presents a highly clustered and irregular pattern on short timescales and a more scattered structure on long ones. In contrast, dustiness presents a more uniform and dense structure and, consequently, a more persistent behaviour on short timescales. It was observed that the fractal dimension of rainfall events shows an important spatial variability, which increases with altitude, as well as towards northern latitudes and western longitudes.
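The abstract does not state which estimator of the fractal dimension is used; a common choice for event-occurrence time series is box counting, sketched below (the regression and the sample series are illustrative):

```python
import math

def box_counting_dimension(event_times, scales):
    """Estimate the box-counting (fractal) dimension of a set of event
    times: count occupied boxes N(s) at each box size s, then fit the
    slope of log N(s) versus log(1/s) by least squares."""
    xs, ys = [], []
    for s in scales:
        occupied = {int(t // s) for t in event_times}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A dense, regular event series fills nearly every box, so its estimated
# dimension is close to 1; strongly clustered series give smaller values.
regular = [i * 0.1 for i in range(1000)]
dim = box_counting_dimension(regular, [1.0, 2.0, 4.0, 8.0])  # ~1
```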
Abstract:
The objective of this thesis work is the refined estimation of source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate spectral parameters, that is, corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11–10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool for studying seismic source properties: Empirical Green Functions (EGFs) allow one to represent the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to (1) a large event, the Mw = 6.3 2009 L'Aquila mainshock (Central Italy); (2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes ranging between 3 and 5.6; and (3) a small event, the Mw = 2.9 Laviano mainshock (Southern Italy).
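The abstract does not spell out the parametric spectral model being inverted; a common choice for P- and S-wave displacement spectra is an omega-square (Brune-type) source spectrum multiplied by an anelastic attenuation term, sketched below (parameter names and sample values are illustrative, not the thesis's actual model):

```python
import math

def displacement_spectrum(f, omega0, fc, t_star, gamma=2.0):
    """Far-field displacement amplitude spectrum: flat low-frequency level
    omega0, corner frequency fc (Hz), whole-path attenuation factor
    exp(-pi * f * t*), and high-frequency fall-off exponent gamma
    (gamma = 2 is the classic omega-square model)."""
    return omega0 * math.exp(-math.pi * f * t_star) / (1.0 + (f / fc) ** gamma)

# With t* = 0 the spectrum is flat at omega0 below fc and decays as
# f**-gamma well above it; attenuation steepens the high-frequency decay.
low = displacement_spectrum(0.05, 1.0, 5.0, 0.0)   # ~1.0
high = displacement_spectrum(50.0, 1.0, 5.0, 0.0)  # ~0.0099
```

In a multi-step inversion of this kind, omega0 and fc are the spectral parameters named in the abstract, while t* absorbs the attenuation correction.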
Abstract:
In this work we studied the presence of so-called unusual corrections to the excited states of conformal field theories. We first briefly described the Calabrese–Cardy approach to the entanglement entropy of one-dimensional systems at the critical point. This approach yields the famous, universal logarithmic divergence of this quantity. On top of this logarithmic behaviour there are corrections, depending on the geometry underlying the Calabrese–Cardy approach, whose particular scaling is known and has been observed in very many works in the literature. This scaling is due to the local breaking of conformal symmetry, itself a consequence of the criticality of the system, around particular points, called branch points, used in the Calabrese–Cardy approach. In this work we have shown that the corrections to the entanglement entropy of the excited states of the conformal theory, which can also be computed through the Calabrese–Cardy approach, have the same scaling as those observed for the ground states. Our theoretical results were then fully confirmed by the numerical calculations we performed on the excited states of the XX model. Known results for the ground state of the same model were also used to study the form of the corrections of its excited states. This study led to the conclusion that the form of the corrections in the two cases is the same up to a universal function.
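For reference, in the Calabrese–Cardy framework the Rényi entropy S_n of an interval of length ℓ in an infinite critical chain of central charge c takes the form below; the correction term shown is the one established in the literature for ground states, whose scaling this work finds to carry over to excited states (c'_n and b_n are non-universal constants, and h is the scaling dimension of the operator responsible for the unusual corrections):

```latex
S_n(\ell) = \frac{c}{6}\left(1 + \frac{1}{n}\right)\ln \ell
          + c'_n + b_n\,\ell^{-2h/n} + \cdots
```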
Abstract:
Constructing ontology networks typically occurs at design time at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without tampering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks. In many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, this should occur only for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and can reduce the number of erroneously interpreted axioms, under certain raw statement distributions across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components we put on top of traditional ontology networks, and the framework's caching capabilities.
Abstract:
This study is focused on radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First the state of the art of nanoparticle production through conventional and plasma routes is summarized; then results for the characterization of the plasma source and for the investigation of the nanoparticle synthesis phenomenon are presented, aiming at highlighting fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy probe measurements to validate the temperature field predicted by the model, which is used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles, and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed; by employing models to describe particle trajectories and thermal histories, adapted from ones originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized Si solid precursor in a laboratory-scale ICP system is investigated. Finally, a discussion of the role of thermo-fluid dynamic fields in nanoparticle formation is presented, as well as a study of the effect of the reaction chamber geometry on produced nanoparticle characteristics and process yield.