266 results for CRITICALITY


Relevance: 10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


By means of the nuclear spin-lattice relaxation rate T_1^{-1}, we follow the spin dynamics as a function of the applied magnetic field in two gapped quasi-one-dimensional quantum antiferromagnets: the anisotropic spin-chain system NiCl2·4SC(NH2)2 and the spin-ladder system (C5H12N)2CuBr4. In both systems, spin excitations are confirmed to evolve from magnons in the gapped state to spinons in the gapless Tomonaga-Luttinger-liquid state. In between, T_1^{-1} exhibits a pronounced, continuous variation, which is shown to scale in accordance with quantum criticality. We extract the critical exponent for T_1^{-1}, compare it to theory, and show that this behavior is identical in both studied systems, thus demonstrating the universality of quantum-critical behavior.
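The quantum-critical scaling invoked here is conventionally expressed as a power law in temperature times a universal function of the distance to the critical field. The form below is the generic ansatz only; the exponent α and scaling function φ are left unspecified rather than taken from this study:

```latex
T_1^{-1}(T, H) \;\propto\; T^{-\alpha}\,
  \phi\!\left(\frac{g\mu_B\,(H - H_c)}{k_B T}\right)
```

where H_c is the critical field at which the spin gap closes.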


The mechanisms responsible for containing activity in systems represented by networks are crucial in various phenomena: for example, in diseases such as epilepsy, which affect neuronal networks, and in information dissemination in social networks. The first models to account for contained activity included triggering and inhibition processes, but they cannot be applied to social networks, where inhibition is clearly absent. A recent model showed that contained activity can be achieved without inhibition processes, provided that the network is subdivided into modules (communities). In this paper, we introduce a new concept inspired by Hebbian theory, in which containment of activity is achieved by incorporating decaying activity into a random-walk mechanism preferential to node activity. Upon selecting the decay coefficient within a proper range, we observed sustained activity in all the networks tested, namely random, Barabási-Albert, and geographical networks. The generality of this finding was confirmed by showing that modularity is no longer needed if the integrate-and-fire dynamics incorporates the decay factor. Taken together, these results provide a proof of principle that persistent yet restrained network activation can occur in the absence of any particular topological structure. This may be the reason why neuronal activity does not spread to the entire neuronal network, even when no special topological organization exists.
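A minimal sketch of such a mechanism: a walker moves preferentially toward more active neighbours, reinforces the activity of the node it lands on, and all activity decays each step. The update rule, decay value, and graph below are illustrative assumptions of this sketch, not the authors' exact model:

```python
import random

def simulate(adj, steps=2000, decay=0.9, seed=0):
    """Activity-preferential random walk with decaying node activity.
    `decay` plays the role of the decay coefficient discussed above;
    the precise preference rule is an assumption of this sketch."""
    rng = random.Random(seed)
    activity = {n: 1.0 for n in adj}          # uniform initial stimulus
    walker = rng.choice(sorted(adj))
    totals = []
    for _ in range(steps):
        nbrs = adj[walker]
        # step preferentially toward the more active neighbours
        walker = rng.choices(nbrs, weights=[activity[n] for n in nbrs])[0]
        activity[walker] += 1.0               # the visit reinforces activity
        for n in activity:                    # all activity decays each step
            activity[n] *= decay
        totals.append(sum(activity.values()))
    return totals

# small random network: a ring (to guarantee connectivity) plus shortcuts
rng = random.Random(1)
n = 30
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
for i in range(n):
    j = rng.randrange(n)
    if j != i:
        adj[i].add(j)
        adj[j].add(i)
adj = {i: sorted(nb) for i, nb in adj.items()}

totals = simulate(adj)
```

With a decay coefficient well below 1, the total activity converges toward the fixed point of (T + 1) · decay = T instead of exploding, which is the containment effect described above.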


Two types of mesoscale wind-speed jet and their effects on boundary-layer structure were studied: a coastal jet off the northern California coast and a katabatic jet over Vatnajökull, Iceland. Coastal regions are highly populated, and coastal meteorology is of general interest for environmental protection, the fishing industry, and air and sea transportation. Far fewer people live in direct contact with glaciers, but the properties of katabatic flows are important for understanding how glaciers respond to climatic change. Hence, the two jets can potentially influence a vast number of people. Flow response to terrain forcing, transient behavior in time and space, and adherence to simplified theoretical models were examined, as was the turbulence structure of these stably stratified boundary layers. Numerical modeling is the main tool in this thesis; observations are used primarily to ensure realistic model behavior. Simple shallow-water theory provides a useful framework for analyzing high-velocity flows along mountainous coastlines, but for an unexpected reason: waves are trapped in the inversion by the curvature of the wind-speed profile, rather than by an infinite stability in the inversion separating two neutral layers, as the theory assumes. In the absence of blocking terrain, observations of steady-state supercritical flows are unlikely, owing to the diurnal variation of flow criticality. Many simplified models neglect non-local processes; for the flows studied here, we show that this is not always a valid approximation. Discrepancies between the simulated katabatic flow and the predictions of an analytical model are hypothesized to stem from non-local effects neglected in the theory, such as surface inhomogeneity and slope geometry.
On a different scale, a reason for variations in the shape of local similarity scaling functions between studies is suggested to be differences in non-local contributions to the velocity variance budgets.
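The flow criticality discussed above is conventionally measured by a Froude number for a single reduced-gravity layer. The sketch below uses illustrative values for wind speed, inversion strength, and layer depth (not data from the thesis) to show how a diurnal wind swing can carry the flow across the critical threshold:

```python
def froude(U, dtheta, theta0, h, g=9.81):
    """Froude number Fr = U / sqrt(g' h) for a boundary layer of depth h
    capped by an inversion of potential-temperature jump dtheta;
    Fr > 1 marks supercritical flow."""
    g_reduced = g * dtheta / theta0   # reduced gravity of the inversion
    return U / (g_reduced * h) ** 0.5

# illustrative diurnal swing: the afternoon wind maximum, together with a
# shallower layer, pushes the flow from subcritical to supercritical
fr_morning = froude(U=8.0, dtheta=6.0, theta0=290.0, h=400.0)
fr_afternoon = froude(U=16.0, dtheta=6.0, theta0=290.0, h=300.0)
```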


This PhD thesis summarizes a three-year neutronic investigation of a new-concept nuclear reactor aimed at the optimization and sustainable management of nuclear fuel in a possible European scenario. A new-generation nuclear reactor for the nuclear renaissance is desired by today's industrialized world, both to address the energy question arising from continuously growing demand and shrinking oil availability, and to address the environmental question through a sustainable energy source free from long-lived radioisotopes and therefore from geological repositories. Among the Generation IV candidate typologies, the Lead Fast Reactor concept was pursued, being the one top-rated in sustainability. The European Lead-cooled SYstem (ELSY) was investigated first. The neutronic analysis of the ELSY core was performed deterministically with the ERANOS code, in order to arrive at a stable configuration for the overall design of the reactor. Further analyses were carried out with the Monte Carlo general-purpose transport code MCNP, both to check the former results and to define an exact model of the system. An innovative system of absorbers was conceived and designed both for reactivity compensation and regulation of the core over the cycle swing, and for safety, to guarantee cold shutdown of the system in case of accident. Aiming at the sustainability of nuclear energy, the steady-state nuclear equilibrium was investigated and generalized into the definition of the "extended" equilibrium state.
On this basis, the Adiabatic Reactor Theory was developed, together with a new paradigm for nuclear power: to design a reactor that exchanges nothing valuable with the environment (hence the term "adiabatic"), in the sense of both plutonium and minor actinides, one must invert the usual logical scheme of core design, starting from the equilibrium composition of the fuel and subordinating the whole core design to it. The new paradigm was then applied to the core design of an Adiabatic Lead Fast Reactor (ALFR) complying with the ELSY overall system layout. A complete core characterization was carried out to assess criticality and power flattening, and a preliminary evaluation of the main safety parameters was performed to verify the viability of the system. Burn-up calculations were then performed to investigate the operating cycle of the ALFR; the resulting fuel performances were inserted into a more general analysis for a European scenario. The present nuclear reactor fleet was modeled and its evolution simulated with the COSI code, in order to investigate the material fluxes to be managed in the European region. Different plausible scenarios were identified to forecast the evolution of European nuclear energy production, including one involving the introduction of ALFRs, and compared in order to analyze the advantages introduced by the adoption of new-concept reactors. Finally, since both ELSY and the ALFR are new-concept systems based on innovative solutions, the neutronic design of a demonstrator reactor was carried out: such a system is intended to prove the viability of the technology to be implemented in the first-of-a-kind industrial power plant, and to attest the general strategy to the largest possible extent.
The DEMO design was therefore based on a compromise between demonstrating developed technology and testing emerging technology, so as to reduce uncertainties about construction and licensing, validate the main ELSY/ALFR features and performances, and qualify numerical codes and tools.


3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and thus avoiding the soft-tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of a millimetre or a degree, or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is still needed to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth in order to evaluate its pros and cons and to provide reliable solutions to its limitations. To this aim, an analytical approach was followed. In-silico preliminary studies isolated the major sources of error as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, and (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed. To enlarge the convergence basin around the optimal pose, the local approach used either sequential alignment of the six degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro through a phantom-based comparison with a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, as in methodological research studies; the mono-planar analysis may suffice for clinical applications where analysis time and cost are an issue. A further reduction of user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
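The sequential alignment of the six degrees of freedom in order of sensitivity can be sketched as a coordinate descent with a line search per degree of freedom. The quadratic cost below is a synthetic stand-in for the actual projection-based image-similarity metric, and the pose values and sensitivity weights are invented for illustration:

```python
# Coordinate-descent pose refinement: align 6 DOFs one at a time, most
# sensitive first. TRUE_POSE and WEIGHTS are hypothetical; the quadratic
# cost stands in for the real 2D/3D image-similarity metric.
TRUE_POSE = (1.2, -0.8, 40.0, 2.0, -1.5, 0.5)   # tx, ty, tz, rx, ry, rz
WEIGHTS   = (10.0, 10.0, 0.1, 5.0, 5.0, 5.0)    # out-of-plane tz least sensitive

def cost(pose):
    return sum(w * (p - t) ** 2 for w, p, t in zip(WEIGHTS, pose, TRUE_POSE))

def line_search(pose, dof, lo=-50.0, hi=50.0, iters=60):
    """Golden-section search on a single degree of freedom."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        fc = cost(pose[:dof] + (c,) + pose[dof + 1:])
        fd = cost(pose[:dof] + (d,) + pose[dof + 1:])
        a, b = (a, d) if fc < fd else (c, b)
    x = (a + b) / 2
    return pose[:dof] + (x,) + pose[dof + 1:]

def refine(start, sweeps=3):
    order = sorted(range(6), key=lambda i: -WEIGHTS[i])  # sensitivity order
    pose = start
    for _ in range(sweeps):
        for dof in order:
            pose = line_search(pose, dof)
    return pose

est = refine((0.0,) * 6)
```

Aligning the most sensitive degrees of freedom first narrows the residual cost quickly, so the weakly constrained out-of-plane translation is searched last, over an already-reduced landscape.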


MFA and LCA methodologies were applied to analyse the anthropogenic aluminium cycle in Italy, focusing on the historical evolution of stocks and flows of the metal, embodied GHG emissions, and recycling potentials, in order to provide Italy with key inputs for prioritizing industrial policy toward low-carbon technologies and materials. Historical time series from 1947 to 2009 were collected and balanced with data from the production, manufacturing, and waste management of aluminium-containing products, using a 'top-down' approach to quantify the contemporary in-use stock of the metal and to identify applications where aluminium is not yet recycled to its full potential, as well as present and future recycling flows. The MFA results were used as the basis for an LCA aimed at evaluating the evolution of the carbon footprint embodied in Italian aluminium, covering primary and electrical energy, the smelting process, and transportation. The study also discusses how the main factors of the Kaya Identity influenced the Italian GHG emissions pattern over time, and which levers could mitigate it. The contemporary anthropogenic reservoir of aluminium was estimated at about 320 kg per capita, mainly embedded in the transportation and building-and-construction sectors. The cumulative in-use stock represents approximately 11 years of supply at current usage rates (about 20 Mt versus 1.7 Mt/year) and would imply a potential saving of about 160 Mt of CO2-eq emissions. A discussion of the criticality of aluminium waste recovery from the transportation and container-and-packaging sectors is also included, providing an example of how MFA and LCA can support decision-making at the sectoral or regional level. The research constitutes the first attempt at an integrated MFA-LCA approach applied to the aluminium cycle in Italy.
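The 'top-down' stock estimate described above amounts to accumulating inflows to use minus end-of-life outflows year by year. A minimal sketch with made-up flows (the figures are illustrative, not the Italian data):

```python
# Top-down in-use stock accounting: stock(t) = stock(t-1) + inflow(t) - outflow(t),
# where inflow is metal entering use and outflow is end-of-life scrap.
def in_use_stock(inflows, outflows, initial=0.0):
    stock, series = initial, []
    for inn, out in zip(inflows, outflows):
        stock += inn - out
        series.append(stock)
    return series

# illustrative flows in kt/year (NOT the study's data)
inflows  = [100, 120, 150, 170, 200]
outflows = [ 20,  30,  40,  60,  80]
stocks = in_use_stock(inflows, outflows)

# "years of supply at current usage rates" = stock / current annual inflow
years_of_supply = stocks[-1] / inflows[-1]
```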


This study is part of a research effort aimed at formulating design criteria for optimizing the energy performance of the wineries of small-to-medium-sized wine producers. Specifically, the research aims to identify indicators that can assess the influence of the main design variables on the energy demand of the building and on the temperature profiles inside the rooms where wine is stored and aged. These indicators provide information on the energy performance of the building and on the suitability of non-air-conditioned rooms for wine storage. Since design is a complex multidisciplinary activity, the research involved the development of a calculation program able to manage and process data from different domains (engineering, architecture, agro-industrial production, etc.) and to return synthetic results through the indicators identified for this purpose. The program was applied to a case-study winery representative of the production sector. The effects of two harvesting schedules and four different architectural solutions were examined. The building solutions derive from combinations of different levels of thermal insulation and the presence or absence of underground rooms. The case-study analyses relied on dynamic energy simulations, supported and validated by thermal and meteorological monitoring campaigns within the winery under study. The results showed that the calculation program conceived in this study identifies the critical aspects of the building in terms of energy use and of the 'thermal comfort' of the wine, and allows an iterative revision of the design variables investigated.
It is therefore a computerized assessment tool supporting design, aimed at optimizing the design process so as to combine, in an integrated way, the objectives of product quality, production efficiency, and economic and environmental sustainability.


The thesis revisits a topic that has been the subject of very thorough studies in the past; today it seems to have returned to prominence thanks to contributions that have again prompted scholars to confront such delicate issues, also in light of the economic crisis. It has long been observed that well-drafted legislation is a fundamental factor for relaunching the country's economy, for simplification, and for guaranteeing order, coherence, and clarity in the legal system. The first part focuses on a historical and legal reconstruction of the sources that have governed the 'rules on the quality of rules', together with an overview of the scholarship that has dealt with the subject in the past. This is followed by the specific identification of the formal and substantive drafting rules. In particular, one part is devoted to constitutional case law, to understand whether or not the Constitutional Court has a foothold allowing it to review 'obscure rules' and declare them unlawful. The second part analyses actual practice; specifically, the relationship between Government and Parliament is examined with respect to the main problems affecting the legislative procedure and the framework within which it unfolds, in relation to emergency decrees, maxi-amendments, questions of confidence, committee-stage scrutiny, and pressure groups. What emerged is poor adherence to better-regulation principles and criteria, which are moreover hard for the Constitutional Court to enforce and are removed from the control of the bodies that do have competence in this area, namely the Committee on Legislation and the DAGL. The conclusions therefore start from the critical issues identified and attempt to trace a path forward that respects the canons of 'better regulation', also in light of the constitutional and parliamentary-rules reforms currently under approval.


Over recent decades, the safety and reliability of process-industry plants has become increasingly important. Reliability analysis makes it possible to identify the most at-risk critical components of a plant. In this thesis, a reliability analysis was carried out for three plants at the SOL facility in Mantua: the low-pressure nitrogen vaporization plant, the medium-pressure nitrogen vaporization plant, and the synthetic-air production plant. Starting from the P&ID diagrams of the plants, their possible failure modes were analysed using the FMECA technique (Failure Modes, Effects and Criticality Analysis). Once the failure modes were defined, plant reliability was quantified using FTA (Fault Tree Analysis). The results of the fault-tree analysis made it possible to identify the basic events that contribute most to the failure of the systems studied, allowing hypotheses to be formulated for increasing plant reliability.
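A fault tree of the kind described is evaluated by propagating basic-event probabilities through AND/OR gates, assuming independent events. The tree and probabilities below are a generic illustration, not taken from the SOL plants' analysis:

```python
# Minimal fault-tree evaluation under the independence assumption:
# AND gate: product of inputs; OR gate: 1 - product of complements.
def p_and(*ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# hypothetical basic events for a vaporizer failing to deliver gas
valve_stuck  = 1e-3
heater_fault = 5e-4
pump_a_fails = 2e-2
pump_b_fails = 2e-2

# both redundant pumps must fail (AND); any branch triggers the top event (OR)
top_event = p_or(valve_stuck, heater_fault, p_and(pump_a_fails, pump_b_fails))
```

Comparing each basic event's contribution to the top-event probability is what identifies the dominant failure causes, as described above.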


This thesis studies the emergence of critical events in a simple Integrate-and-Fire neural model based on Markovian stochastic dynamical processes defined on a network. The electrical neural signal was modelled as a flow of particles. Attention was focused on the transient phase of the system, seeking phenomena similar to neural synchronization, which can be regarded as a critical event. Particularly simple networks were studied, and the proposed model was found to produce 'cascade' effects in neural activity, due to Self-Organized Criticality (the self-organization of the system into unstable states); these effects are not observed in random walks on the same networks. A small random stimulus was seen to generate sizeable fluctuations in network activity, particularly when the system is in a phase at the edge of equilibrium. The activity peaks thus detected were interpreted as avalanches of neural signal, a phenomenon attributable to synchronization.
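A toy version of such cascade dynamics is a threshold (sandpile-like) integrate-and-fire model: each node accumulates signal and, on crossing a threshold, fires part of its load to its neighbours, possibly triggering an avalanche. This is a generic illustration of the mechanism on a ring network, not the thesis's exact model:

```python
import random

def avalanche_sizes(n=50, steps=3000, threshold=4, seed=2):
    """Drive a ring network with unit stimuli; count topplings per avalanche."""
    rng = random.Random(seed)
    load = [0] * n
    sizes = []
    for _ in range(steps):
        load[rng.randrange(n)] += 1          # small random stimulus
        size = 0
        unstable = [i for i in range(n) if load[i] >= threshold]
        while unstable:
            i = unstable.pop()
            if load[i] < threshold:
                continue
            load[i] -= threshold             # fire: shed load...
            left, right = (i - 1) % n, (i + 1) % n
            load[left] += 1                  # ...one unit to each neighbour,
            load[right] += 1                 # the rest dissipates (containment)
            size += 1
            unstable.extend([left, right])
        if size:
            sizes.append(size)
    return sizes

sizes = avalanche_sizes()
```

Because each toppling dissipates part of the signal, every avalanche terminates, yet a single unit of stimulus can still trigger a chain of topplings, mirroring the large fluctuations near the edge of equilibrium described above.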


For the first time in metallic glasses, we extract both the exponents and the scaling functions that describe the nature, statistics, and dynamics of slip events during slow deformation, according to a simple mean-field model. We model the slips as avalanches of rearrangements of atoms in coupled shear transformation zones (STZs). Using high-temporal-resolution measurements, we find the predicted, distinct statistics and dynamics for small and large slips, thereby excluding self-organized criticality. The agreement between model and data across numerous independent measures provides evidence for slip avalanches of STZs as the elementary mechanism of inhomogeneous deformation in metallic glasses.
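For context, in the mean-field picture invoked here, slip (avalanche) sizes S are generically expected to follow a truncated power law; the exponent shown is the standard mean-field value, stated as general background rather than as this paper's fitted result:

```latex
P(S) \;\sim\; S^{-\tau}\, f\!\left(\frac{S}{S_{\max}}\right),
\qquad \tau = \tfrac{3}{2} \ \text{(mean field)},
```

where f is a universal cutoff function and S_max sets the largest slips; deviations of small- and large-slip statistics from a single untruncated power law are what distinguish this picture from self-organized criticality.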


This article addresses kriging-based optimization of stochastic simulators. Many such simulators depend on factors that tune the precision of the response, the gain in accuracy coming at the price of computational time. The contribution of this work is two-fold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected-improvement criterion, which allows an elegant treatment of heterogeneous response precisions; second, we present a procedure for allocating the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.
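For reference, the classical expected-improvement criterion that the proposed quantile-based criterion generalizes has a closed form in the kriging predictive mean and standard deviation. The sketch below shows that generic formula (for minimization), not the paper's new criterion:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Classical EI for minimization, given the kriging predictive mean mu
    and standard deviation sigma at a candidate point, and the current
    best observed value f_min."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)          # no uncertainty left
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_min - mu) * cdf + sigma * pdf  # exploitation + exploration

ei_uncertain = expected_improvement(mu=1.0, sigma=0.5, f_min=1.2)
ei_certain   = expected_improvement(mu=1.0, sigma=0.0, f_min=1.2)
```

The exploration term sigma * pdf is what a heterogeneous-precision setting complicates: points measured at low precision carry larger sigma, which is the situation the quantile-based criterion is designed to handle.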


We consider percolation properties of the Boolean model generated by a Gibbs point process and balls with deterministic radius. We show that for a large class of Gibbs point processes there exists a critical activity, such that percolation occurs a.s. above criticality. For locally stable Gibbs point processes we show a converse result, i.e. they do not percolate a.s. at low activity.
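In standard notation (a generic definition, not specific to this paper's Gibbs setting), the critical activity for the Boolean model with deterministic radius r is:

```latex
\lambda_c \;=\; \inf\Bigl\{\lambda > 0 \;:\;
  \mathbb{P}_\lambda\Bigl(\textstyle\bigcup_{x \in \eta} B(x, r)
  \ \text{has an unbounded connected component}\Bigr) > 0\Bigr\},
```

where η denotes the point process at activity λ; the result quoted above says λ_c is finite for a large class of Gibbs processes, and strictly positive in the locally stable case.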