951 results for Non-commutative Landau problem


Relevance:

30.00%

Publisher:

Abstract:

This paper aims to provide an improved NSGA-II (Non-Dominated Sorting Genetic Algorithm, version II) which incorporates a parameter-free self-tuning approach based on a reinforcement learning technique, called the Non-Dominated Sorting Genetic Algorithm Based on Reinforcement Learning (NSGA-RL). The proposed method is compared in particular with the classical NSGA-II when applied to a satellite coverage problem. Furthermore, not only are the optimization results compared with those obtained by other multiobjective optimization methods, but the method also offers the advantage of requiring no time-consuming and complex parameter tuning.
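The abstract does not spell out the reinforcement-learning rule used by NSGA-RL, so the following is only a minimal sketch of the general idea, assuming an epsilon-greedy bandit that picks crossover and mutation probabilities each generation and is rewarded by fitness improvement. The toy single-objective loop, `EpsilonGreedyTuner`, `toy_fitness` and all constants are illustrative, not taken from the paper; an actual NSGA-RL would keep NSGA-II's non-dominated sorting and reward a multiobjective quality indicator instead.

```python
import random

# Candidate (crossover prob, mutation prob) pairs the controller can pick from.
ACTIONS = [(pc, pm) for pc in (0.6, 0.8, 0.9) for pm in (0.01, 0.05, 0.1)]

class EpsilonGreedyTuner:
    """Epsilon-greedy bandit over parameter settings; reward = fitness gain."""
    def __init__(self, eps=0.1, alpha=0.2):
        self.q = {a: 0.0 for a in ACTIONS}   # running value of each setting
        self.eps, self.alpha = eps, alpha
    def choose(self):
        if random.random() < self.eps:       # explore
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)   # exploit best-known setting
    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

def toy_fitness(x):
    """Stand-in for the (multiobjective) evaluation; here a 1-D maximization."""
    return -(x - 3.0) ** 2

def evolve(pop, pc, pm):
    """One generation of a deliberately minimal GA using the chosen rates."""
    parents = sorted(pop, key=toy_fitness, reverse=True)[:len(pop) // 2]
    children = []
    for p in parents:
        c = p
        if random.random() < pc:             # blend "crossover" with a peer
            c = 0.5 * (c + random.choice(parents))
        if random.random() < pm:             # Gaussian mutation
            c += random.gauss(0.0, 0.5)
        children.append(c)
    return parents + children

tuner = EpsilonGreedyTuner()
pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
best = max(map(toy_fitness, pop))
for _ in range(50):
    pc, pm = tuner.choose()                  # self-tuned parameters, no hand tuning
    pop = evolve(pop, pc, pm)
    new_best = max(map(toy_fitness, pop))
    tuner.update((pc, pm), reward=new_best - best)
    best = new_best
print("best fitness found:", round(best, 4))
```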

Relevance:

30.00%

Publisher:

Abstract:

Abstract Background: Low back pain is a relevant public health problem, being an important cause of work absenteeism worldwide, as well as affecting sufferers' quality of life and individual functional performance. Supervised active physical routines and cognitive-behavioral therapies are recommended for the treatment of chronic low back pain, although evidence on the relative effectiveness of different techniques is lacking. Accordingly, the aim of this study is to compare the effectiveness of two types of exercise, graded activity and supervised exercise, in decreasing the symptoms of chronic low back pain. Methods/design: The sample will consist of 66 patients, blindly allocated to one of two groups: 1) graded activity, which, based on an operant approach, will use time-contingent methods aiming to increase participants' activity levels; 2) supervised exercise, in which participants will be trained in strengthening, stretching, and motor control exercises targeting different muscle groups. Interventions will last one hour and will take place twice a week for 6 weeks. Outcomes (pain, disability, quality of life, global perceived effect, return to work, physical activity, physical capacity, and kinesiophobia) will be assessed at baseline, at treatment end, and three and six months after treatment end. Data collection will be conducted by an investigator blinded to treatment allocation. Discussion: This project describes the randomisation method that will be used to compare the effectiveness of two different treatments for chronic low back pain: graded activity and supervised exercise. Since the optimal approach for patients with chronic low back pain has not yet been defined based on evidence, good-quality studies on the subject are necessary. Trial registration: NCT01719276

Relevance:

30.00%

Publisher:

Abstract:

Motion control is a sub-field of automation, in which the position and/or velocity of machines are controlled using some type of device. In motion control the position, velocity, force, pressure, etc. profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which a perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of the temporal properties, and subsequently in the synchronization of different profiles, are shown for networks adopting an event-triggered communication system. These networks are characterized by the fact that a common knowledge of the global time is not available; therefore they are non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops, adopted for the basic master-slave case, is improved to cope with the other configurations.
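The thesis' actual PLL-based reconstruction scheme is not detailed in the abstract; the sketch below only illustrates the generic idea of a discrete-time software phase-locked loop with which a slave node, lacking a global clock, locks its local cycle onto the master's sync messages. The gains, the 1 ms cycle and the jitter model are assumptions for illustration.

```python
import random

NOMINAL_PERIOD = 1.0e-3   # assumed 1 ms fieldbus cycle
KP, KI = 0.3, 0.05        # assumed PI loop-filter gains

def master_sync_times(n, jitter=5e-5):
    """Event-triggered sync instants: nominal period plus network jitter."""
    t, out = 0.0, []
    for _ in range(n):
        t += NOMINAL_PERIOD + random.gauss(0.0, jitter)
        out.append(t)
    return out

def slave_pll(sync_times):
    """Lock the slave cycle onto the master's: a PI filter corrects the
    local period (frequency) and phase from each measured phase error."""
    local_t = sync_times[0]               # synchronize on the first event
    period = NOMINAL_PERIOD * 1.02        # slave clock starts 2% off
    integ, errors = 0.0, []
    for t_master in sync_times[1:]:
        local_t += period                 # predicted next sync instant
        err = t_master - local_t          # phase error on message arrival
        integ += err
        period += KP * err + KI * integ   # frequency correction
        local_t += KP * err               # partial phase correction
        errors.append(abs(err))
    return errors

errs = slave_pll(master_sync_times(200))
print("mean |phase error|, last 50 cycles: %.2e s" % (sum(errs[-50:]) / 50))
```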

Relevance:

30.00%

Publisher:

Abstract:

We study some perturbative and nonperturbative effects in the framework of the Standard Model of particle physics. In particular we consider the time dependence of the Higgs vacuum expectation value given by the dynamics of the Standard Model, and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. Then, we solve numerically some relevant cases for Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vacuum expectation value (vev). As perturbative effects, we consider the leading logarithmic resummation in small Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc limit (planar limit) case, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit case. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati–Salam model which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
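As a hedged illustration of the non-adiabatic production mechanism mentioned above (the standard mode-function framework, not the thesis' specific equations or couplings): a bosonic field χ coupled to the oscillating vev v(t) has Fourier modes obeying

```latex
% Schematic mode equation for boson production by an oscillating Higgs vev v(t);
% g is an assumed coupling, and conventions may differ from the thesis.
\ddot{\chi}_k + \omega_k^2(t)\,\chi_k = 0,
\qquad
\omega_k^2(t) = k^2 + g^2\, v^2(t),
```

and particle production is intrinsically non-perturbative precisely where the adiabaticity condition |dω_k/dt| ≪ ω_k² fails during the oscillation.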

Relevance:

30.00%

Publisher:

Abstract:

This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}, j = 1,...,m, is the control distance obtained by minimizing the time needed to go between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group and, moreover, we require that the vector fields satisfy left invariance. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition of regularity for boundary points for the Dirichlet problem on an open subset of R^N related to sub-Laplacians. On a Carnot group we give the essential background for this type of operator and introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
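In symbols, the control distance described above can be written as follows (a standard formulation of the Carnot-Carathéodory distance; minor conventions vary between authors):

```latex
% Carnot-Carathéodory control distance for the family X = {X_1, ..., X_m}:
d_X(x,y) = \inf\Big\{\, T > 0 \;:\; \exists\,\gamma : [0,T] \to \mathbb{R}^N,\;
\gamma(0) = x,\ \gamma(T) = y \,\Big\},
% where gamma is a piecewise X-trajectory, i.e.
\dot{\gamma}(t) = \sum_{j=1}^{m} a_j(t)\, X_j(\gamma(t)),
\qquad \sum_{j=1}^{m} a_j(t)^2 \le 1 .
```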

Relevance:

30.00%

Publisher:

Abstract:

In the most recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by different reasons: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard because they are a generalization of MILP problems, which are NP-hard themselves. However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e. those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different possible approaches that one can study to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs and the most common methods used to solve them. These techniques play a fundamental role in the resolution of general MINLP problems. Then we describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, in Part 2 four different methods aimed at solving different classes of MINLP problems are presented. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design problems. The results show that each of these different methods has advantages and disadvantages. Thus, typically the method to be adopted to solve a real-world problem should be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software). We present the main characteristics of solvers for each special case of MINLP.
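For reference, the general problem class has the following standard form (notation assumed here, not quoted from the thesis):

```latex
% General MINLP; the problem is non-convex when f or some g_i is non-convex,
% so that the continuous relaxation (dropping integrality on x) is non-convex too.
\min_{x,y}\ f(x,y)
\quad \text{s.t.} \quad g_i(x,y) \le 0, \quad i = 1,\dots,p,
\qquad x \in \mathbb{Z}^n,\ y \in \mathbb{R}^m .
```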

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, the stability analysis of flows in fluid-saturated porous media is investigated. In particular, the contribution of viscous heating to the onset of convective instability in the flow through ducts is analysed. In order to evaluate the contribution of viscous dissipation, different geometries, different models describing the balance equations and different boundary conditions are used. Moreover, the local thermal non-equilibrium model is used to study the evolution of the temperature differences between the fluid and the solid matrix in a thermal boundary layer problem. In studying the onset of instability, different techniques for eigenvalue problems have been used: analytical solutions, asymptotic analyses and numerical solutions by means of original and commercial codes.
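As background for the local thermal non-equilibrium model mentioned above, a commonly used two-temperature formulation is sketched below (a standard form, with Darcy-type viscous dissipation deposited in the fluid phase as an assumption; the thesis' exact equations and boundary conditions may differ):

```latex
% Two-field (LTNE) energy balances for a fluid-saturated porous medium;
% eps = porosity, h = inter-phase heat transfer coefficient,
% (mu/K)|u|^2 = viscous dissipation of the Darcy seepage flow.
\varepsilon (\rho c)_f\, \partial_t T_f + (\rho c)_f\, \mathbf{u}\cdot\nabla T_f
= \varepsilon k_f \nabla^2 T_f + h\,(T_s - T_f) + \frac{\mu}{K}\,|\mathbf{u}|^2 ,
\qquad
(1-\varepsilon)(\rho c)_s\, \partial_t T_s
= (1-\varepsilon) k_s \nabla^2 T_s - h\,(T_s - T_f) .
% Local thermal equilibrium is recovered in the limit h -> infinity (T_s -> T_f).
```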

Relevance:

30.00%

Publisher:

Abstract:

The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities with repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses the fact that a large part of the literature explains households' debt holdings as a function, among others, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to a pooling of SHIW cross-sections. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which generate inconsistent tobit estimators, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the expected patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems to be dealt with to obtain consistent estimators. Specific attention is given to the initial-conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of using dynamic panel data models lies in the fact that they make it possible to simultaneously account for true state dependence, via the lagged variable, and unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed by the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulties are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random-effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model.
Results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, less sophisticated models, namely the pooled and exogenous models, overestimate such persistence.
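For concreteness, the dynamic random-effects probit commonly used for this kind of persistence analysis, together with Wooldridge's treatment of the initial-conditions problem, can be written as follows (standard notation, assumed rather than quoted from the thesis):

```latex
% Latent-variable form: true state dependence corresponds to gamma != 0.
y_{it} = \mathbf{1}\!\left[\, \gamma\, y_{i,t-1} + x_{it}'\beta + \alpha_i + u_{it} > 0 \,\right],
\qquad u_{it} \sim N(0,1),
% Wooldridge's device models the individual effect conditionally on the
% initial observation y_{i0} and the time averages of the covariates:
\alpha_i = a_0 + a_1\, y_{i0} + \bar{x}_i' a_2 + \eta_i,
\qquad \eta_i \sim N(0, \sigma_\eta^2) .
```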

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to explain the paradigm of American foreign policy during the Johnson Administration, especially toward Europe, within the NATO framework, and toward the USSR, in the context of the détente that emerged during the sixties. During that period, after the death of J. F. Kennedy, President L. B. Johnson inherited a complex and ambitious world policy, which aimed to open a new phase in transatlantic relations and to share the burden of the Cold War with a refractory Europe. Known as the grand design, it was a policy that needed the support of the allies and a clear purpose that appealed to the Europeans. At first, President Johnson saw in the problem of nuclear sharing the bargain to strike with the NATO allies. At the same time, he understood that the United States needed to reassert its leadership within the new stage of relations with the Soviet Union. Soon, the "transatlantic bargain" became something not so easy to deal with. Federal Germany wanted a say in nuclear affairs and, why not, a finger on the trigger of the Atlantic nuclear weapons. The USSR, on the other hand, wanted to keep Germany down. The other allies did not want to share the onus of the defense of Europe, but wanted, at most, responsibility for the use of the weapons and, at least, to participate in the decision-making process. France, which wanted to detach herself from the policy of the United States and regain a world role, added difficulties to the management of this course of action. Throughout Johnson's years in office, the divergent policies proposed by his advisers to attain the goal put American foreign policy in deep water. The withdrawal of France from the organization, but not from the Alliance, gave Washington a chance to carry out its goal. The development of a clear-cut disarmament policy led the Johnson administration to the core of the matter. The Non-Proliferation Treaty, signed in 1968, solved in a business-like fashion the problem with the allies. The question of nuclear sharing faded away with the allies' acceptance of a deeper consultative role in nuclear affairs, the burden of the defense of Europe became more bearable through the offset agreement with the FRG, and a new doctrine, flexible response, put an end, at least formally, to the taboo of the nuclear age. Johnson's grand design proved to be different from Kennedy's, but, all things considered, it was more workable. The unpredictable result was a real détente with the Soviet Union, which, we can say, was a merit of President Johnson.

Relevance:

30.00%

Publisher:

Abstract:

The subject of this work is the diffusion of turbulence into a non-turbulent flow. Such a phenomenon can be found in almost every practical case of turbulent flow: all types of shear flows (wakes, jets, boundary layers) present some boundary between the turbulence and the non-turbulent surroundings; all transitions from a laminar flow to turbulence must account for turbulent diffusion; and the mixing of flows often involves the injection of a turbulent solution into a non-turbulent fluid. The mechanism of what Phillips defined as "the erosion by turbulence of the underlying non-turbulent flow" is called entrainment. It is usually considered to operate on two scales with different mechanisms: small-scale nibbling, which is the entrainment of fluid by viscous diffusion of turbulence, and large-scale engulfment, which entraps large volumes of flow to be "digested" subsequently by viscous diffusion. The exact role of each of them in the overall entrainment rate is still not well understood, as is the interplay between these two mechanisms of diffusion. It is nevertheless accepted that the entrainment rate scales with the large-scale properties of the flow, while it is not understood how the large-scale inertial behavior can affect an intrinsically viscous phenomenon such as the diffusion of vorticity. In the present work we address the problem of turbulent diffusion through pseudo-spectral DNS simulations of the interface between a volume of decaying turbulence and a quiescent flow. Such simulations give us first-hand measurements of the velocity, vorticity and strain fields at the interface; moreover, the framework of unforced decaying turbulence permits the study of both the spatial and the temporal evolution of such fields. The analysis evidences that for this kind of flow the overall production of enstrophy, i.e. the square of the vorticity ω², is dominated near the interface by the local inertial transport of "fresh vorticity" coming from the turbulent flow. Viscous diffusion instead plays a major role in enstrophy production on the outer side of the interface, where the nibbling process is dominant. The data from our simulations seem to confirm the theory of an inertially stirred viscous phenomenon proposed by other authors, and provide new data about the inertial diffusion of turbulence across the interface.
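The enstrophy balance analysed at the interface has the standard form below (notation assumed: s_ij is the strain-rate tensor, ν the kinematic viscosity; it is written here for ω_iω_i/2, differing from the abstract's ω² only by a factor of 2):

```latex
% Enstrophy transport: inertial transport enters through the material
% derivative D/Dt = partial_t + u . grad, which carries "fresh vorticity"
% across the interface; the last two terms are the viscous contributions.
\frac{D}{Dt}\!\left(\frac{\omega_i \omega_i}{2}\right)
= \underbrace{\omega_i\, s_{ij}\, \omega_j}_{\text{vortex stretching}}
+ \underbrace{\nu\, \nabla^2\!\left(\frac{\omega_i \omega_i}{2}\right)}_{\text{viscous diffusion}}
- \underbrace{\nu\, \frac{\partial \omega_i}{\partial x_j}
\frac{\partial \omega_i}{\partial x_j}}_{\text{viscous dissipation}} .
```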

Relevance:

30.00%

Publisher:

Abstract:

We consider the heat flux through a domain with subregions in which the thermal capacity approaches zero. In these subregions the parabolic heat equation degenerates to an elliptic one. We show the well-posedness of such parabolic-elliptic differential equations for general non-negative L^∞ capacities and study the continuity of the solutions with respect to the capacity, thus giving a rigorous justification for modeling a small thermal capacity by setting it to zero. We also characterize weak directional derivatives of the temperature with respect to the capacity as solutions of related parabolic-elliptic problems.
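Schematically, the problem class can be written as follows (coefficients and boundary data suppressed; a sketch consistent with the abstract rather than the paper's exact formulation):

```latex
% Heat equation with a non-negative L^infty capacity c; the equation is
% parabolic where c > 0 and degenerates to an elliptic one where c = 0.
c(x)\, \partial_t u - \nabla\cdot\big(k(x)\, \nabla u\big) = f
\qquad \text{in } \Omega \times (0,T) .
```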

Relevance:

30.00%

Publisher:

Abstract:

The problem of delivering products from a depot/plant to customers by means of a fleet of vehicles is a central problem in the management of a production and distribution chain (supply chain). This problem, known in the literature as the Vehicle Routing Problem (VRP), in its simplest version consists of designing, for each vehicle available at a given company depot, a delivery route serving the customers that request those products, in such a way that (i) the sum of the quantities requested by the customers assigned to each vehicle does not exceed the vehicle capacity, (ii) each customer is served once and only once, and (iii) the total cost of the routes travelled by the vehicles is minimized. The VRP cuts across a multitude of industries in which products and/or services are distributed by road vehicles, such as: distribution of foodstuffs, distribution of petroleum products, mail collection and delivery, school-bus service planning, plant maintenance scheduling, waste collection, etc. This thesis considers the Multi-Trip VRP (MTVRP), in which each vehicle can perform a subset of routes, called a vehicle schedule, subject to maximum duration constraints. Despite its practical importance, the MTVRP has received little attention in the literature: several heuristic methods have been proposed, and a single exact algorithm, presented by Mingozzi, Roberti and Toth. This thesis presents a heuristic method able to solve MTVRP instances in the presence of real-world constraints, such as a non-homogeneous vehicle fleet and time windows. The heuristic is based on Prins's model. Two local search approaches to improve the final solution are also presented. Computational results show the effectiveness of these approaches.
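As a hedged illustration of the kind of building block used in Prins-style heuristics (the thesis' actual method additionally handles multiple trips, a heterogeneous fleet and time windows): the classic split procedure optimally cuts a "giant tour" of customers into capacity-feasible routes via a shortest path on an auxiliary graph. The instance data below is made up for illustration.

```python
def split(tour, demand, dist, capacity, depot=0):
    """Bellman shortest path on an auxiliary DAG: node i = 'first i customers
    of the giant tour already routed'; arc (i, j) = one capacity-feasible
    route serving customers tour[i..j-1]."""
    n = len(tour)
    best = [float("inf")] * (n + 1)   # best[i] = min cost to serve first i
    pred = [0] * (n + 1)
    best[0] = 0.0
    for i in range(n):
        load, cost = 0.0, 0.0
        for j in range(i, n):
            load += demand[tour[j]]
            if load > capacity:            # route tour[i..j] infeasible
                break
            if j == i:                     # depot -> tour[i] -> depot
                cost = 2 * dist[depot][tour[i]]
            else:                          # extend the route to tour[j]
                cost += (dist[tour[j - 1]][tour[j]]
                         + dist[tour[j]][depot] - dist[tour[j - 1]][depot])
            if best[i] + cost < best[j + 1]:
                best[j + 1] = best[i] + cost
                pred[j + 1] = i
    routes, j = [], n                      # backtrack the optimal cuts
    while j > 0:
        i = pred[j]
        routes.append([depot] + tour[i:j] + [depot])
        j = i
    return best[n], list(reversed(routes))

# Tiny symmetric instance (illustrative data, not from the thesis).
dist = [[0, 4, 5, 7, 6],
        [4, 0, 2, 6, 8],
        [5, 2, 0, 3, 7],
        [7, 6, 3, 0, 4],
        [6, 8, 7, 4, 0]]
demand = [0, 3, 4, 2, 5]                   # index 0 is the depot
cost, routes = split(tour=[1, 2, 3, 4], demand=demand, dist=dist, capacity=8)
print(cost, routes)                        # -> 28.0 [[0, 1, 2, 0], [0, 3, 4, 0]]
```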

Relevance:

30.00%

Publisher:

Abstract:

A Thermodynamic Bethe Ansatz analysis is carried out for the extended-CP^N class of integrable 2-dimensional Non-Linear Sigma Models related to the low-energy limit of the AdS_4 × CP^3 type IIA superstring theory. The principal aim of this program is to obtain further non-perturbative consistency checks of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct we eventually obtain a novel class of TBA models which fits into the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. The knowledge of these physical quantities has led to the possibility of conjecturing a perturbed-CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
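For reference, TBA systems of this kind take the generic form below (standard conventions; the extended-CP^N models have their own mass spectrum m_a and kernels φ_ab, which this sketch does not reproduce):

```latex
% Pseudo-energies epsilon_a solve the coupled integral equations
\epsilon_a(\theta) = m_a R \cosh\theta
- \sum_b \int \frac{d\theta'}{2\pi}\; \phi_{ab}(\theta - \theta')\,
\log\!\left(1 + e^{-\epsilon_b(\theta')}\right),
% and the effective central charge is read off from the UV (R -> 0) limit:
c_{\mathrm{eff}} = \lim_{R \to 0}\, \frac{3}{\pi^2} \sum_a \int d\theta\;
m_a R \cosh\theta\, \log\!\left(1 + e^{-\epsilon_a(\theta)}\right) .
```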

Relevance:

30.00%

Publisher:

Abstract:

Chapter 1 presents a brief introduction to the state of the art of nanotechnologies and nanofabrication techniques, and to unconventional lithography as a technique for fabricating a novel electronic device, the resistive switch or so-called memristor. Chapter 2 reports a detailed description of the main fabrication and characterization techniques employed in this work. Chapter 3 describes parallel local oxidation lithography (pLOx) as the main technique for obtaining an accurate patterning process. All the relevant parameters have been studied, and the optimized conditions are observed to be highly reproducible, yielding excellent patterned nanostructures. The effect of negative bias, called local reduction (LR), is also studied. Moreover, AC bias yields a faster patterning process than DC bias. Chapter 4 (metal/e-SiO2/Si nanojunction) shows how the electrochemical oxide nanostructures obtained by pLOx can be used in the fabrication of novel devices called memristors. We demonstrate a new concept, based on conventional materials, where the lifetime problem is resolved by introducing a "regeneration" step, which restores the nano-memristor to its pristine condition by applying an appropriate voltage cycle. In Chapter 5 (graphene/e-SiO2/Si), graphene, as a building-block material, is used as an electrode to selectively oxidize the silicon substrate with the pLOx setup, for the fabrication of a novel resistive switch device. Chapter 6 (surface architecture) shows another application of pLOx, in biotechnology: surface functionalization combined with pLOx nano-patterning is used to design new surfaces that bind biomolecules accurately, opening the possibility of studying their properties and of further applications in nano-bio device fabrication. In particular, nano-patterned DNA is used as a scaffold to fabricate small functional nano-components for biochips and for electronic and optical/photonic devices.

Relevance:

30.00%

Publisher:

Abstract:

Environmental decay in porous masonry materials, such as brick and mortar, is a widespread problem concerning both new and historic masonry structures. The decay mechanisms are quite complex, depending upon several interconnected parameters and upon the interaction with the specific micro-climate. Materials undergo aesthetic and substantial changes in character; but while many studies have been carried out, the mechanical aspect has been largely understudied, although it bears true importance from the structural viewpoint. A quantitative assessment of masonry material degradation, and of how it affects the load-bearing capacity of masonry structures, appears to be missing. The research work carried out, limiting attention to brick masonry, addresses this issue through an experimental laboratory approach via different integrated testing procedures, both non-destructive and mechanical, together with monitoring methods. Attention was focused on the transport of moisture and salts and on the damaging effects caused by the crystallization of two different salts, sodium chloride and sodium sulphate. Many series of masonry specimens, very different in size and purpose, were used to track the damage process from its beginning and to monitor its evolution over a number of years. At the same time, suitable testing techniques, non-destructive, mini-invasive, analytical and monitoring-based, were validated for these purposes. The specimens were exposed to different aggressive agents (in terms of type of salt, brine concentration, artificial vs. open-air natural ageing, …), were tested by different means (qualitative vs. quantitative, non-destructive vs. mechanical testing, single points vs. wide areas, …), and had different sizes (1-, 2-, 3-header-thick walls, full-scale walls vs. small-size specimens, brick columns and triplets vs. small walls, masonry specimens vs. single units of brick and mortar prisms, …). Different advanced testing methods and novel monitoring techniques were applied in an integrated holistic approach, for a quantitative assessment of the masonry health state.