895 results for Non-commutative Landau problem


Relevance: 30.00%

Abstract:

We study local rigidity and multiplicity of constant scalar curvature metrics in arbitrary products of compact manifolds. Using (equivariant) bifurcation theory we determine the existence of infinitely many metrics that are accumulation points of pairwise non-homothetic solutions of the Yamabe problem. Using local rigidity and some compactness results for solutions of the Yamabe problem, we also exhibit new examples of conformal classes (with positive Yamabe constant) for which uniqueness holds. (C) 2011 Elsevier Masson SAS. All rights reserved.
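For context, a constant scalar curvature metric in a conformal class is a solution of the Yamabe equation; a minimal statement, in generic notation not taken from the paper, for a compact manifold of dimension n ≥ 3 is:

```latex
% If \tilde{g} = u^{4/(n-2)} g with u > 0, then \tilde{g} has constant scalar
% curvature \lambda exactly when u solves (\Delta_g = Laplace-Beltrami operator)
-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\,u \;=\; \lambda\,u^{\frac{n+2}{n-2}} .
```

Multiplicity of positive solutions of this equation within a fixed conformal class is what the bifurcation argument above produces.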

Relevance: 30.00%

Abstract:

We describe (braided-) commutative algebras with non-degenerate multiplicative form in certain braided monoidal categories, corresponding to abelian metric Lie algebras (so-called Drinfeld categories). We also describe local modules over these algebras and classify commutative algebras with a finite number of simple local modules.
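For orientation (a standard definition, not quoted from the paper): a metric Lie algebra is a Lie algebra g equipped with a non-degenerate symmetric bilinear form ⟨·,·⟩ that is invariant,

```latex
\langle [x,y],\, z \rangle \;=\; \langle x,\, [y,z] \rangle \qquad \text{for all } x, y, z \in \mathfrak{g},
```

and in the abelian case the bracket vanishes, so the datum reduces to a vector space with a non-degenerate symmetric form.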

Relevance: 30.00%

Abstract:

Animal cruelty is defined as a deliberate action that causes pain and suffering to an animal. In Brazil, legislation known as the Environmental Crimes Law states that cruelty toward all animal species is criminal in nature. Of 644 domestic cats necropsied between January 1998 and December 2009, 191 (29.66%) presented lesions highly suggestive of animal cruelty. The main necroscopic finding was exogenous carbamate poisoning (75.39%), followed by blunt-force trauma (21.99%). Cats from 7 months to 2 years of age were the most affected (50.79%). In Brazil, violence is a public health problem and there is a high prevalence of domestic violence. Therefore, even though laws provide for animal welfare and protection, animals are common targets of violent acts. Within a context of social violence, cruelty toward animals is an important parameter to be considered, and the non-accidental lesions that were found are evidence of malicious actions.

Relevance: 30.00%

Abstract:

The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
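As an illustration of the objective under consideration, the sketch below evaluates the weighted earliness/tardiness of a given job sequence with a common due date and non-identical ready times, assuming no inserted idle time; the data layout and function name are invented for this example and are not taken from the paper.

```python
from typing import List, Tuple

def weighted_earliness_tardiness(jobs: List[Tuple[float, float, float, float]],
                                 due_date: float) -> float:
    """Objective value of a single-machine sequence.

    Each job is (processing_time, ready_time, earliness_weight, tardiness_weight),
    listed in the order in which it is processed; no idle time is inserted.
    """
    t = 0.0          # current time on the machine
    total = 0.0
    for p, r, alpha, beta in jobs:
        t = max(t, r) + p                  # a job cannot start before its ready time
        total += alpha * max(0.0, due_date - t) + beta * max(0.0, t - due_date)
    return total

# Example: three jobs with a common due date of 10 time units.
sequence = [(3.0, 0.0, 1.0, 2.0), (4.0, 2.0, 1.0, 3.0), (2.0, 5.0, 2.0, 1.0)]
print(weighted_earliness_tardiness(sequence, due_date=10.0))
```

Constructive heuristics of the kind investigated above repeatedly evaluate an objective of this form while building or adjusting the sequence.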

Relevance: 30.00%

Abstract:

In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Lambda = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

Relevance: 30.00%

Abstract:

The low efficiency of gene transfer is a recurrent problem in DNA vaccine development and in gene therapy studies using non-viral vectors such as plasmid DNA (pDNA). This is mainly due to the fact that, during their traffic to the target cells' nuclei, plasmid vectors must overcome a series of physical, enzymatic and diffusional barriers. The main objective of this work is the development of recombinant proteins specifically designed for pDNA delivery, which take advantage of molecular motors like dynein for the transport of cargos from the periphery to the centrosome of mammalian cells. A DNA binding sequence was fused to the N-terminus of the recombinant human dynein light chain LC8. Expression studies indicated that the fusion protein was correctly expressed in soluble form using the E. coli BL21(DE3) strain. As expected, gel permeation assays found the purified protein mainly present as dimers, the functional oligomeric state of LC8. Gel retardation assays and atomic force microscopy proved the ability of the fusion protein to interact with and condense pDNA. Zeta potential measurements indicated that LC8 with the DNA binding domain (LD4) has an enhanced capacity to interact with and condense pDNA, generating positively charged complexes. Transfection of cultured HeLa cells confirmed the ability of LD4 to facilitate pDNA uptake and indicated the involvement of retrograde transport in the intracellular trafficking of pDNA:LD4 complexes. Finally, cytotoxicity studies demonstrated a very low toxicity of the fusion protein vector, indicating its potential for in vivo applications. The study presented here is part of an effort to develop new modular shuttle proteins able to take advantage of strategies used by viruses to infect mammalian cells, aiming to provide new tools for gene therapy and DNA vaccination studies. (C) 2012 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

This paper proposes an improved NSGA-II (Non-Dominated Sorting Genetic Algorithm, version II) that incorporates a parameter-free self-tuning approach based on a reinforcement learning technique, called Non-Dominated Sorting Genetic Algorithm Based on Reinforcement Learning (NSGA-RL). The proposed method is compared in particular with the classical NSGA-II when applied to a satellite coverage problem. Furthermore, not only are the optimization results compared with results obtained by other multiobjective optimization methods, but the proposed approach also avoids time-consuming and complex parameter tuning.
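The authors' NSGA-RL is not reproduced here, but the underlying idea of letting a reinforcement-learning update choose the algorithm's parameters online can be sketched as follows; the toy problem, the epsilon-greedy rule, the reward definition and all names are illustrative assumptions, not the method of the paper.

```python
import random

# Candidate mutation rates the learner can choose among (illustrative values).
ARMS = [0.01, 0.05, 0.1, 0.3]

def fitness(x):
    """Toy single-objective fitness to minimize (sphere function)."""
    return sum(v * v for v in x)

def mutate(x, rate):
    return [v + random.gauss(0.0, 1.0) if random.random() < rate else v for v in x]

def self_tuning_ga(dim=10, pop_size=30, generations=200, eps=0.1, alpha=0.2):
    pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    q = [0.0] * len(ARMS)                 # estimated reward of each mutation rate
    best = min(fitness(ind) for ind in pop)
    for _ in range(generations):
        # Epsilon-greedy choice of the mutation rate: the "RL" part of the sketch.
        if random.random() < eps:
            arm = random.randrange(len(ARMS))
        else:
            arm = max(range(len(ARMS)), key=lambda i: q[i])
        rate = ARMS[arm]
        # One generation of a plain GA: binary tournament selection + mutation.
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) < fitness(b) else b
            new_pop.append(mutate(parent, rate))
        pop = new_pop
        # Reward = improvement of the best fitness obtained with this setting.
        new_best = min(fitness(ind) for ind in pop)
        reward = max(0.0, best - new_best)
        q[arm] += alpha * (reward - q[arm])   # incremental value update
        best = min(best, new_best)
    return best, q

if __name__ == "__main__":
    best, q = self_tuning_ga()
    print("best fitness:", best)
    print("learned values per mutation rate:", dict(zip(ARMS, q)))
```

In the actual NSGA-RL the tuned parameters concern a multiobjective NSGA-II run and the learning signal is defined by the authors; this sketch only shows the shape of a parameter-free self-tuning loop.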

Relevance: 30.00%

Abstract:

Background: Low back pain is a relevant public health problem, being an important cause of work absenteeism worldwide, as well as affecting the quality of life of sufferers and their individual functional performance. Supervised active physical routines and cognitive-behavioral therapies are recommended for the treatment of chronic low back pain, although evidence to support the effectiveness of different techniques is missing. Accordingly, the aim of this study is to contrast the effectiveness of two types of exercise, graded activity and supervised exercise, in decreasing the symptoms of chronic low back pain. Methods/design: The sample will consist of 66 patients, blindly allocated into one of two groups: 1) graded activity, which, based on an operant approach, will use time-contingent methods aiming to increase participants' activity levels; 2) supervised exercise, where participants will be trained in strengthening, stretching, and motor control exercises targeting different muscle groups. Interventions will last one hour and will take place twice a week for 6 weeks. Outcomes (pain, disability, quality of life, global perceived effect, return to work, physical activity, physical capacity, and kinesiophobia) will be assessed at baseline, at the end of treatment, and three and six months after the end of treatment. Data collection will be conducted by an investigator blinded to treatment allocation. Discussion: This project describes the randomisation method that will be used to compare the effectiveness of two different treatments for chronic low back pain: graded activity and supervised exercises. Since the optimal approach for patients with chronic low back pain has not yet been defined based on evidence, good-quality studies on the subject are necessary. Trial registration: NCT01719276

Relevance: 30.00%

Abstract:

Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc. profiles are designed in such a way that the different mechanical parts work as a harmonious whole in which a perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network for connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since the transmission of motion is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of the temporal properties, and subsequently in the synchronization of different profiles, in networks adopting an event-triggered communication system are examined. These networks are characterized by the fact that a common knowledge of the global time is not available; therefore they are non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops adopted for the basic master-slave case is improved to cope with the other configurations.
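The thesis' phase-locked-loop solution is not reproduced here, but the basic mechanism, locking a locally integrated profile onto sporadic, event-triggered master updates, can be sketched as follows; the gains, the update model and all names are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class SoftwarePLL:
    """Minimal PI-type phase-locked loop tracking a master time/position profile
    from irregular (event-triggered) updates."""
    kp: float = 0.7      # proportional gain acting on the phase error
    ki: float = 0.3      # integral gain adjusting the estimated rate
    phase: float = 0.0   # local estimate of the master profile
    rate: float = 1.0    # estimated master rate w.r.t. the local clock

    def advance(self, dt: float) -> float:
        """Free-run between events using the current rate estimate."""
        self.phase += self.rate * dt
        return self.phase

    def update(self, master_value: float) -> float:
        """Correct the estimate when an event carrying the master value arrives."""
        error = master_value - self.phase
        self.rate += self.ki * error      # slow correction of the rate
        self.phase += self.kp * error     # fast correction of the phase
        return error

# Example: the master runs 0.2% faster than the slave and publishes every ~8 ticks.
pll = SoftwarePLL()
master = 0.0
for step in range(1, 101):
    master += 1.002          # true master profile, unknown to the slave
    pll.advance(1.0)         # slave integrates its local estimate every tick
    if step % 8 == 0:        # event-triggered update from the network
        pll.update(master)
print("final tracking error:", abs(master - pll.phase))
```

The point of the loop is that between events the slave reproduces the profile from its own estimate, so the quality of the rate estimate, and not just of the received samples, determines the synchronization.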

Relevance: 30.00%

Abstract:

We study some perturbative and nonperturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value given by the dynamics of the Standard Model and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. Then we solve numerically some cases relevant to Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vacuum expectation value (vev). As perturbative effects, we consider the leading logarithmic resummation in small Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration space dynamics, which are directly solvable with group theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit case. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati–Salam model which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
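As background for the non-adiabatic production mentioned above, the standard adiabaticity criterion for a field mode with time-dependent frequency ω_k(t) reads (generic notation, not the thesis' formulas):

```latex
\left|\frac{\dot{\omega}_k(t)}{\omega_k^2(t)}\right| \;\ll\; 1
\qquad \text{(adiabatic evolution: negligible particle production)} ,
```

and particle production becomes significant when this condition is violated, for instance when an oscillating vev v(t) drives the effective mass of a coupled field through rapid changes.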

Relevance: 30.00%

Abstract:

This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {Xj}j=1,...,m is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with summability of the lower order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields satisfy left invariance. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition of regularity for boundary points for the Dirichlet problem on an open subset of R^N related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator and introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
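The control distance described in words above admits the usual explicit form; restated here for convenience (a standard definition), for a family X = {X_1, ..., X_m} of locally Lipschitz vector fields on R^N:

```latex
d_X(x,y) \;=\; \inf\Bigl\{\, T>0 \;:\; \exists\,\gamma:[0,T]\to\mathbb{R}^N \text{ absolutely continuous},\ \gamma(0)=x,\ \gamma(T)=y, \\
\qquad\qquad \dot{\gamma}(t)=\sum_{j=1}^{m} a_j(t)\,X_j(\gamma(t)) \text{ a.e., with } \sum_{j=1}^{m} a_j(t)^2 \le 1 \,\Bigr\}.
```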

Relevance: 30.00%

Abstract:

In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by different reasons: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard because they are a generalization of MILP problems, which are NP-hard themselves. However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e. those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different possible approaches that one can use to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs and the most common methods used to solve them. These techniques play a fundamental role in the resolution of general MINLP problems. Then we describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, in Part 2 four different methods aimed at solving different classes of MINLP problems are presented. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely the Scheduling and Unit Commitment for Hydro-Plants problem and the Water Network Design problem. The results show that each of these different methods has advantages and disadvantages; thus, the method adopted to solve a real-world problem should typically be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of the tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software). We present the main characteristics of solvers for each special case of MINLP.
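For reference, the class of problems discussed can be written in the usual general form (standard notation, not taken from the thesis):

```latex
\min_{x,\,y}\; f(x,y) \quad \text{s.t.}\;\; g_i(x,y) \le 0,\ \ i=1,\dots,m, \qquad x \in \mathbb{R}^{n},\ \ y \in \mathbb{Z}^{p},
```

where f and the g_i may be non-convex; in that case even the continuous relaxation obtained by dropping the integrality of y is a non-convex NLP, which is what makes the non-convex MINLPs addressed in the thesis harder than the convex ones.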

Relevance: 30.00%

Abstract:

In this thesis, the stability analysis of flows in fluid-saturated porous media is investigated. In particular, the contribution of viscous heating to the onset of convective instability in flows through ducts is analysed. In order to evaluate the contribution of the viscous dissipation, different geometries, different models of the balance equations and different boundary conditions are used. Moreover, the local thermal non-equilibrium model is used to study the evolution of the temperature difference between the fluid and the solid matrix in a thermal boundary layer problem. In studying the onset of instability, different techniques for eigenvalue problems have been used: analytical solutions, asymptotic analyses, and numerical solutions by means of original and commercial codes.
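The local thermal non-equilibrium (two-temperature) model mentioned above is commonly written in a form like the following; this is a schematic version with standard notation, and the thesis' exact balance equations, boundary conditions and dissipation term may differ:

```latex
\varepsilon\,(\rho c)_f\,\frac{\partial T_f}{\partial t} + (\rho c)_f\,\mathbf{u}\cdot\nabla T_f
  \;=\; \varepsilon\,k_f\,\nabla^2 T_f + h\,(T_s - T_f) + q_{\mathrm{visc}},
\qquad
(1-\varepsilon)\,(\rho c)_s\,\frac{\partial T_s}{\partial t}
  \;=\; (1-\varepsilon)\,k_s\,\nabla^2 T_s + h\,(T_f - T_s),
```

where T_f and T_s are the fluid and solid temperatures, ε the porosity, u the seepage velocity, h the interphase heat transfer coefficient, and q_visc the viscous dissipation contribution whose destabilizing role is being assessed.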

Relevance: 30.00%

Abstract:

The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities with repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions, and on households' net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function, among other things, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to a pooling of cross-sections of the SHIW. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which generate inconsistent tobit estimators, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems to be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of using dynamic panel data models lies in the fact that they allow one to account simultaneously for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulties are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. Results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, less sophisticated models, namely the pooled and exogenous models, over-estimate such persistence.
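The dynamic specification behind Chapters 3 and 4 can be summarized in the usual form (generic notation; the covariates actually used in the thesis are not listed here):

```latex
\Pr\bigl(y_{it}=1 \mid y_{i,t-1},\, x_{it},\, c_i\bigr) \;=\; \Phi\bigl(\gamma\, y_{i,t-1} + x_{it}'\beta + c_i\bigr),
```

where y_{it} indicates financial distress, γ ≠ 0 signals true state dependence and c_i is the individual effect; in Wooldridge's approach the initial conditions problem is handled by modelling c_i conditionally on the initial outcome y_{i0} and on the time averages of the covariates, c_i = α_0 + α_1 y_{i0} + x̄_i'α_2 + a_i.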

Relevance: 30.00%

Abstract:

The aim of this proposal is to explain the paradigm of American foreign policy during the Johnson Administration, especially toward Europe, within the NATO framework, and toward the USSR, in the context of the détente that emerged during the sixties. During that period, after the passing of J. F. Kennedy, President L. B. Johnson inherited a complex and very ambitious world policy, which sought to get a new phase of transatlantic relations off the ground and to share the burden of the Cold War with a refractory Europe. Known as the grand design, it was a policy that needed the support of the allies and a clear purpose that appealed to the Europeans. At first, President Johnson saw in the problem of nuclear sharing the right bargain to strike with the NATO allies. At the same time, he understood that the United States needed to reassert its leadership within the new stage of relations with the Soviet Union. Soon, the "transatlantic bargain" proved not so easy to deal with. Federal Germany wanted a say in nuclear affairs and, why not, a finger on the trigger of the Atlantic nuclear weapons. The USSR, on the other hand, wanted to keep Germany down. The other allies did not want to share the onus of the defense of Europe; at most they wanted the responsibility for the use of the weapons and, at least, participation in the decision-making process. France, which wanted to detach herself from the policy of the United States and regain a world role, added difficulties to the management of this course of action. Through the years of Johnson's office, the divergences among the policies proposed by his advisers to reach this goal put American foreign policy in deep water. The withdrawal of France from the organization, but not from the Alliance, gave Washington a chance to carry out its goal. The development of a clear-cut disarmament policy led the Johnson administration to the core of the matter. The Non-Proliferation Treaty, signed in 1968, solved in a business-like fashion the problem with the allies. The question of nuclear sharing faded away with the allies' acceptance of a deeper consultative role in nuclear affairs, the burden of the defense of Europe became more bearable through the offset agreement with the FRG, and a new doctrine, flexible response, put an end, at least formally, to the taboo of the nuclear age. Johnson's grand design proved to be different from Kennedy's, but, all things considered, it was more workable. The unpredictable result was a real détente with the Soviet Union, which, we can say, was a merit of President Johnson.