973 results for One-point Quadrature
Abstract:
There are several mechanical models that describe DNA phenomenology. In this work, DNA denaturation is studied from thermodynamic and dynamical points of view using the well-known Peyrard-Bishop model. The thermodynamic analysis using the transfer integral operator method is briefly reviewed. In particular, the lattice size is discussed and a conjecture about the minimum energy required for denaturation is proposed. Regarding the dynamical aspects of the model, the equations of motion for the system are integrated and the results determine the energy density at which denaturation occurs. The behavior of the lattice near the phase transition is analyzed, and the relation between the thermodynamic and dynamical results is discussed.
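A minimal sketch of the dynamical part described above: integrating the Peyrard-Bishop equations of motion (Morse on-site potential plus harmonic stacking) with velocity Verlet. The parameter values, lattice size, and initial condition are illustrative assumptions, not those used in the work.

```python
import numpy as np

# Peyrard-Bishop lattice: Morse on-site potential + harmonic stacking.
# Parameter values below are illustrative assumptions, not the paper's.
D, a, k, m = 0.04, 4.45, 0.06, 1.0   # Morse depth/width, coupling, mass

def forces(y):
    """-dH/dy for the PB Hamiltonian with periodic boundaries."""
    stacking = k * (np.roll(y, -1) - 2.0 * y + np.roll(y, 1))
    e = np.exp(-a * y)
    morse = 2.0 * a * D * e * (e - 1.0)
    return stacking + morse

def energy(y, v):
    """Total energy: kinetic + stacking + Morse on-site terms."""
    e = np.exp(-a * y)
    return (0.5 * m * np.sum(v**2)
            + 0.5 * k * np.sum((np.roll(y, -1) - y) ** 2)
            + D * np.sum((e - 1.0) ** 2))

def velocity_verlet(y, v, dt, steps):
    f = forces(y)
    for _ in range(steps):
        v += 0.5 * dt * f / m
        y += dt * v
        f = forces(y)
        v += 0.5 * dt * f / m
    return y, v

rng = np.random.default_rng(0)
y = 0.1 * rng.standard_normal(64)        # small random base-pair stretchings
v = np.zeros(64)
E0 = energy(y, v)
y, v = velocity_verlet(y, v, dt=0.01, steps=2000)
drift = abs(energy(y, v) - E0) / abs(E0)
print(drift)                             # relative energy drift stays small
```

Because velocity Verlet is symplectic, the energy drift stays bounded, which is what makes long constant-energy runs near the denaturation threshold feasible.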
Abstract:
This paper proposes a technique for solving the multiobjective environmental/economic dispatch problem using the weighted sum and ε-constraint strategies, which transform the problem into a set of single-objective problems. In the first strategy, the objective function is a weighted sum of the environmental and economic objective functions. The second strategy treats one of the objective functions, in this case the environmental function, as a problem constraint, bounded above by a constant. A specific predictor-corrector primal-dual interior point method using a modified log barrier is proposed for solving the set of single-objective problems generated by these strategies. The purpose of the modified barrier approach is to solve the problem with a relaxation of its original feasible region, enabling the method to be initialized with infeasible points. Tests involving the proposed solution technique indicate i) the efficiency of the proposed method with respect to initialization with infeasible points, and ii) its ability to find a set of efficient solutions for the multiobjective environmental/economic dispatch problem.
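The two scalarization strategies can be illustrated on a toy bi-objective problem. The stand-in objectives and the brute-force grid search below are assumptions for illustration only; the paper solves the resulting single-objective problems with the interior point method described above.

```python
import numpy as np

# Toy stand-ins for the economic (f1) and environmental (f2) objectives.
f1 = lambda x: x**2              # "economic" objective (assumption)
f2 = lambda x: (x - 2.0)**2      # "environmental" objective (assumption)
X = np.linspace(0.0, 2.0, 2001)  # discretized feasible set

def weighted_sum(w):
    """min w*f1 + (1-w)*f2 -- one single-objective problem per weight."""
    return X[np.argmin(w * f1(X) + (1 - w) * f2(X))]

def eps_constraint(eps):
    """min f1 subject to f2 <= eps -- environmental objective as constraint."""
    feasible = X[f2(X) <= eps]
    return feasible[np.argmin(f1(feasible))]

# Sweeping the weight (or the bound eps) traces out efficient solutions.
pareto_ws = [weighted_sum(w) for w in (0.1, 0.5, 0.9)]
pareto_ec = [eps_constraint(e) for e in (2.0, 1.0, 0.25)]
print(pareto_ws, pareto_ec)
```

Each weight (or each bound ε) yields one efficient point, so sweeping the parameter produces the set of efficient solutions mentioned in the abstract.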
Abstract:
We investigated the effects of high pressure on the point of no return or the minimum time required for a kicker to respond to the goalkeeper's dive in a simulated penalty kick task. The goalkeeper moved to one side with different times available for the participants to direct the ball to the opposite side in low-pressure (acoustically isolated laboratory) and high-pressure situations (with a participative audience). One group of participants showed a significant lengthening of the point of no return under high pressure. With less time available, performance was at chance level. Unexpectedly, in a second group of participants, high pressure caused a qualitative change in which for short times available participants were inclined to aim in the direction of the goalkeeper's move. The distinct effects of high pressure are discussed within attentional control theory to reflect a decreasing efficiency of the goal-driven attentional system, slowing down performance, and a decreasing effectiveness in inhibiting stimulus-driven behavior.
Abstract:
We investigate the effects of quenched disorder on first-order quantum phase transitions on the example of the N-color quantum Ashkin-Teller model. By means of a strong-disorder renormalization group, we demonstrate that quenched disorder rounds the first-order quantum phase transition to a continuous one for both weak and strong coupling between the colors. In the strong-coupling case, we find a distinct type of infinite-randomness critical point characterized by additional internal degrees of freedom. We investigate its critical properties in detail and find stronger thermodynamic singularities than in the random transverse field Ising chain. We also discuss the implications for higher spatial dimensions as well as unusual aspects of our renormalization-group scheme. DOI: 10.1103/PhysRevB.86.214204
Abstract:
Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis for simultaneously determining the CP and selecting the most likely group members and a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
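The great-circle idea can be sketched directly: each star's proper motion defines a great circle whose pole is the cross product of the star's position vector and its tangent (motion) vector, and the circles of co-moving stars intersect at the CP. The star positions and CP below are synthetic, not Hyades data.

```python
import numpy as np

def unit_radec(ra, dec):
    """Unit vector from right ascension / declination (radians)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def circle_pole(ra, dec, mu_ra, mu_dec):
    """Pole of the great circle traced by a star's proper motion
    (mu_ra is mu_alpha*cos(delta), mu_dec is mu_delta)."""
    r = unit_radec(ra, dec)
    e_ra = np.array([-np.sin(ra), np.cos(ra), 0.0])            # local east
    e_dec = np.array([-np.sin(dec) * np.cos(ra),
                      -np.sin(dec) * np.sin(ra), np.cos(dec)])  # local north
    t = mu_ra * e_ra + mu_dec * e_dec    # tangent vector along the motion
    p = np.cross(r, t)
    return p / np.linalg.norm(p)

# Two synthetic stars that both move toward a known (assumed) CP.
cp = unit_radec(np.radians(97.0), np.radians(6.0))
poles = []
for ra, dec in [(np.radians(60.0), np.radians(20.0)),
                (np.radians(70.0), np.radians(10.0))]:
    r = unit_radec(ra, dec)
    t = cp - np.dot(cp, r) * r           # tangent pointing toward the CP
    e_ra = np.array([-np.sin(ra), np.cos(ra), 0.0])
    e_dec = np.array([-np.sin(dec) * np.cos(ra),
                      -np.sin(dec) * np.sin(ra), np.cos(dec)])
    poles.append(circle_pole(ra, dec, np.dot(t, e_ra), np.dot(t, e_dec)))

# The CP is (up to sign) the intersection of the two great circles.
x = np.cross(poles[0], poles[1])
x /= np.linalg.norm(x)
if np.dot(x, cp) < 0:
    x = -x
print(np.degrees(np.arctan2(x[1], x[0])), np.degrees(np.arcsin(x[2])))
```

With noisy real proper motions the circles do not intersect in a single point, which is why the full method replaces this exact intersection with a maximum-likelihood fit plus minimization.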
Abstract:
Due to rapid and continuous deforestation, recent bird surveys in the Atlantic Forest are following rapid assessment programs to accumulate significant amounts of data during short periods of time. During this study, two surveying methods were used to evaluate which technique rapidly accumulated most species (> 90% of the estimated empirical value) at lowland Atlantic Forests in the state of São Paulo, southeastern Brazil. Birds were counted during the 2008-2010 breeding seasons using 10-minute point counts and 10-species lists. Overall, point counting detected as many species as lists (79 vs. 83, respectively), and 88 points (14.7 h) detected 90% of the estimated species richness. Forty-one lists were insufficient to detect 90% of all species. However, lists accumulated species faster in a shorter time period, probably due to the nature of the point count method in which species detected while moving between points are not considered. Rapid assessment programs in these forests will rapidly detect more species using 10-species lists. Both methods shared 63% of all forest species, but this may be due to spatial and temporal mismatch between samplings of each method.
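The species-accumulation comparison can be mimicked with a toy simulation; the community size, number of survey units, and detections per unit below are assumptions, not the São Paulo field data.

```python
import random

# Toy accumulation curves for two survey-unit types (e.g. point counts
# vs. 10-species lists). Detection is simulated as uniform sampling,
# which is a deliberate simplification.
random.seed(1)
community = [f"sp{i:02d}" for i in range(90)]   # assumed total richness

def survey(n_units, detections_per_unit):
    """Cumulative number of unique species after each survey unit."""
    seen, curve = set(), []
    for _ in range(n_units):
        seen.update(random.sample(community, detections_per_unit))
        curve.append(len(seen))
    return curve

point_counts = survey(88, 6)     # many units, few detections each
lists_10 = survey(41, 10)        # fewer units, exactly 10 species each

target = 0.9 * len(community)
units_needed = next((i + 1 for i, s in enumerate(point_counts)
                     if s >= target), None)
print(point_counts[-1], lists_10[-1], units_needed)
```

Comparing how quickly each curve reaches 90% of the assumed richness is the same criterion the abstract applies to the field data.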
Abstract:
[EN] As is well known, in any infinite-dimensional Banach space one may find fixed point free self-maps of the unit ball, retractions of the unit ball onto its boundary, contractions of the unit sphere, and nonzero maps without positive eigenvalues and normalized eigenvectors. In this paper, we give upper and lower estimates, or even explicit formulas, for the minimal Lipschitz constant and measure of noncompactness of such maps.
Abstract:
This Ph.D. thesis collects the research work I conducted under the supervision of Prof. Bruno Samorì in 2005, 2006, and 2007. Some parts of the work included in Part III were begun during my undergraduate thesis in the same laboratory and then completed during the initial part of my Ph.D.: the complete results have been included for the sake of understanding and completeness. During my graduate studies I worked on two very different protein systems. The theoretical trait d'union between these studies, at the biological level, is the acknowledgement that protein biophysical and structural studies must, in many cases, take into account the dynamical states of protein conformational equilibria and the local physico-chemical conditions in which the system studied actually performs its function. This is introduced in Chapter 2. Two different examples of this are presented: the structural significance deriving from the action of mechanical forces in vivo (Chapter 3) and the complexity of conformational equilibria in intrinsically unstructured proteins and amyloid formation (Chapter 4). My experimental work investigated both of these examples using the single-molecule force spectroscopy technique (described in Chapters 5 and 6). The work conducted on angiostatin focused on characterizing the relationship between the mechanochemical properties and the mechanism of action of the angiostatin protein and, most importantly, their intertwining with the further layer of complexity due to disulfide redox equilibria (Part III). These studies were accompanied by the elaboration of a theoretical model for a novel signalling pathway that may be relevant in the extracellular space, detailed in Chapter 7.2.
The work conducted on α-synuclein (Part IV) instead brought a whole new twist to the single-molecule force spectroscopy methodology, applying it as a structural technique to elucidate the conformational equilibria present in intrinsically unstructured proteins. These equilibria are of utmost interest from a biophysical point of view, but most importantly because of their direct relationship with amyloid aggregation and, consequently, the aetiology of relevant pathologies such as Parkinson's disease. The work characterized, for the first time, conformational equilibria in an intrinsically unstructured protein at the single-molecule level and, again for the first time, identified a monomeric folded conformation that is correlated with conditions leading to α-synuclein aggregation and, ultimately, Parkinson's disease. During the research work, I also found myself in need of a general-purpose data analysis application for single-molecule force spectroscopy that could solve some logistic and data analysis problems common to this technique. I developed an application that addresses some of these problems, presented herein (Part V), which aims to be publicly released soon.
Abstract:
In the post-genomic era, with the massive production of biological data, understanding the factors affecting protein stability is one of the most important and challenging tasks for highlighting the role of mutations in human maladies. The problem is at the basis of what is referred to as molecular medicine, with the underlying idea that pathologies can be detailed at a molecular level. To this purpose, scientific efforts focus on characterising mutations that hamper protein functions and thereby affect biological processes at the basis of cell physiology. New techniques have been developed with the aim of detailing single nucleotide polymorphisms (SNPs) at large in all the human chromosomes, and as a consequence the information in specific databases is increasing exponentially. Mutations found at the DNA level, when occurring in transcribed regions, may lead to mutated proteins, and this can be a serious medical problem, largely affecting the phenotype. Bioinformatics tools are urgently needed to cope with the flood of genomic data stored in databases and to analyse the role of SNPs at the protein level. Several experimental and theoretical observations suggest that protein stability in the solvent-protein space is responsible for correct protein functioning. Mutations found to be disease related during DNA analysis are therefore often assumed to perturb protein stability as well. However, so far no extensive analysis at the proteome level has investigated whether this is the case. Computational methods have also been developed to infer whether a mutation is disease related and, independently, whether it affects protein stability. Whether the perturbation of protein stability is related to what is routinely referred to as disease therefore remains a big question mark.
In this work we have tried, for the first time, to explore the relation between mutations at the protein level and their relevance to disease through a large-scale computational study of data from different databases. To this aim, in the first part of the thesis we derived two probabilistic indices for each mutation type (for 141 out of 150 possible SNPs): the perturbing index (Pp), which indicates the probability that a given mutation affects protein stability, considering all the in vitro thermodynamic data available, and the disease index (Pd), which indicates the probability that a mutation is disease related, given all the mutations that have been clinically associated so far. We find, with robust statistics, that the two indices correlate, with the exception of mutations that are related to somatic cancer. Each of the 150 mutation types can thus be coded by two values that allow a direct comparison with database information. Furthermore, we also implemented a computational method that, starting from the protein structure, predicts the effect of a mutation on protein stability, and we find that it outperforms a set of other predictors performing the same task. The predictor is based on support vector machines and takes protein tertiary structures as input. We show that the predicted data correlate well with the data from the databases. All our efforts therefore add to the SNP annotation process and, more importantly, establish the relationship between protein stability perturbation and the human variome, leading to the diseasome.
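A schematic of the two indices on made-up counts: the actual Pp and Pd were derived from in vitro thermodynamic data and clinical annotations, so the mutation labels and numbers here are hypothetical.

```python
import math

# counts[mut] = (perturbing observations, total in vitro observations,
#                disease-related observations, total clinical observations)
# All numbers are invented for illustration.
counts = {
    "A->V": (30, 50, 12, 40),
    "R->W": (45, 50, 35, 40),
    "D->G": (25, 50, 20, 40),
    "S->T": (5, 50, 3, 40),
}

Pp = {m: c[0] / c[1] for m, c in counts.items()}  # P(perturbs stability)
Pd = {m: c[2] / c[3] for m, c in counts.items()}  # P(disease related)

# Pearson correlation between the two indices across mutation types.
xs, ys = list(Pp.values()), list(Pd.values())
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r = cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
print(round(r, 3))
```

A positive r across the 141 covered mutation types is exactly the kind of correlation between stability perturbation and disease association that the abstract reports.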
Abstract:
The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We showed that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections only depend on one external momentum. We showed the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
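The double Mellin-Barnes representation mentioned above is built from the standard one-fold identity, which trades a sum raised to a power for a contour integral (stated here for reference, for $|\arg(A/B)| < \pi$):

```latex
\frac{1}{(A+B)^{\lambda}}
  \;=\; \frac{1}{2\pi i\,\Gamma(\lambda)}
    \int_{c-i\infty}^{c+i\infty}\!\mathrm{d}z\;
    \Gamma(\lambda+z)\,\Gamma(-z)\,\frac{B^{z}}{A^{\lambda+z}},
  \qquad -\operatorname{Re}\lambda < c < 0 .
```

Closing the contour to the right and collecting the residues of $\Gamma(-z)$ turns the integral into a power series in $B/A$, which is the step that produces the nested sums handled by the nestedsums library.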
Abstract:
In the present work, the formation and migration of point defects induced by electron irradiation in carbon nanostructures, including carbon onions, nanotubes and graphene layers, were investigated by in-situ TEM. The mobility of carbon atoms normal to the layers in graphitic nanoparticles, the mobility of carbon interstitials inside SWCNTs, and the migration of foreign atoms in graphene layers or in layers of carbon nanotubes were studied. The diffusion of carbon atoms in carbon onions was investigated by annealing carbon onions and observing the relaxation of the compressed clusters in the temperature range of 1200–2000 °C. An activation energy of 5.0±0.3 eV was obtained. This rather high activation energy for atom exchange between the layers not only prevents the exchange of carbon atoms between the layers at lower temperature but also explains the high morphological and mechanical stability of graphite nanostructures. The migration of carbon atoms in SWCNTs was investigated quantitatively by cutting SWCNT bundles repeatedly with a focused electron beam at different temperatures. A migration barrier of about 0.25 eV was obtained for the diffusion of carbon atoms inside SWCNTs. This is an experimental confirmation of the high mobility of interstitial atoms inside carbon nanotubes, which corroborates previously developed theoretical models of interstitial diffusivity. Individual Au and Pt atoms in one- or two-layered graphene planes and MWCNTs were monitored in real time at high temperatures by high-resolution TEM. The direct observation of the behavior of Au and Pt atoms in graphenic structures in a temperature range of 600–700 °C allows us to determine the sites occupied by the metal atoms in the graphene layer and the diffusivities of the metal atoms. It was found that metal atoms were located in single or multiple carbon vacancies, not in off-plane positions, and diffused by site exchange with carbon atoms.
Metal atoms showed a tendency to form clusters that were stable for a few seconds. An activation energy of around 2.5 eV was obtained for the in-plane migration of both Au and Pt atoms in graphene (two-dimensional diffusion). The rather high activation energy indicates covalent bonding between metal and carbon atoms. Metal atoms were also observed to diffuse along the open edge of graphene layers (one-dimensional diffusion) with a slightly lower activation energy of about 2.3 eV. It was also found that the diffusion of metal atoms in curved graphenic layers of MWCNTs is slightly faster than in planar graphene.
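The activation energies above come from Arrhenius analysis of temperature-dependent diffusivities. The sketch below generates synthetic diffusivities with an assumed Ea of 2.5 eV and recovers it from the slope of ln D versus 1/T; the temperatures and prefactor are illustrative, not the measured data.

```python
import numpy as np

kB = 8.617e-5                       # Boltzmann constant, eV/K
Ea_true, D0 = 2.5, 1.0e-6           # assumed activation energy and prefactor
T = np.array([873.0, 898.0, 923.0, 948.0, 973.0])   # roughly 600-700 °C in K
D = D0 * np.exp(-Ea_true / (kB * T))                # synthetic diffusivities

# Arrhenius plot: ln D = ln D0 - Ea/(kB*T), so the slope vs 1/T gives -Ea/kB.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea_fit = -slope * kB
print(round(Ea_fit, 3))             # recovered activation energy in eV
```

With real TEM data the scatter of the points around the fitted line is what produces uncertainties like the ±0.3 eV quoted above.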
Abstract:
Over the years, the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation, and general applicability to a variety of problems. Interest in the topic has grown among researchers, and the method has seen significant development in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate. As a result, it has come to be known as the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been proven to fail for problems with strong material discontinuities, as well as for problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method can overcome these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be indicated here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
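The core DQ idea above (derivatives as weighted sums of nodal values) can be sketched concretely. The following assumes Shu's polynomial-based GDQ weighting coefficients on Chebyshev-Gauss-Lobatto nodes; the node count and test function are arbitrary choices.

```python
import numpy as np

def cgl_nodes(n, a=0.0, b=1.0):
    """Chebyshev-Gauss-Lobatto points mapped to [a, b]."""
    x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
    return a + (b - a) * x

def gdq_weights(x):
    """First-derivative DQ matrix A: f'(x_i) ~ sum_j A[i, j] * f(x_j),
    using a_ij = M'(x_i) / ((x_i - x_j) * M'(x_j)) for i != j."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    M = np.prod(diff, axis=1)            # M'(x_i) = prod_{k != i}(x_i - x_k)
    A = M[:, None] / (diff * M[None, :])
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))  # rows sum to zero (constants -> 0)
    return A

x = cgl_nodes(15)
A = gdq_weights(x)
err = np.max(np.abs(A @ np.sin(x) - np.cos(x)))
print(err)   # near machine precision: spectral accuracy on smooth functions
```

Applying A to the vector of nodal values of sin(x) reproduces cos(x) to near machine precision with only 15 nodes, which is the accuracy advantage the paragraph above attributes to DQ.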
Abstract:
Groundwater represents one of the most important resources of the world, and it is essential to prevent its pollution and to consider remediation interventions in case of contamination. According to the scientific community, the characterization and management of contaminated sites have to be performed in terms of contaminant fluxes, considering their spatial and temporal evolution. One of the most suitable approaches to determine the spatial distribution of pollutants and to quantify contaminant fluxes in groundwater is the use of control panels. The determination of contaminant mass flux requires measurement of the contaminant concentration in the moving phase (water) and of the velocity/flux of the groundwater. In this Master's thesis, a new solute mass flux measurement approach is proposed, based on an integrated control panel type methodology combined with the Finite Volume Point Dilution Method (FVPDM) for the monitoring of transient groundwater fluxes. Moreover, a new adsorption passive sampler, which allows capturing the variation of solute concentration with time, is designed. The present work contributes to the development of this approach on three key points. First, the ability of the FVPDM to monitor transient groundwater fluxes was verified during a step-drawdown test at the experimental site of Hermalle-sous-Argenteau (Belgium). The results showed that this method can be used, with optimal results, to follow transient groundwater fluxes. Moreover, performing the FVPDM in several piezometers during a pumping test allows the determination of the different flow rates and flow regimes that can occur in the various parts of an aquifer. The second field test, aimed at determining the representativeness of a control panel for measuring mass flux in groundwater, underlined that wrong evaluations of Darcy fluxes and discharge surfaces can lead to an incorrect estimation of mass fluxes, and that this technique has to be used with caution.
Thus, a detailed geological and hydrogeological characterization must be conducted before applying this technique. Finally, the third outcome of this work concerned laboratory experiments. The tests conducted on several types of adsorption material (Oasis HLB cartridge, TDS-ORGANOSORB 10 and TDS-ORGANOSORB 10-AA), in order to determine the optimal medium for dimensioning the passive sampler, highlighted the necessity of finding a material with a reversible adsorption tendency to completely satisfy the requirements of the new passive sampling technique.
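The FVPDM builds on the classical point-dilution experiment, in which a tracer mixed in a well volume V decays exponentially as groundwater flow Q flushes it out. The sketch below shows only this classical limit, with made-up well volume, flow, and concentrations.

```python
import math

# Classical point dilution: C(t) = C0 * exp(-Q * t / V).
# All numbers below are invented for illustration.
V = 5.0e-3          # mixed water volume in the well screen, m^3
C0 = 100.0          # initial tracer concentration, mg/L
Q_true = 2.0e-6     # transit groundwater flow through the screen, m^3/s

times = [0.0, 600.0, 1200.0, 1800.0, 2400.0]              # s
conc = [C0 * math.exp(-Q_true * t / V) for t in times]    # "measured" C(t)

# Recover Q from the slope of ln C versus t (least squares, by hand).
n = len(times)
mt = sum(times) / n
ml = sum(math.log(c) for c in conc) / n
slope = (sum((t - mt) * (math.log(c) - ml) for t, c in zip(times, conc))
         / sum((t - mt) ** 2 for t in times))
Q_est = -slope * V
print(Q_est)        # recovered transit flow, m^3/s
```

Monitoring how this recovered Q changes between repeated dilution cycles is, in essence, how the FVPDM follows transient groundwater fluxes during a pumping test.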
Abstract:
HIV virulence, i.e. the time of progression to AIDS, varies greatly among patients. As for other rapidly evolving pathogens of humans, it is difficult to know if this variance is controlled by the genotype of the host or that of the virus because the transmission chain is usually unknown. We apply the phylogenetic comparative approach (PCA) to estimate the heritability of a trait from one infection to the next, which indicates the control of the virus genotype over this trait. The idea is to use viral RNA sequences obtained from patients infected by HIV-1 subtype B to build a phylogeny, which approximately reflects the transmission chain. Heritability is measured statistically as the propensity for patients close in the phylogeny to exhibit similar infection trait values. The approach reveals that up to half of the variance in set-point viral load, a trait associated with virulence, can be heritable. Our estimate is significant and robust to noise in the phylogeny. We also check for the consistency of our approach by showing that a trait related to drug resistance is almost entirely heritable. Finally, we show the importance of taking into account the transmission chain when estimating correlations between infection traits. The fact that HIV virulence is, at least partially, heritable from one infection to the next has clinical and epidemiological implications. The difference between earlier studies and ours comes from the quality of our dataset and from the power of the PCA, which can be applied to large datasets and accounts for within-host evolution. The PCA opens new perspectives for approaches linking clinical data and evolutionary biology because it can be extended to study other traits or other infectious diseases.
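As a toy illustration of trait heritability along a transmission chain: if each recipient inherits a fraction h2 of the donor's set-point deviation plus noise, donor-recipient regression recovers h2. The study itself estimates heritability phylogenetically, without known transmission pairs, so the model, h2, and noise level below are all assumptions.

```python
import random

# Simulated donor-recipient pairs for a heritable infection trait.
random.seed(42)
h2, n = 0.5, 5000
donors = [random.gauss(0.0, 1.0) for _ in range(n)]
# Noise variance chosen so recipients have unit variance too.
recipients = [h2 * d + random.gauss(0.0, (1.0 - h2**2) ** 0.5)
              for d in donors]

# Parent-offspring style regression: the slope estimates h2.
md = sum(donors) / n
mr = sum(recipients) / n
slope = (sum((d - md) * (r - mr) for d, r in zip(donors, recipients))
         / sum((d - md) ** 2 for d in donors))
print(round(slope, 2))   # close to the assumed h2 of 0.5
```

The phylogenetic comparative approach generalizes this pairwise regression to a whole transmission tree, measuring how strongly trait values of phylogenetically close patients resemble each other.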