928 results for Discriminating limits
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We study the process e+e- → γγνν̄ in the context of a strong electroweak symmetry breaking model, which can be a source of events with two photons plus missing energy at LEP2. We investigate bounds on the model assuming that no deviation from the standard model is observed within a given experimental error.
Discriminating Different Classes of Biological Networks by Analyzing the Graphs Spectra Distribution
Abstract:
The brain's structural and functional systems, protein-protein interaction, and gene networks are examples of biological systems that share some features of complex networks, such as highly connected nodes, modularity, and small-world topology. Recent studies indicate that some pathologies present topological network alterations relative to norms seen in the general population. Therefore, methods to discriminate the processes that generate the different classes of networks (e.g., normal and disease) might be crucial for the diagnosis, prognosis, and treatment of the disease. It is known that several topological properties of a network (graph) can be described by the distribution of the spectrum of its adjacency matrix. Moreover, large networks generated by the same random process have the same spectrum distribution, allowing us to use it as a "fingerprint". Based on this relationship, we introduce the entropy of a graph spectrum to measure the "uncertainty" of a random graph, and the Kullback-Leibler and Jensen-Shannon divergences between graph spectra to compare networks. We also introduce general methods for model selection and network model parameter estimation, as well as a statistical procedure to test the nullity of divergence between two classes of complex networks. Finally, we demonstrate the usefulness of the proposed methods by applying them to (1) protein-protein interaction networks of different species and (2) networks derived from children diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) and typically developing children. We conclude that scale-free networks best describe all of the protein-protein interactions. Also, we show that our proposed measures succeeded in identifying topological changes in the networks while other commonly used measures (number of edges, clustering coefficient, average path length) failed.
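As a concrete illustration of the spectral comparison described above, the following sketch estimates the smoothed spectral density of an adjacency matrix, its entropy, and the Jensen-Shannon divergence between two spectra. It assumes numpy and scipy; the Gaussian kernel smoothing, the grid construction, and the toy random graphs are illustrative choices, not the authors' exact implementation.

```python
# Sketch: compare two networks by the distribution of their adjacency spectra
# (spectral entropy, KL and JS divergence), as described in the abstract above.
import numpy as np
from scipy.stats import gaussian_kde

def adjacency_spectrum(adjacency):
    """Real spectrum of a symmetric adjacency matrix."""
    return np.linalg.eigvalsh(adjacency)

def spectral_density(eigenvalues, grid):
    """Smoothed spectral density on a common grid (Gaussian kernel estimate)."""
    density = gaussian_kde(eigenvalues)(grid)
    density = np.clip(density, 1e-12, None)          # avoid log(0) later
    return density / np.trapz(density, grid)         # renormalize on the grid

def spectral_entropy(p, grid):
    return -np.trapz(p * np.log(p), grid)

def kl_divergence(p, q, grid):
    return np.trapz(p * np.log(p / q), grid)

def js_divergence(p, q, grid):
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m, grid) + 0.5 * kl_divergence(q, m, grid)

def random_graph(n, p_edge, rng):
    """Toy Erdos-Renyi adjacency matrix (illustration only)."""
    a = np.triu((rng.random((n, n)) < p_edge).astype(float), 1)
    return a + a.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g1 = adjacency_spectrum(random_graph(200, 0.05, rng))
    g2 = adjacency_spectrum(random_graph(200, 0.10, rng))
    lo, hi = min(g1.min(), g2.min()) - 1, max(g1.max(), g2.max()) + 1
    grid = np.linspace(lo, hi, 2000)
    p1, p2 = spectral_density(g1, grid), spectral_density(g2, grid)
    print("spectral entropy G1:", round(spectral_entropy(p1, grid), 3))
    print("spectral entropy G2:", round(spectral_entropy(p2, grid), 3))
    print("Jensen-Shannon divergence:", round(js_divergence(p1, p2, grid), 4))
```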
Abstract:
The restructuring of mental health care in the city of Fortaleza (Northeastern Brazil) is a recent historical and political process. Compared with other municipalities of the State of Ceará, which were already pioneering this process in the early 1990s, Fortaleza delayed implementing the changes because of the interests of psychiatric hospitals and of the psychiatric outpatient clinics of the public network, and because of the difficulty in managing the new mental health devices and equipment present in Primary Care. In the municipality, the reorganization of mental health actions and services has required the Primary Care Network to face the challenge of assisting mental health problems through the implementation of Matrix Support. In light of this context, we aimed to evaluate Matrix Support in mental health in Primary Care Units and to identify achievements and limitations in the Primary Care Units where it has been implemented. This study used a qualitative approach and was carried out as a case study. We interviewed twelve professionals from the Family Health Teams of four Units with implemented Matrix Support. The analysis of the information reveals that access, decision making, participation, and the challenges of implementing Matrix Support are elements that are, in a dialectical way, both weak and strong points in the reorganization of services and practices. The presence of Matrix Support in Primary Care highlights the proposal of dealing with mental health within the municipal care network. The process has not ended: mobilization, awareness-raising and qualification of Primary Care must be enhanced constantly, but implementation has enabled the service and its professionals to achieve greater acceptance of mental health in Primary Care.
Abstract:
In Brazil, Protected Areas (PAs) are considered the cornerstone of national strategies for biodiversity conservation. From this point of view, we analyzed thirty protected areas belonging to the Central Corridor of the Atlantic Forest in Bahia, aiming to identify and analyze their current level of implementation. The methodology of Lemos de Sá and Ferreira (2000), which consists of applying a standard scale in which the level of implementation varies over a range of 0 to 5 points, was used with appropriate adaptations. After obtaining the implementation-level data, we used Ward's agglomeration method to help visualize the dissimilarity between the protected areas studied. We adopted the international classification proposed by the IUCN (International Union for Conservation of Nature) so that the areas could be compared with studies from other countries; the areas considered fall into IUCN categories Ia, II, V and VI. As a result, 50% of the protected areas analyzed are reasonably implemented, 40% are inadequately implemented, 6.7% exist only on paper, and only 3.3% can be classified as satisfactorily implemented. These areas present problems in their land regularization and deficiencies in infrastructure and in human and financial resources. Given the results, it is clear that the conservation areas under study must be effectively implemented, and for this to occur environmental policies should focus on actions that consolidate the goals of the conservation strategy.
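A minimal sketch of the Ward aggregation step mentioned above, assuming scipy is available; the score matrix below is invented for illustration and does not reproduce the study's data.

```python
# Sketch: Ward hierarchical clustering of protected areas scored on a 0-5
# implementation scale, as described above. All scores are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = protected areas, columns = implementation criteria scored 0-5
# (illustrative values; the real study assessed 30 areas with its own criteria).
scores = np.array([
    [4, 3, 5, 4],   # area A: reasonably implemented
    [1, 0, 2, 1],   # area B: exists only on paper
    [3, 4, 3, 3],   # area C
    [0, 1, 1, 0],   # area D
])

z = linkage(scores, method="ward")            # Ward's minimum-variance criterion
groups = fcluster(z, t=2, criterion="maxclust")
print("cluster assignment:", groups)          # e.g. [1 2 1 2]
```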
Abstract:
In this work, the reduction reaction of the herbicide paraquat was used to obtain analytical signals using the electrochemical techniques of differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry. Analytes were prepared with laboratory purified water and with natural water samples (from the Mogi-Guaçu River, SP). The electrochemical techniques were applied to 1.0 mol L^-1 Na2SO4 solutions, at pH 5.5, containing different concentrations of paraquat in the range of 1 to 10 µmol L^-1, using a gold ultramicroelectrode. Five replicate experiments were conducted, and in each the mean peak current obtained at -0.70 V vs. Ag/AgCl yielded excellent linear relationships with pesticide concentration. The slope values of the calibration plots (method sensitivity) were 4.06 x 10^-3, 1.07 x 10^-2 and 2.95 x 10^-2 A mol^-1 L for purified water by differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry, respectively. For river water samples, the slope values were 2.60 x 10^-3, 1.06 x 10^-2 and 3.35 x 10^-2 A mol^-1 L, respectively, showing only a small interference from the natural matrix components in paraquat determinations. The detection limits for paraquat were calculated by two distinct methodologies, i.e., the one proposed by IUPAC and a statistical method; the values obtained with multiple square wave voltammetry were 0.002 and 0.12 µmol L^-1, respectively, for pure water electrolytes. When the detection limit obtained following the IUPAC recommendation is inserted into the calibration curve equation, the resulting analytical signal (oxidation current) is smaller than the one experimentally observed for the blank solution under the same experimental conditions. This is inconsistent with the definition of detection limit, so the IUPAC methodology requires further discussion. The same conclusion can be drawn from the detection limits obtained with the other techniques studied.
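As a rough illustration of how a calibration sensitivity and an IUPAC-style detection limit (LOD = 3·σ_blank / slope) can be computed, here is a short sketch assuming numpy; the concentrations, currents and blank replicates are invented numbers, not data from the paper.

```python
# Sketch: least-squares calibration and IUPAC-style detection limit
# (LOD = 3 * sigma_blank / slope). All data below are illustrative.
import numpy as np

conc = np.array([1, 2, 4, 6, 8, 10], dtype=float)             # umol/L of paraquat
peak_current = np.array([4.1, 8.0, 16.3, 24.2, 32.6, 40.4])   # arbitrary units
blank_replicates = np.array([0.31, 0.28, 0.35, 0.30, 0.33])   # blank signal

slope, intercept = np.polyfit(conc, peak_current, 1)          # sensitivity, offset
sigma_blank = blank_replicates.std(ddof=1)

lod_iupac = 3.0 * sigma_blank / slope                         # IUPAC recommendation
print(f"sensitivity = {slope:.3f} a.u. L umol^-1")
print(f"LOD (IUPAC) = {lod_iupac:.3f} umol/L")
```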
Abstract:
The availability and uptake of Cd by lettuce (Lactuca sativa L.) in two common tropical soils (before and after liming) were studied in order to derive human health-based risk soil concentrations. Cadmium concentrations ranging from 1 to 12 mg kg^-1 were added to samples from a clayey Oxisol and a sandy-loam Ultisol under glasshouse conditions. After incubation, a soil sample was taken from each pot, the concentration of Cd in the soil was determined, lettuce was grown for 36 d, and the edible parts were harvested and analyzed for Cd. A positive linear correlation was observed between total soil Cd and the Cd concentration in lettuce. The amount of Cd absorbed by lettuce grown in the Ultisol was about twice the amount absorbed in the Oxisol. Liming increased the soil pH and slightly reduced Cd availability and uptake. CaCl2 extraction was better than DTPA at reflecting differences in the binding strength of Cd between limed and unlimed soils. Risk Cd concentrations in the Ultisol were lower than in the Oxisol, reflecting the greater degree of uptake from the Ultisol. The derived risk Cd values depended on soil type and on the exposure scenario.
Abstract:
The nature of the dark matter in the Universe is one of the greatest mysteries in modern astronomy. The neutralino is a nonbaryonic dark matter candidate in minimal supersymmetric extensions of the standard model of particle physics. If the dark matter halo of our galaxy is made up of neutralinos, some would become gravitationally trapped inside massive bodies like the Earth. Their pair-wise annihilation produces neutrinos that can be detected by neutrino experiments looking in the direction of the centre of the Earth. The AMANDA neutrino telescope, currently the largest in the world, consists of an array of light detectors buried deep in the Antarctic glacier at the geographic South Pole. The extremely transparent ice acts as a Cherenkov medium for muons passing through the array, and using the timing information of the detected photons it is possible to reconstruct the muon direction. A search has been performed for nearly vertically upgoing neutrino-induced muons with AMANDA-B10 data taken over the three-year period 1997-99. No excess above the expected atmospheric neutrino background was found. Upper limits at the 90% confidence level have been set on the annihilation rate of neutralinos at the centre of the Earth and on the muon flux induced by neutrinos created by the annihilation products.
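For readers unfamiliar with this kind of limit setting, the sketch below computes a classical 90% confidence-level Poisson upper limit on a signal count given an expected background, assuming scipy; it is a generic illustration, not the experiment's actual statistical procedure, and the event numbers are invented.

```python
# Sketch: classical 90% CL Poisson upper limit on a signal rate when the
# observed count is compatible with the expected background.
from scipy.optimize import brentq
from scipy.stats import poisson

def upper_limit(n_obs, background, cl=0.90):
    """Smallest signal s such that P(N <= n_obs | s + b) = 1 - cl."""
    def excluded(s):
        return poisson.cdf(n_obs, s + background) - (1.0 - cl)
    return brentq(excluded, 0.0, 1000.0)

if __name__ == "__main__":
    # Illustrative numbers only: 3 events observed, 3.2 expected from atmospheric nu.
    print("90% CL upper limit on signal events:",
          round(upper_limit(n_obs=3, background=3.2), 2))
```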
Abstract:
My work concerns two different systems of equations used in the mathematical modeling of semiconductors and plasmas: the Euler-Poisson system and the quantum drift-diffusion system. The first is given by the Euler equations for the conservation of mass and momentum, coupled with a Poisson equation for the electrostatic potential. The second takes into account physical effects due to the smallness of the devices (quantum effects); it is a simple extension of the classical drift-diffusion model, which consists of two continuity equations for the charge densities coupled with a Poisson equation for the electrostatic potential. Using an asymptotic expansion method, we study (in the steady-state case for a potential flow) the limit to zero of the three physical parameters which arise in the Euler-Poisson system: the electron mass, the relaxation time and the Debye length. For each limit, we prove the existence and uniqueness of profiles for the asymptotic expansion and some error estimates. For a vanishing electron mass or a vanishing relaxation time, this method gives a new approach to the convergence of the Euler-Poisson system to the incompressible Euler equations. For a vanishing Debye length (the quasineutral limit), we obtain a new approach to the existence of solutions when boundary layers can appear (i.e. when no compatibility condition is assumed). Moreover, using an iterative method and either a finite volume scheme or a penalized mixed finite volume scheme, we numerically illustrate the smallness condition on the electron mass needed for the existence of solutions to the system, a condition which has already been established in the literature. For the quantum drift-diffusion model in the transient bipolar one-dimensional case, we show, using a time discretization and energy estimates, the existence of solutions (for a general doping profile). We also prove rigorously the quasineutral limit (for a vanishing doping profile). Finally, using a new time discretization and an algorithmic construction of entropies, we prove some regularity properties for the solutions of the equation obtained in the quasineutral limit (for a vanishing pressure). This new regularity allows us to prove the positivity of solutions to this equation, at least for sufficiently large times.
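For reference, one common scaled form of the isentropic Euler-Poisson system for electrons is reproduced below; the notation, and in particular the placement of the scaled electron mass ε, relaxation time τ and Debye length λ, is illustrative and may differ from the thesis' exact scaling.

```latex
% n: electron density, u: velocity, phi: electrostatic potential, p(n): pressure,
% C(x): doping profile; eps, tau, lambda are the scaled electron mass,
% relaxation time and Debye length whose vanishing limits are studied above.
\begin{aligned}
  &\partial_t n + \nabla\cdot(n u) = 0, \\
  &\varepsilon\left(\partial_t(n u) + \nabla\cdot(n u \otimes u)\right)
     + \nabla p(n) = n\,\nabla\phi - \frac{n u}{\tau}, \\
  &\lambda^{2}\,\Delta\phi = n - C(x).
\end{aligned}
```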
Abstract:
The main work of this thesis concerns the measurement of the ZZ production cross section using LHC 2011 data collected at a center-of-mass energy of 7 TeV by the ATLAS detector, corresponding to a total integrated luminosity of 4.6 fb-1. The ZZ total cross section is compared with the NLO prediction calculated with modern Monte Carlo generators. In addition, the three differential distributions (∆φ(l,l), Z pT and M4l) are shown unfolded back to the underlying distributions using a Bayesian iterative algorithm. Finally, the transverse momentum of the leading Z is used to set limits on anomalous triple gauge couplings, which are forbidden in the Standard Model.
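As background on the unfolding step, the following sketch implements a generic D'Agostini-style iterative Bayesian unfolding, the class of algorithm referred to above; it assumes numpy, and the response matrix and measured spectrum are toy values, not the analysis inputs.

```python
# Sketch: iterative Bayesian (D'Agostini-style) unfolding with a flat prior.
import numpy as np

def bayes_unfold(measured, response, n_iter=4):
    """response[j, i] = P(reconstructed bin j | true bin i)."""
    n_true = response.shape[1]
    efficiency = response.sum(axis=0)             # P(reconstructed at all | true i)
    prior = np.full(n_true, 1.0 / n_true)         # flat starting prior
    unfolded = None
    for _ in range(n_iter):
        # P(true i | reconstructed j) via Bayes' theorem with the current prior
        joint = response * prior                  # shape (n_reco, n_true)
        posterior = joint / joint.sum(axis=1, keepdims=True)
        unfolded = (posterior.T @ measured) / efficiency
        prior = unfolded / unfolded.sum()         # feed back as the next prior
    return unfolded

if __name__ == "__main__":
    response = np.array([[0.8, 0.2, 0.0],
                         [0.2, 0.6, 0.2],
                         [0.0, 0.2, 0.8]])
    measured = np.array([120.0, 95.0, 60.0])
    print("unfolded spectrum:", np.round(bayes_unfold(measured, response), 1))
```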
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. The second part of the thesis focuses on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude gains in speedup and energy efficiency compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, together with a template for a scalable architecture that integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with it.
Abstract:
Evidence accumulated in the last ten years has demonstrated that a large proportion of the mitochondrial respiratory chain complexes in a variety of organisms is arranged in supramolecular assemblies called supercomplexes or respirasomes. Besides conferring a kinetic advantage (substrate channeling) and being required for the assembly and stability of Complex I, indirect considerations support the view that supercomplexes may also prevent excessive formation of reactive oxygen species (ROS) by the respiratory chain. Following this line of thought, we decided to directly investigate ROS production by Complex I under conditions in which the complex is either arranged as a component of the supercomplex I1III2 or dissociated as an individual enzyme. The study was carried out both in bovine heart mitochondrial membranes and in reconstituted proteoliposomes composed of Complexes I and III, in which the supramolecular organization of the respiratory assemblies was impaired by: (i) treatment either of bovine heart mitochondria or of the liposome-reconstituted supercomplex I-III with dodecyl maltoside; (ii) reconstitution of Complexes I and III at a high phospholipid-to-protein ratio. The results of this investigation provide experimental evidence that the production of ROS is strongly increased in either model, supporting the view that disruption or prevention of the association between Complex I and Complex III by different means enhances the generation of superoxide from Complex I. This is the first demonstration that dissociation of the supercomplex I1III2 in the mitochondrial membrane is a cause of oxidative stress from Complex I. Previous work in our laboratory demonstrated that lipid peroxidation can dissociate the supramolecular assemblies; thus, we confirm here the preliminary conclusion that primary causes of oxidative stress may perpetuate ROS generation through a vicious circle involving supercomplex dissociation as a major determinant.
Abstract:
Measurements of the self-couplings between bosons are important to test the electroweak sector of the Standard Model (SM). The production of pairs of Z bosons through the s-channel is forbidden in the SM. The presence of physics beyond the SM could lead to a deviation of the production cross section of pairs of Z bosons from its expected value through so-called anomalous Triple Gauge Couplings (aTGC). Proton-proton collision data recorded at the Large Hadron Collider (LHC) by the ATLAS detector at a center-of-mass energy of 8 TeV were analyzed, corresponding to an integrated luminosity of 20.3 fb-1. Pairs of Z bosons decaying into two electron-positron pairs are searched for in the data sample. The effect of including detector regions corresponding to high values of the pseudorapidity was studied in order to enlarge the phase space available for the measurement of ZZ production. The number of ZZ candidates was determined and the ZZ production cross section was measured to be 7.3 ± 1.0 (stat.) ± 0.4 (syst.) ± 0.2 (lumi.) pb, which is consistent with the SM expectation of 7.2 ± 0.3 pb. Limits on the aTGCs were derived using the observed yield; they are twice as stringent as previous limits obtained by ATLAS at a center-of-mass energy of 7 TeV.
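To illustrate how limits can be extracted from an observed yield, the sketch below scans a Poisson likelihood over an anomalous coupling g, assuming (as is typical for aTGC analyses) that the expected yield is quadratic in g; all yields and coefficients are invented, not ATLAS numbers.

```python
# Sketch: confidence interval on an anomalous coupling g from an observed
# event yield, with an (assumed) quadratic yield parametrization N(g).
import numpy as np
from scipy.stats import poisson

def expected_yield(g, n_sm=70.0, lin=5.0, quad=400.0):
    return n_sm + lin * g + quad * g**2          # hypothetical parametrization

def nll(g, n_obs):
    return -poisson.logpmf(n_obs, expected_yield(g))

if __name__ == "__main__":
    n_obs = 72
    grid = np.linspace(-0.3, 0.3, 601)
    scan = np.array([nll(g, n_obs) for g in grid])
    scan -= scan.min()
    # 95% CL interval from a one-parameter likelihood scan: 2*deltaNLL < 3.84
    allowed = grid[2.0 * scan < 3.84]
    print(f"95% CL interval on g: [{allowed.min():.3f}, {allowed.max():.3f}]")
```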
Abstract:
The primary goal of this work is the extension of an analytic electro-optical model. It is used to describe single-junction crystalline silicon solar cells and a silicon/perovskite tandem solar cell in the presence of light trapping, in order to calculate efficiency limits for such a device. In particular, our tandem system is composed of crystalline silicon and a perovskite-structured material: methylammonium lead triiodide (MALI). Perovskites are among the most cost-effective materials for photovoltaics thanks to their reduced cost and increasing efficiencies: solar cell efficiencies of devices using these materials increased from 3.8% in 2009 to a certified 20.1% in 2014, making this the fastest-advancing solar technology to date. Moreover, texturization increases the amount of light which can be absorbed by an active layer. Using Green's formalism it is possible to calculate analytically the photogeneration rate of a single-layer structure with Lambertian light trapping. In this work we go further: we study the optical coupling between the two cells in our tandem system in order to calculate the photogeneration rate of the whole structure. We also model the electronic part of the device by considering the perovskite top cell as an ideal diode and solving the drift-diffusion equations with appropriate boundary conditions for the silicon bottom cell. We have a four-terminal structure, so our tandem system is totally unconstrained. We then calculate the efficiency limits of our tandem including several recombination mechanisms such as Auger, SRH and surface recombination. We also focus on the dependence of the results on the band gap of the perovskite and we calculate the band gap that optimizes the tandem efficiency. The whole work has been continuously supported by numerical validation of our analytic model against Silvaco ATLAS, which solves the drift-diffusion equations using a finite elements method. Our goal is to develop a simpler and cheaper, but accurate, model to study such devices.
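As a toy counterpart to the four-terminal (unconstrained) tandem described above, the sketch below models each subcell as an ideal diode operated at its own maximum power point and adds the two power densities; the photocurrents and saturation currents are invented values, not outputs of the optical model.

```python
# Sketch: efficiency of an unconstrained (four-terminal) tandem where each
# subcell is an ideal diode at its own maximum power point. Toy values only.
import numpy as np

KT_Q = 0.02585            # thermal voltage at 300 K [V]
P_SUN = 1000.0            # AM1.5G incident power [W/m^2]

def max_power(j_ph, j0):
    """Maximum power density [W/m^2] of an ideal diode under illumination."""
    v = np.linspace(0.0, 2.0, 20001)
    j = j_ph - j0 * (np.exp(v / KT_Q) - 1.0)     # ideal-diode law, A/m^2
    return np.max(j * v)

if __name__ == "__main__":
    p_top = max_power(j_ph=180.0, j0=1e-16)      # perovskite top cell (toy values)
    p_bottom = max_power(j_ph=160.0, j0=1e-9)    # filtered silicon bottom cell
    eta = (p_top + p_bottom) / P_SUN
    print(f"four-terminal tandem efficiency ~ {100 * eta:.1f} %")
```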