929 results for Maximum nodal injection
Abstract:
Background: The ability to manipulate the genetic networks underlying the physiological and behavioural repertoires of the adult honeybee worker (Apis mellifera) is likely to deepen our understanding of issues such as learning and memory generation, ageing, and the regulatory anatomy of social systems in proximate as well as evolutionary terms. Here we assess two methods for probing gene function by RNA interference (RNAi) in adult honeybees. Results: The vitellogenin gene was chosen as the target because its expression is unlikely to have a phenotypic effect until the adult stage in bees. This allowed us to introduce dsRNA into preblastoderm eggs without affecting gene function during development. Of workers reared from eggs injected with dsRNA derived from a 504 bp stretch of the vitellogenin coding sequence, 15% had strongly reduced levels of vitellogenin mRNA. When dsRNA was introduced by intra-abdominal injection in newly emerged bees, almost all individuals (96%) showed the mutant phenotype. An RNA fragment with an apparent size similar to the template dsRNA was still present in this group after 15 days. Conclusion: Injection of dsRNA into eggs at the preblastoderm stage seems to allow disruption of gene function in all developmental stages. To dissect gene function in the adult stage, the intra-abdominal injection technique seems superior to egg injection, as it gives a much higher penetrance, it is much simpler, and it makes it possible to address genes that are also expressed in the embryonic, larval or pupal stages.
Abstract:
The clustering problem consists of finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper presents the study of a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) over all clusters. This paper has two main objectives: to propose heuristics for the MMD problem and to evaluate how well the results of the best proposed heuristic agree with the real classification of some data sets. Regarding the first objective, the experimental results indicate good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the most widely used method in the literature for this problem). Regarding agreement with the real classification of the data sets, however, the proposed heuristic achieved better-quality results than the C-Means algorithm, but worse than Complete Linkage.
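As a minimal illustration of the MMD criterion only (not the heuristics proposed in the paper), the sketch below scores an existing hard partition by its largest within-cluster diameter, assuming Euclidean data; the function and variable names are mine.

```python
import numpy as np

def max_cluster_diameter(X, labels):
    """MMD objective: the largest within-cluster pairwise Euclidean distance
    (diameter) over all clusters. X: (n, d) array, labels: (n,) cluster ids."""
    worst = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        if len(pts) < 2:
            continue  # a singleton cluster has zero diameter
        diffs = pts[:, None, :] - pts[None, :, :]
        diam = np.sqrt((diffs ** 2).sum(axis=-1)).max()
        worst = max(worst, diam)
    return worst

# Usage sketch: compare two candidate partitions of the same data
# and prefer the one with the smaller maximum diameter.
X = np.random.rand(100, 2)
labels_a = (X[:, 0] > 0.5).astype(int)
labels_b = (X[:, 1] > 0.5).astype(int)
print(max_cluster_diameter(X, labels_a), max_cluster_diameter(X, labels_b))
```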
Abstract:
PURPOSE: To present a review comparing bile duct ligation versus carbon tetrachloride injection for inducing experimental liver cirrhosis. METHODS: The research was conducted through the Medline/PubMed and SciELO websites, searching for papers on "induction of liver cirrhosis in rats". We found 107 articles, of which 30 published between 2004 and 2011 were selected. RESULTS: The most common methods used for inducing liver cirrhosis in the rat were administration of carbon tetrachloride (CCl4) and bile duct ligation (BDL). CCl4 induced cirrhosis from 36 hours to 18 weeks after injection, and BDL from seven days to four weeks after surgery. CONCLUSION: As a safer method for inducing cirrhosis, BDL is preferable to CCl4 because of the absence of toxicity to researchers and the shorter time needed to achieve cirrhosis.
Abstract:
Brazilian design code ABNT NBR6118:2003 - Design of Concrete Structures - Procedures [1] proposes the use of simplified models for the consideration of non-linear material behavior in the evaluation of horizontal displacements in buildings. These models penalize the stiffness of columns and beams, representing the effects of concrete cracking and avoiding costly physically non-linear analyses. The objectives of the present paper are to investigate the accuracy and uncertainty of these simplified models, as well as to evaluate the reliability of structures designed following ABNT NBR6118:2003 [1] in the service limit state for horizontal displacements. Model error statistics are obtained from 42 representative plane frames. The reliabilities of three typical (4-, 8- and 12-floor) buildings are evaluated using the simplified models and a rigorous, physically and geometrically non-linear analysis. Results show that the 70/70 (column/beam stiffness reduction) model is more accurate and less conservative than the 80/40 model. Results also show that the ABNT NBR6118:2003 [1] design criteria for horizontal displacement limit states (masonry damage according to ACI 435.3R-68 (1984) [10]) are conservative, and result in reliability indexes larger than those recommended in EUROCODE [2] for irreversible service limit states.
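A hedged reading of the reduction factors named above (the notation is mine; the exact clauses should be checked against ABNT NBR6118:2003 [1]): the "70/70" and "80/40" labels denote the secant stiffness assigned to columns and beams, respectively, as fractions of the gross-section stiffness E_c I_c.

\[
\text{70/70 model: } (EI)_{\mathrm{col}} = (EI)_{\mathrm{beam}} = 0.7\,E_{c}I_{c};
\qquad
\text{80/40 model: } (EI)_{\mathrm{col}} = 0.8\,E_{c}I_{c},\;\; (EI)_{\mathrm{beam}} = 0.4\,E_{c}I_{c}.
\]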
Abstract:
OBJECTIVE: To define and compare the numbers and types of occlusal contacts in maximum intercuspation. METHODS: The study consisted of clinical and photographic analysis of occlusal contacts in maximum intercuspation. Twenty-six Caucasian Brazilian subjects (20 males and 6 females, aged 12 to 18 years) were selected before orthodontic treatment. The subjects were diagnosed and grouped as follows: 13 with Angle Class I malocclusion and 13 with Angle Class II Division 1 malocclusion. After analysis, the occlusal contacts were classified according to the established criteria as: tripodism, bipodism, monopodism (respectively, three, two or one contact point with the slope of the fossa); cusp to one marginal ridge; cusp to two marginal ridges; cusp tip to opposite inclined plane; surface to surface; and edge to edge. RESULTS: The mean number of occlusal contacts per subject was 43.38 in Class I malocclusion and 44.38 in Class II Division 1 malocclusion; this difference was not statistically significant (p > 0.05). CONCLUSIONS: A variety of factors influence the number of occlusal contacts in Class I and Class II Division 1 malocclusions. There is no standardization of occlusal contact type according to the studied malocclusions. A proper selection of occlusal contact types, such as cusp to fossa or cusp to marginal ridge, and their location on the teeth should be defined individually according to the demands of each case. Adequate occlusal contacts lead to a correct distribution of forces, promoting periodontal health.
Abstract:
To interpret the mean depth of cosmic-ray air shower maximum and its dispersion, we parametrize these two observables as functions of the first two moments of the lnA distribution. We examine the goodness of this simple method through simulations of test mass distributions. The application of the parameterization to Pierre Auger Observatory data allows one to study the energy dependence of the mean lnA and of its variance under the assumption of selected hadronic interaction models. We discuss possible implications of these dependences in terms of interaction models and astrophysical cosmic-ray sources.
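A sketch of the generic two-moment parameterization of this kind (the coefficients are energy- and hadronic-interaction-model dependent and are not specified in the abstract):

\[
\langle X_{\max}\rangle \simeq \langle X_{\max}\rangle_{p} + f_{E}\,\langle \ln A\rangle,
\qquad
\sigma^{2}(X_{\max}) \simeq \langle \sigma^{2}_{\mathrm{sh}}\rangle + f_{E}^{2}\,\sigma^{2}_{\ln A},
\]

where \(\langle X_{\max}\rangle_{p}\) is the proton mean depth of maximum, \(f_{E}\) encodes the model and energy dependence, \(\langle\sigma^{2}_{\mathrm{sh}}\rangle\) is the lnA-averaged shower-to-shower fluctuation, and \(\sigma^{2}_{\ln A}\) is the variance of the lnA distribution.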
Abstract:
The most typical maximum tests for measuring leg muscle performance are the one-repetition maximum leg press test (1RMleg) and the isokinetic knee extension/flexion (IKEF) test. However, their inter-correlations have not been well documented, particularly the predicted values from these evaluations. This correlational and regression analysis study involved 30 healthy young males aged 18-24 years, who performed both tests. Pearson's product-moment correlations between 1RMleg and IKEF varied from 0.20 to 0.69, and the most accurate prediction was that of 1RMleg (R2 = 0.71). The study showed correlations between 1RMleg and IKEF, although these tests are different (isotonic vs. isokinetic), and provided further support for cross-prediction of 1RMleg and IKEF by linear and multiple linear regression analysis.
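A hedged sketch of the kind of cross-prediction described above, fitting an ordinary least-squares multiple regression of 1RMleg on isokinetic peak torques; the numbers and variable names are invented for illustration only.

```python
import numpy as np

# Hypothetical data: predicting 1RM leg press (kg) from isokinetic knee
# extension/flexion peak torques (N*m) for 30 subjects.
rng = np.random.default_rng(0)
ext_torque = rng.normal(220, 30, size=30)    # knee extension peak torque
flex_torque = rng.normal(120, 20, size=30)   # knee flexion peak torque
one_rm = 0.8 * ext_torque + 0.3 * flex_torque + rng.normal(0, 15, size=30)

# Multiple linear regression by ordinary least squares.
X = np.column_stack([np.ones_like(ext_torque), ext_torque, flex_torque])
beta, *_ = np.linalg.lstsq(X, one_rm, rcond=None)
pred = X @ beta
r2 = 1 - ((one_rm - pred) ** 2).sum() / ((one_rm - one_rm.mean()) ** 2).sum()
print(f"coefficients: {beta}, R^2 = {r2:.2f}")
```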
Abstract:
The present study aimed to determine the spawning efficacy and the egg quality and quantity of captive-bred meagre induced with a single gonadotrophin-releasing hormone agonist (GnRHa) injection of 0, 1, 5, 10, 15, 20, 25, 30, 40 or 50 μg kg⁻¹, in order to determine a recommended optimum dose to induce spawning. The doses of 10, 15 and 20 μg kg⁻¹ gave eggs with the highest quality (measured as percentage of viability, floating, fertilisation and hatch) and quantity (measured as total number of eggs, number of viable eggs, number of floating eggs, number of hatched larvae and number of larvae that reabsorbed the yolk sac). All egg quantity parameters were described by Gaussian regression analysis with R2 = 0.89 or R2 = 0.88. The Gaussian regression analysis identified 15 μg kg⁻¹ as the optimal dose. The regression analysis highlighted that this comprehensive study examined doses ranging from low doses insufficient to stimulate a high spawning response (significantly lower egg quantities, p < 0.05, compared to 15 μg kg⁻¹) through to high doses that stimulated the spawning of significantly lower egg quantities and eggs of significantly lower quality (egg viability). In addition, the latency period (time from hormone application to spawning) decreased with increasing dose, following a regression with R2 = 0.93, which suggests that higher doses accelerated oocyte development, which in turn reduced egg quality and quantity. The identification of an optimal dose for the spawning of meagre, which has high aquaculture potential, represents an important advance for the Mediterranean aquaculture industry.
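A hedged sketch of a Gaussian dose-response fit of the type described above; the dose levels follow the abstract, but the egg-quantity values are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# GnRHa doses from the abstract (μg/kg); egg counts are hypothetical.
dose = np.array([0, 1, 5, 10, 15, 20, 25, 30, 40, 50], dtype=float)
eggs = np.array([0.1, 0.5, 2.0, 4.5, 5.2, 4.6, 3.0, 2.0, 0.8, 0.3])  # e.g. millions of eggs

def gaussian(x, a, mu, sigma):
    """Bell-shaped dose-response: peak height a, optimum dose mu, width sigma."""
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(a, mu, sigma), _ = curve_fit(gaussian, dose, eggs, p0=[5, 15, 10])
print(f"estimated optimal dose ~ {mu:.1f} μg/kg")
```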
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a revision of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through μG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during these cluster mergers re-accelerates high energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime < 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, which is due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present modelling. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions: Is it possible to model self-consistently the evolution of these sources together with that of the parent clusters?
How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency? How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected? Is it possible to reproduce, within the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters? Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations? Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM that are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvenic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfven waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that, during a merger, a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical μG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies, and it allows one to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a revision of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an average size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the formation process of clusters, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is, however, well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last geometrical MH-RH correlation allows us to observationally overcome the limitation of the average size of Radio Halos. Thus in this Chapter, by making use of this geometrical correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, ...) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster, and this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass and thus that the non-thermal component in clusters is not self-similar.
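As a rough sketch of the balance argument invoked above (generic notation, not the thesis's own formulae): equating a systematic acceleration rate with the synchrotron plus inverse-Compton loss rate gives the cut-off Lorentz factor, and hence the maximum synchrotron frequency.

\[
\chi\,\gamma \;\simeq\; \beta\!\left(B^{2}+B_{\mathrm{CMB}}^{2}\right)\gamma^{2}
\;\;\Longrightarrow\;\;
\gamma_{\max}\simeq \frac{\chi}{\beta\!\left(B^{2}+B_{\mathrm{CMB}}^{2}\right)},
\qquad
\nu_{\max}\;\propto\;\gamma_{\max}^{2}\,B,
\]

where \(\chi\) is the turbulent acceleration rate, \(\beta\) collects the radiative-loss constants, and \(B_{\mathrm{CMB}}\simeq 3.2\,(1+z)^{2}\,\mu\mathrm{G}\) is the equivalent field of the CMB.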
Abstract:
This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}_{j=1,...,m} is the control distance obtained by minimizing the time needed to go between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling X-ellipticity and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutierrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields satisfy left invariance. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition of regularity, for boundary points, for the Dirichlet problem on an open subset of R^N related to the sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and introduce the notion of quasi-boundedness. Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
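One standard way to write the control distance described above (a sketch; conventions vary slightly across the literature):

\[
d_{CC}(x,y)=\inf\Big\{T>0:\ \exists\,\gamma:[0,T]\to\mathbb{R}^{N}\ \text{absolutely continuous},\ \gamma(0)=x,\ \gamma(T)=y,\ \dot\gamma(t)=\sum_{j=1}^{m}a_{j}(t)\,X_{j}(\gamma(t)),\ \sum_{j=1}^{m}a_{j}(t)^{2}\le 1\Big\},
\]

i.e. the shortest time needed to join the two points along trajectories that stay "subunit" with respect to the family {X_j}.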
Abstract:
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on chip doubles every 24 months. This trend has been possible thanks to the downsizing of the MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional bulk architecture is becoming inadequate to face them. In order to overcome the limitations related to conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny include: devices incorporating materials with properties different from those of silicon, for the channel and the source/drain regions; and new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it permits keeping Short-Channel Effects under control without adopting high doping levels in the channel. Among the solutions proposed in order to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting for the source/drain regions materials with a bandgap different from that of the channel material. This solution allows an increase of the injection velocity of the particles travelling from the source into the channel, and therefore an increase of the performance of the transistor in terms of provided drain current. The first part of this thesis work addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of such an approach in older technologies such as heterojunction bipolar transistors; moreover, the modifications introduced in the Monte Carlo code in order to simulate conduction band discontinuities are described, as well as the simulations performed on one-dimensional simplified structures in order to validate them. Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer features a thermal conductivity that is two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects, which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed in order to model these phenomena.
In order to understand how this problem impacts the performance of different SOI architectures, three-dimensional electrothermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences on self-heating of technological solutions such as raised S/D extension regions or reduction of the fin height are explored as well. Finally, conclusions are drawn in chapter 7.
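As a hedged reminder of the quantity mentioned above (a generic definition, not a formula taken from the thesis), the device thermal resistance is usually extracted as the temperature rise of the hottest point per unit of dissipated power:

\[
R_{\mathrm{th}} = \frac{T_{\max} - T_{\mathrm{amb}}}{P_{\mathrm{diss}}}.
\]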
Abstract:
Improvement of the performance of the mono-compartmental maximum slope model due to the introduction of systems for the elimination of outliers.
Abstract:
Inspired by the need for a representation of the biomass burning emissions injection height in the ECHAM/MESSy Atmospheric Chemistry model (EMAC)