Abstract:
Objective: To compare individuals with and without hyperhidrosis in terms of the intensity of palmar and plantar sweating. Methods: We selected 50 patients clinically diagnosed with palmoplantar hyperhidrosis and 25 normal individuals as controls. We quantified sweating using a portable noninvasive electronic device equipped with relative humidity and temperature sensors to measure transepidermal water loss. All of the individuals had a body mass index of 20-25 kg/m². Subjects remained at rest for 20-30 min before the measurements in order to reduce external interference. The measurements were carried out in a climate-controlled environment (21-24°C), on the hypothenar region of both hands and on the medial plantar region of both feet. Results: In the palmoplantar hyperhidrosis group, the mean transepidermal water loss on the hands and feet was 133.6 ± 51.0 g/m²/h and 71.8 ± 40.3 g/m²/h, respectively, compared with 37.9 ± 18.4 g/m²/h and 27.6 ± 14.3 g/m²/h, respectively, in the control group. The differences between the groups were statistically significant (p < 0.001 for hands and feet). Conclusions: This method proved to be an accurate and reliable tool to quantify palmar and plantar sweating when performed by a trained and qualified professional.
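The group comparison reported above can be illustrated with a quick two-sample test on simulated data; the sample sizes, means and standard deviations come from the abstract, while the normality assumption and the choice of Welch's t-test in scipy are mine, not the study's stated analysis.

```python
# Illustrative only: simulate two groups with the sample sizes, means and
# standard deviations reported in the abstract (hands), then compare them
# with Welch's t-test. The original study's exact test may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Transepidermal water loss on the hands, g/m^2/h
hyperhidrosis = rng.normal(loc=133.6, scale=51.0, size=50)
controls = rng.normal(loc=37.9, scale=18.4, size=25)

t, p = stats.ttest_ind(hyperhidrosis, controls, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2e}")  # p far below 0.001, as reported
```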
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter into the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present modelling. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the stochastic acceleration of relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halo) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average magnetic field in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies, and it allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray-luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations against the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last “geometrical” M_H-R_H correlation allows us to overcome “observationally” the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (P_R-R_H, P_R-M_H, P_R-T, P_R-L_X, ...) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately implies that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
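The cut-off argument invoked above can be sketched with a back-of-the-envelope balance; the notation below (χ for the systematic acceleration rate, β for the loss coefficient) is mine, and the expressions are the standard synchrotron/inverse-Compton relations rather than formulas quoted from the thesis.

```latex
% Balance between systematic turbulent acceleration, dE/dt ~ +chi*E,
% and synchrotron + inverse-Compton losses, dE/dt ~ -beta*E^2,
% fixes the cut-off (maximum) Lorentz factor of the electrons:
%   chi * gamma_c = beta * gamma_c^2   =>   gamma_c = chi / beta .
\begin{align}
  \gamma_c &\simeq \frac{\chi}{\beta}, \qquad
  \beta \propto B^2 + B_{\rm CMB}^2(z), \\
% The corresponding synchrotron break frequency, above which Radio
% Halos fade, then scales as
  \nu_c &\propto \gamma_c^{\,2}\,B
         \propto \frac{\chi^2\,B}{\left[B^2 + B_{\rm CMB}^2(z)\right]^2},
\end{align}
% so nu_c depends steeply on the acceleration efficiency chi: only the
% most efficient mergers yield halos visible at high frequencies, while
% low-frequency surveys (e.g. LOFAR at 150 MHz) probe the more common,
% less efficient events.
```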
Abstract:
BACKGROUND AND PURPOSE We report on workflow and process-based performance measures and their effect on clinical outcome in Solitaire FR Thrombectomy for Acute Revascularization (STAR), a multicenter, prospective, single-arm study of Solitaire FR thrombectomy in large vessel anterior circulation stroke patients. METHODS Two hundred two patients were enrolled across 14 centers in Europe, Canada, and Australia. The following time intervals were measured: stroke onset to hospital arrival, hospital arrival to baseline imaging, baseline imaging to groin puncture, groin puncture to first stent deployment, and first stent deployment to reperfusion. Effects of time of day, general anesthesia use, and multimodal imaging on workflow were evaluated. Patient characteristics and workflow processes associated with prolonged interval times and good clinical outcome (90-day modified Rankin score, 0-2) were analyzed. RESULTS Median times were onset of stroke to hospital arrival, 123 minutes (interquartile range, 163 minutes); hospital arrival to thrombolysis in cerebral infarction (TICI) 2b/3 or final digital subtraction angiography, 133 minutes (interquartile range, 99 minutes); and baseline imaging to groin puncture, 86 minutes (interquartile range, 24 minutes). Time from baseline imaging to puncture was prolonged in patients receiving intravenous tissue-type plasminogen activator (32-minute mean delay) and when magnetic resonance-based imaging at baseline was used (18-minute mean delay). Extracranial carotid disease delayed puncture to first stent deployment time on average by 25 minutes. For each 1-hour increase in stroke onset to final digital subtraction angiography (or TICI 2b/3) time, odds of good clinical outcome decreased by 38%. CONCLUSIONS Interval times in the STAR study reflect current intra-arterial therapy for patients with acute ischemic stroke. Improving workflow metrics can further improve clinical outcome. CLINICAL TRIAL REGISTRATION: URL http://www.clinicaltrials.gov. Unique identifier: NCT01327989.
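The "odds decreased by 38%" result is an odds-ratio statement, and the arithmetic behind it is worth making explicit; the logistic-regression reading below is the conventional interpretation of such a figure, not a detail given in the abstract.

```latex
% "Odds of good outcome decrease by 38% per hour" corresponds to an
% odds ratio per hour of delay of
%   OR = 1 - 0.38 = 0.62 .
% Under the usual logistic-regression reading, delays compound
% multiplicatively, e.g. for a 2-hour delay:
\[
  \mathrm{OR}(2\,\mathrm{h}) = 0.62^{2} \approx 0.38 ,
\]
% i.e. two extra hours from stroke onset to reperfusion cut the odds of
% a 90-day mRS 0--2 outcome to roughly 38% of their original value.
```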
Abstract:
Changes in species composition in two 4-ha plots of lowland dipterocarp rainforest at Danum, Sabah, were measured over ten years (1986 to 1996) for trees greater than or equal to 10 cm girth at breast height (gbh). Each plot included a lower-slope to ridge gradient. The period lay between two drought events of moderate intensity, but the forest showed no large lasting responses, suggesting that its species were well adapted to this regime. Mortality and recruitment rates were not unusual in global or regional comparisons. The forest continued to aggrade from its relatively (for Sabah) low basal area in 1986 and, together with the very open upper canopy structure and an abundance of lianas, this suggests a forest in a late stage of recovery from a major disturbance, yet one continually affected by smaller recent setbacks. Mortality and recruitment rates were not related to population size in 1986, but across subplots recruitment was positively correlated with the density and basal area of small trees (10 to <50 cm gbh) forming the dense understorey. Neither rate was related to topography. While species with larger mean gbh had greater relative growth rates (rgr) than smaller ones, subplot mean recruitment rates were correlated with rgr among small trees. Separating understorey species (typically the Euphorbiaceae) from the overstorey (Dipterocarpaceae) showed marked differences in the change in mortality with increasing gbh: in the former it increased, in the latter it decreased. Forest processes are centred on this understorey quasi-stratum. The two replicate plots showed a high correspondence in the mortality, recruitment, population changes and growth rates of small trees for the 49 most abundant species common to both. Overstorey species had higher rgrs than understorey ones, but both showed considerable ranges in mortality and recruitment rates. The supposed trade-off in traits, viz. slower rgr, shade tolerance and lower population turnover in the understorey group versus faster potential growth rate, high light responsiveness and high turnover in the overstorey group, was only partly met, as some understorey species were also very dynamic. The forest at Danum, under such a disturbance-recovery regime, can be viewed as having a dynamic equilibrium in functional and structural terms. A second trade-off, shade tolerance versus drought tolerance, is suggested among the understorey species. A two-storey (or vertical component) model is proposed in which the understorey-overstorey species' ratio of small stems (currently 2:1) is maintained by a major feedback process. The understorey appears to be an important part of this forest, giving resilience against drought and protecting the overstorey saplings in the long term. This view could be valuable for understanding forest responses to climate change, as drought frequency in Borneo is predicted to intensify in the coming decades.
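Mortality and recruitment rates of the kind compared here are conventionally annualized from census counts; the sketch below uses the standard logarithmic rate formulas, which I am assuming as the likely convention rather than taking from the paper, and the counts are invented.

```python
# Illustrative sketch: annualized mortality and recruitment rates from
# two censuses, using the standard logarithmic formulas
#   m = ln(N0 / S_t) / t * 100   (% per yr; S_t = survivors at time t)
#   r = ln(N_t / S_t) / t * 100  (% per yr; N_t = total alive at time t)
# The numbers below are made up, not taken from the Danum plots.
import math

def annual_mortality(n0: int, survivors: int, years: float) -> float:
    return math.log(n0 / survivors) / years * 100.0

def annual_recruitment(nt: int, survivors: int, years: float) -> float:
    return math.log(nt / survivors) / years * 100.0

n0, survivors, nt, years = 5000, 4200, 4900, 10.0
print(f"mortality   {annual_mortality(n0, survivors, years):.2f} %/yr")
print(f"recruitment {annual_recruitment(nt, survivors, years):.2f} %/yr")
```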
Abstract:
Illumination uniformity of a spherical capsule directly driven by laser beams has been assessed numerically. Laser facilities characterized by N_D = 12, 20, 24, 32, 48 and 60 directions of irradiation, each direction associated with a single laser beam or with a bundle of N_B laser beams, have been considered. The laser beam intensity profile is assumed to be super-Gaussian, and the calculations take into account beam imperfections such as power imbalance and pointing errors. The optimum laser intensity profile, which minimizes the root-mean-square deviation of the capsule illumination, depends on the values of the beam imperfections. Assuming that the N_B beams are statistically independent, it is found that they provide a stochastic homogenization of the laser intensity of the whole bundle, reducing the associated errors by a factor of √N_B, which in turn improves the illumination uniformity of the capsule. Moreover, it is found that the uniformity of the irradiation is almost the same for all facilities and depends only on the total number of laser beams, N_tot = N_D × N_B.
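The √N_B reduction is the usual rms-averaging of independent errors, which a few lines of Monte Carlo make tangible; the beam-error model below (Gaussian power imbalance per beam) is a toy assumption of mine, not the paper's illumination model.

```python
# Toy check of stochastic homogenization: the rms error of a bundle of
# N_B statistically independent beams falls as 1/sqrt(N_B) compared
# with a single beam. Gaussian "power imbalance" errors are assumed.
import numpy as np

rng = np.random.default_rng(42)
sigma_single = 0.05          # 5% rms power imbalance per beam
n_trials = 100_000

for n_beams in (1, 4, 16, 64):
    # Relative error of the bundle = mean of the per-beam errors
    bundle_err = rng.normal(0.0, sigma_single, (n_trials, n_beams)).mean(axis=1)
    print(f"N_B={n_beams:3d}  rms={bundle_err.std():.4f}  "
          f"expected={sigma_single / np.sqrt(n_beams):.4f}")
```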
Abstract:
Diluted nitride self-assembled In(Ga)AsN quantum dots (QDs) grown on GaAs substrates are potential candidates to emit in the windows of maximum transmittance for optical fibres (1.3-1.55 μm). In this paper, we analyse the effect of nitrogen addition on the indium desorption occurring during the capping process of In(x)Ga(1-x)As QDs (x = 1 and 0.7). The samples were grown by molecular beam epitaxy and studied by transmission electron microscopy (TEM) and photoluminescence (PL) techniques. The composition distribution inside the dots was determined by statistical moiré analysis and measured by energy-dispersive X-ray spectroscopy. First, the addition of nitrogen to In(Ga)As QDs gave rise to a strong redshift in the emission peak, together with a large loss of intensity and monochromaticity. Moreover, these samples showed changes in QD morphology as well as an increase in the density of defects. The statistical compositional analysis displayed a normal distribution in InAs QDs, with an average In content of 0.7. Nevertheless, the addition of Ga and/or N leads to a bimodal distribution of the indium content, with two separate QD populations. We suggest that nitrogen incorporation enhances indium fixation inside the QDs, where the indium/gallium ratio plays an important role in this process. The strong redshift observed in the PL can be explained not only by the N incorporation but also by the higher In content inside the QDs.
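The bimodal compositional analysis described above amounts to deciding whether one Gaussian population or two better describes the per-dot In fractions; a hedged sketch with scikit-learn's GaussianMixture (my choice of tool, on synthetic composition data) shows the idea.

```python
# Sketch: decide whether a set of per-dot In fractions is better
# described by one Gaussian population or by two (bimodal), using a
# Gaussian mixture and BIC. Data below are synthetic, not measured.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two hypothetical QD populations with different mean In content
x = np.concatenate([rng.normal(0.55, 0.04, 120),
                    rng.normal(0.75, 0.04, 80)]).reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(x)
    print(f"k={k}  BIC={gm.bic(x):.1f}  means={gm.means_.ravel().round(3)}")
# The lower BIC at k=2 would indicate a bimodal In distribution.
```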
Abstract:
Structuralism is a theory of U.S. constitutional adjudication according to which courts should seek to improve the decision-making process of the political branches of government so as to render it more democratic.1 In the words of John Hart Ely, courts should exercise their judicial-review powers as a ‘representation-reinforcing’ mechanism.2 Structuralism advocates that courts must eliminate the elements of the political decision-making process that are at odds with the structure set out by the authors of the U.S. Constitution. The advantage of this approach, U.S. scholars posit, lies in the fact that it does not require courts to second-guess the policy decisions adopted by the political branches of government. Instead, courts limit themselves to enforcing the constitutional structure within which those decisions must be adopted. Of course, this theory of constitutional adjudication, like all theories, has its shortcomings. For example, detractors of structuralism argue that it is difficult, if not impossible, to draw the dividing line between ‘substantive’ and ‘structural’ matters.3 In particular, they claim that, when identifying the ‘structure’ set out by the authors of the U.S. Constitution, courts necessarily base their determinations not on purely structural principles, but on a set of substantive values, evaluating concepts such as democracy, liberty and equality.4 Without claiming that structuralism should be embraced by the ECJ as the leading theory of judicial review, the purpose of my contribution is to explore how recent case-law reveals that the ECJ has also striven to develop guiding principles which aim to improve the way in which the political institutions of the EU adopt their decisions. In those cases, the ECJ decided not to second-guess the appropriateness of the policy choices made by the EU legislator. Instead, it preferred to examine whether, in reaching an outcome, the EU political institutions had followed the procedural steps mandated by the authors of the Treaties. Stated simply, I argue that judicial deference in relation to ‘substantive outcomes’ has been counterbalanced by a strict ‘process review’. To that effect, I would like to discuss three recent rulings of the ECJ, delivered after the entry into force of the Treaty of Lisbon, where an EU policy measure was challenged indirectly, i.e. via the preliminary reference procedure, namely Vodafone, Volker und Markus Schecke and Test-Achats.5 Whilst in the first case the ECJ ruled that the questions raised by the referring court disclosed no factor of such a kind as to affect the validity of the challenged act, in the latter two cases the challenged provisions of an EU act were declared invalid.
Abstract:
A fundamental goal of education is to equip students with self-regulatory capabilities that enable them to educate themselves. Self-directedness not only contributes to success in formal instruction but also promotes lifelong learning (Bandura, 1997). The area of research on self-regulated learning is well grounded within the framework of psychological literature on motivation, metacognition, strategy use and learning. This study reviewed past research, established the purpose of teaching students to self-regulate their learning, and highlighted the fact that teachers are expected to assume a major role in the learning process. A student reflective writing journal activity was sustained for a period of two semesters in two fourth-grade mathematics classrooms. The reflective writing journals were analyzed to identify the strategies reported by students. Research questions were analyzed using descriptive statistics, frequency counts, cross-tabs and chi-square analyses. Results based on students' use of the journals and on teacher interviews indicated that the use of a reflective writing journal does promote self-regulated learning strategies, to the extent that the student is engaged in the journaling process. Those students identified as highly self-regulated learners on the basis of their strategy use were shown to consistently claim to learn math “as well or better than planned” on a weekly basis. Furthermore, good self-regulators were able to recognize specific strategies that helped them do well and to change their strategies across time based on the planned learning objectives. The perspectives of the participating teachers were examined in order to establish the context in which the students were working. The effect of “planned change” and/or resistance to change, as established in previous research, was also explored from the teachers' point of view. The analysis of the journal data did establish a significant difference between students who utilized homework as a strategy and those who did not. Based on the journals and interviews, this study finds that the systematic use of metacognitive, motivational and/or learning strategies can have a positive effect on students' responsiveness to their learning environment. Furthermore, it suggests that teaching students “how to learn” can be a vital part of the effectiveness of any curriculum.
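The chi-square analyses mentioned above test for association between categorical variables such as strategy use and self-reported learning; the contingency table below is invented purely for illustration, and scipy is my choice of tool, not the study's software.

```python
# Illustrative chi-square test of independence: does reported strategy
# use (e.g. homework) relate to weekly self-reported learning?
# The 2x2 counts are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

#                learned "as well or better"   learned "worse"
table = [[34, 6],    # used homework as a strategy
         [18, 14]]   # did not

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```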
Abstract:
Understanding the natural and forced variability of the atmospheric general circulation and its drivers is one of the grand challenges in climate science. It is of paramount importance to understand to what extent the systematic error of climate models affects the processes driving such variability. This is done by performing a set of simulations (ROCK experiments) with an intermediate-complexity atmospheric model (SPEEDY), in which the Rocky Mountains orography is increased or decreased to influence the structure of the North Pacific jet stream. For each of these modified-orography experiments, the climatic response to idealized sea surface temperature anomalies of varying intensity in the El Niño Southern Oscillation (ENSO) region is studied. ROCK experiments are characterized by variations in the Pacific jet stream intensity whose range encompasses the spread of the systematic error found in Coupled Model Intercomparison Project (CMIP6) models. When forced with idealized ENSO-like anomalies, they exhibit a non-negligible sensitivity in the response pattern over the Pacific North American region, indicating that the model mean state can affect the model response to ENSO. It is found that the classical Rossby wave train response to ENSO is more meridionally oriented when the Pacific jet stream is weaker, and more zonally oriented with a stronger jet. Rossby wave linear theory suggests that a stronger jet implies a stronger waveguide, which traps Rossby waves at a lower latitude, favouring zonal propagation of Rossby waves. The shape of the dynamical response to ENSO affects the ENSO impacts on surface temperature and precipitation over Central and North America. A comparison of the SPEEDY results with CMIP6 models suggests a wider applicability of the results to more resource-demanding climate general circulation models (GCMs), opening the way for future work on the relationship between Pacific jet misrepresentation and the response to external forcing in fully-fledged GCMs.
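The waveguide argument invoked here rests on the stationary Rossby wavenumber; the expressions below are the textbook barotropic relations written in my own notation, not formulas quoted from the thesis.

```latex
% Stationary barotropic Rossby waves on a zonal flow U satisfy
%   K_s^2 = beta_M / U ,
% where beta_M is the meridional gradient of absolute vorticity:
\begin{align}
  K_s = \left( \frac{\beta_M}{U} \right)^{1/2}, \qquad
  \beta_M = \beta - \frac{\partial^2 U}{\partial y^2}.
\end{align}
% A stronger, sharper jet raises beta_M on the jet flanks and keeps a
% real, large K_s along the jet axis: stationary waves are refracted
% toward and trapped within the jet (a stronger waveguide), favouring
% zonal propagation of the ENSO-forced wave train, whereas a weaker jet
% lets the wave train disperse more meridionally.
```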
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Tooth shade results from the interaction between enamel color, enamel translucency and dentine color. A change in any of these parameters will change a tooth's color. The objective of this study was to evaluate the changes occurring in enamel translucency during a tooth whitening process. Fourteen human tooth enamel fragments, with a mean thickness of 0.96 mm (± 0.3 mm), were subjected to a bleaching agent (10% carbamide peroxide) for 8 hours per day over 28 days. The enamel fragment translucency was measured by a computer-controlled spectrophotometer before and after the bleaching agent applications, in accordance with ANSI Z80.3-1986 (American National Standard for Ophthalmics: Nonprescription Sunglasses and Fashion Eyewear - Requirements). The measurements were statistically compared by the non-parametric Mann-Whitney test. A decrease in translucency was observed in all specimens and, consequently, a decrease in the transmittance values of all samples. It was observed that the bleaching procedure significantly changed the enamel translucency, making it more opaque.
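The before/after comparison reported here can be sketched with the test the abstract names; the transmittance values below are invented, and the use of scipy is my choice of tool, not the study's procedure.

```python
# Sketch of the before/after comparison: transmittance readings
# compared with the Mann-Whitney U test named in the abstract. The
# values are invented; bleached enamel is modeled as more opaque.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
before = rng.normal(0.40, 0.05, 14)            # fractional transmittance, n = 14
after = before - rng.normal(0.06, 0.02, 14)    # after bleaching: lower transmittance

u, p = mannwhitneyu(before, after, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```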
Abstract:
Nitric oxide (NO) has been considered a key molecule in inflammation. OBJECTIVE: The aim of this study was to evaluate the effect of treatment with L-NAME and sodium nitroprusside, substances that inhibit and release NO, respectively, on tissue tolerance to endodontic irrigants. MATERIAL AND METHODS: The vital dye exudation method was used in a rat subcutaneous tissue model. Injections of 2% Evans blue were administered intravenously into the dorsal penile vein of 14 male rats (200-300 g). The NO inhibitor and donor substances were injected into the subcutaneous tissue of the dorsal region, forming two groups of animals: G1 was inoculated with L-NAME and G2 with sodium nitroprusside. Both groups received injections of the test endodontic irrigants: acetic acid, 15% citric acid, 17% EDTA-T and saline (control). After 30 min, analysis of the extravasated dye was performed by light absorption spectrophotometry (620 nm). RESULTS: There was a statistically significant difference (p<0.05) between groups 1 and 2 for all irrigants. L-NAME produced a less intense inflammatory reaction, and nitroprusside intensified this process. CONCLUSIONS: Independently of the administration of NO inhibitors and donors, EDTA-T produced the highest irritating potential in vital tissue among the tested irrigating solutions.
Abstract:
We study how the crossover exponent, phi, between directed percolation (DP) and compact directed percolation (CDP) behaves as a function of the diffusion rate in a model that generalizes the contact process. Our conclusions are based on results obtained from perturbative series expansions and numerical simulations, and are consistent with a value phi = 2 for finite diffusion rates and phi = 1 in the limit of infinite diffusion rate.
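For readers outside the percolation literature, the crossover exponent enters through the standard crossover scaling ansatz; the form below is the generic one, in my own notation, not a formula quoted from the paper.

```latex
% Generic crossover scaling: with Delta the distance from criticality
% and w the perturbation taking the system away from CDP toward DP
% behaviour, an observable O scales as
\[
  O(\Delta, w) \;=\; \Delta^{-\sigma}\,
      f\!\left( \frac{w}{\Delta^{\phi}} \right),
\]
% so DP behaviour sets in once w \gtrsim \Delta^{\phi}. A value
% phi = 2 (finite diffusion) means the CDP-like region shrinks
% quadratically with distance from criticality, versus linearly
% for phi = 1 (infinite diffusion).
```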
Abstract:
This work describes the infrared spectroscopic characterization and the charge-compensation dynamics of the supramolecular film FeTPPZFeCN, derived from tetra-2-pyridyl-1,4-pyrazine (TPPZ) and hexacyanoferrate, as well as of the hybrid film formed by FeTPPZFeCN and polypyrrole (PPy). For the supramolecular film, it was found that the anion flux is greater in a K+-containing solution than in a Li+ solution, which seems to be due to the larger crystalline ionic radius of K+. The electroneutralization process is discussed in terms of electrostatic interactions between the cations and the metallic centers in the hosting matrix. The nature of the charge-compensation process differs from that of other modified electrodes based on Prussian blue films, where only cations such as K+ participate in electroneutralization. In the case of the FeTPPZFeCN/PPy hybrid film, the magnitude of the anion flux also depends on the identity of the anion in the supporting electrolyte.
Abstract:
Shot peening is a cold-working mechanical process in which a shot stream is propelled against a component surface. Its purpose is to introduce compressive residual stresses on component surfaces in order to increase fatigue resistance. The process is widely applied to springs because of the cyclic loads they must withstand. This paper presents a numerical model of the shot peening process using the finite element method. The results are compared with experimental measurements of the residual stresses, obtained by the X-ray diffraction technique, in leaf springs submitted to this process. Furthermore, the results are compared with empirical and numerical correlations developed by other authors.