Abstract:
Latin America is characterized by ethnic, geographical, cultural, and economic diversity; training in gastroenterology in the region must therefore be considered in this context. Medical education on the continent is characterized by a lack of standards, and the volume of research remains relatively small. There is a multiplicity of events in general gastroenterology and in its sub-disciplines, at both regional and local levels, which ensures that many colleagues have access to information. Medical education programs must be based on a clinical vision and involve close contact with patients. The programs should be properly supervised, appropriately defined, and evaluated on a regular basis. The disparity between patients' needs, the scarce resources available, and the pressures exerted on doctors by health systems are frequently cited by those who complain of poor professionalism. Teaching development can play a critical role in ensuring the quality of teaching and learning in universities. Continuing professional development activities must be planned on the basis of doctors' needs, with clearly defined objectives and learning methodologies suited to adults. They must be evaluated and accredited by a competent body, so that they may become the basis of a professional regulatory system. The specialty has made progress in recent decades, offering doctors various possibilities for professional development. The World Gastroenterology Organisation has contributed to the specialty through three distinctive but closely inter-related programs, in which Latin America is deeply involved: Training Centers, Train-the-Trainers, and Global Guidelines.
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMDs) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis owing to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, owing to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t). If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase of the random errors in SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m−M)₀ ~ 0.02 mag and ΔE(B−V) ~ 0.01 mag, for each individual field over the LMC. The propagation of these errors into SFR(t) implies systematic errors below 30%. This level of accuracy in SFR(t) can reveal significant imprints of the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
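To make the SFH-recovery step concrete, here is a minimal sketch of the kind of chi-squared-like minimisation involved: the observed CMD is binned into a Hess diagram and fitted by a non-negative combination of "partial model" diagrams, one per age bin. All array shapes and data below are synthetic placeholders, not VMC products.

```python
# Minimal sketch of CMD-based SFH recovery, assuming the observed CMD and a set
# of "partial model" CMDs (one per age bin, for unit star-formation rate) are
# already binned into Hess diagrams. All names and shapes are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_bins, n_ages = 500, 8                      # CMD cells, age bins
partials = rng.random((n_ages, n_bins))      # model Hess diagrams per age bin
true_sfr = rng.random(n_ages)
observed = rng.poisson(true_sfr @ partials)  # synthetic "observed" Hess diagram

def chi2_like(sfr):
    """Poisson-aware chi-squared-like statistic between model and data."""
    model = np.clip(sfr @ partials, 1e-9, None)
    return np.sum((model - observed) ** 2 / model)

res = minimize(chi2_like, x0=np.ones(n_ages),
               bounds=[(0, None)] * n_ages)   # SFR(t) must be non-negative
print("recovered SFR per age bin:", res.x.round(2))
```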
Abstract:
The local atomic structures around the Zr atom in pure (undoped) ZrO₂ nanopowders with different average crystallite sizes, ranging from 7 to 40 nm, have been investigated. The nanopowders were synthesized by different wet-chemical routes, but all exhibit the high-temperature tetragonal phase stabilized at room temperature, as established by synchrotron radiation X-ray diffraction. The extended X-ray absorption fine structure (EXAFS) technique was applied to analyze the local structure around the Zr atoms. Several authors have studied this system using the EXAFS technique without obtaining good agreement between crystallographic and EXAFS data. In this work, it is shown that the local structure of ZrO₂ nanopowders can be described by a model consisting of two oxygen subshells (4 + 4 atoms) with different Zr-O distances, in agreement with those independently determined by X-ray diffraction. However, the EXAFS study shows that the second oxygen subshell exhibits a Debye-Waller (DW) parameter much higher than that of the first oxygen subshell, a result that cannot be explained by the crystallographic model accepted for the tetragonal phase of zirconia-based materials. As proposed by other authors, the difference in the DW parameters between the two oxygen subshells around the Zr atoms can instead be explained by the existence of oxygen displacements perpendicular to the z axis; these mainly affect the second oxygen subshell because of the directional character of the EXAFS DW parameter, in contrast with the crystallographic value. It is also established that this model is similar to another model having three oxygen subshells, with a 4 + 2 + 2 distribution of atoms and a single DW parameter for all oxygen subshells. Both models are in good agreement with the crystal structure determined by X-ray diffraction experiments.
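For reference, shell fits of this kind rest on the standard single-scattering EXAFS equation, stated below in generic form (not the paper's actual fitting code); the 4 + 4 model corresponds to two oxygen terms j = 1, 2 with N₁ = N₂ = 4 and distinct distances R_j and DW factors σ_j².

```latex
% Standard single-scattering EXAFS equation used in shell-by-shell fits;
% for the 4+4 model the sum runs over two oxygen subshells j, and the fits
% described above find sigma_2^2 much larger than sigma_1^2.
\chi(k) = S_0^2 \sum_j \frac{N_j}{k R_j^2}\, f_j(k)\,
          e^{-2k^2\sigma_j^2}\, e^{-2R_j/\lambda(k)}\,
          \sin\!\bigl(2kR_j + \delta_j(k)\bigr)
```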
Abstract:
We describe an estimation technique for biomass burning emissions in South America based on a combination of remote-sensing fire products and field observations: the Brazilian Biomass Burning Emission Model (3BEM). For each fire pixel detected by remote sensing, the mass of the emitted tracer is calculated from field observations of fire properties related to the type of vegetation burning. The burnt area is estimated from the instantaneous fire size retrieved by remote sensing, when available, or from statistical properties of the burn scars. The sources are then spatially and temporally distributed and assimilated daily by the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS) in order to forecast the related tracer concentrations. Three other biomass burning inventories, including GFEDv2 and EDGAR, are used simultaneously to compare the emission strengths in terms of the resulting tracer distributions. We also assess the effect of the daily time resolution of the fire emissions by including runs with monthly-averaged emissions. We evaluate the performance of the model under the different emission estimation techniques by comparing the model results with direct measurements of carbon monoxide, both near the surface and airborne, as well as with remote-sensing derived products. The model results obtained using the 3BEM methodology introduced in this paper show relatively good agreement with the direct measurements and the MOPITT data product, suggesting the reliability of the model at local to regional scales.
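Although the abstract does not spell out the per-pixel arithmetic, emission models of this family typically follow the Seiler-Crutzen formulation: emitted mass = burnt area × above-ground biomass density × combustion fraction × emission factor. A hedged sketch follows, with placeholder parameter values rather than 3BEM's actual vegetation look-up tables.

```python
# Sketch of a per-fire-pixel emission estimate in the spirit of 3BEM, using the
# widely adopted Seiler-Crutzen formulation; the numeric values below are
# illustrative placeholders, not the model's actual look-up tables.
def fire_pixel_emission(burnt_area_m2, veg_type):
    """Mass of emitted CO (kg) for one detected fire pixel."""
    # Per-vegetation parameters: above-ground biomass density (kg/m2),
    # combusted fraction, and CO emission factor (g per kg of dry matter burnt).
    params = {
        "forest":  dict(biomass=16.0, comb_frac=0.50, ef_co=104.0),
        "savanna": dict(biomass=0.7,  comb_frac=0.95, ef_co=65.0),
    }
    p = params[veg_type]
    dry_matter_burnt = burnt_area_m2 * p["biomass"] * p["comb_frac"]  # kg
    return dry_matter_burnt * p["ef_co"] / 1000.0                     # kg of CO

print(fire_pixel_emission(22e4, "forest"))  # e.g. a ~22 ha burn scar
```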
Abstract:
We present precise tests of CP and CPT symmetry based on the full data set of K → ππ decays collected by the KTeV experiment at Fermi National Accelerator Laboratory during 1996, 1997, and 1999. This data set contains 16×10⁶ K → π⁰π⁰ and 69×10⁶ K → π⁺π⁻ decays. We measure the direct CP violation parameter Re(ε′/ε) = (19.2 ± 2.1)×10⁻⁴. We find the K_L − K_S mass difference Δm = (5270 ± 12)×10⁶ ℏ s⁻¹ and the K_S lifetime τ_S = (89.62 ± 0.05)×10⁻¹² s. We also measure several parameters that test CPT invariance. We find the difference between the phase of the indirect CP violation parameter ε and the superweak phase: φ_ε − φ_SW = (0.40 ± 0.56)°. We measure the difference of the relative phases between the CP-violating and CP-conserving decay amplitudes for K → π⁺π⁻ (φ₊₋) and for K → π⁰π⁰ (φ₀₀): Δφ = (0.30 ± 0.35)°. From these phase measurements, we place a limit on the mass difference between K⁰ and K̄⁰: ΔM < 4.8×10⁻¹⁹ GeV/c² at 95% C.L. These results are consistent with those of other experiments, with our own earlier measurements, and with CPT symmetry.
Abstract:
Parity (P)-odd domains, corresponding to nontrivial topological solutions of the QCD vacuum, might be created during relativistic heavy-ion collisions. These domains are predicted to lead to charge separation of quarks along the orbital momentum of the system created in noncentral collisions. To study this effect, we investigate a three-particle mixed-harmonics azimuthal correlator which is a P-even observable, but directly sensitive to the charge-separation effect. We report measurements of this observable using the STAR detector in Au+Au and Cu+Cu collisions at √s_NN = 200 and 62 GeV. The results are presented as a function of collision centrality, particle separation in rapidity, and particle transverse momentum. A signal consistent with several of the theoretical expectations is detected in all four data sets. We compare our results to the predictions of existing event generators and discuss in detail possible contributions from other effects that are not related to P violation.
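For concreteness, the correlator in question has the mixed-harmonics form ⟨cos(φ_a + φ_b − 2φ_c)⟩, where φ_a and φ_b are the azimuthal angles of the charge-selected pair and φ_c that of a third reference particle. A minimal single-event sketch with synthetic angles and no efficiency or flow corrections:

```python
# Minimal sketch of the mixed-harmonics correlator <cos(phi_a + phi_b - 2*phi_c)>
# for one event: phi_a, phi_b run over the charge-selected pairs and phi_c over
# the reference particles. Dividing by the reference-particle v2 then estimates
# <cos(phi_a + phi_b - 2*Psi_RP)>. Inputs are illustrative random angles.
import numpy as np
from itertools import combinations

def three_particle_correlator(phi_pair, phi_ref):
    terms = [np.cos(a + b - 2.0 * c)
             for a, b in combinations(phi_pair, 2)
             for c in phi_ref]
    return np.mean(terms)

rng = np.random.default_rng(1)
phi_same_charge = rng.uniform(0, 2 * np.pi, 30)   # e.g. positive hadrons
phi_reference   = rng.uniform(0, 2 * np.pi, 100)  # all charged hadrons
print(three_particle_correlator(phi_same_charge, phi_reference))
```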
Abstract:
Parity-odd domains, corresponding to nontrivial topological solutions of the QCD vacuum, might be created during relativistic heavy-ion collisions. These domains are predicted to lead to charge separation of quarks along the system's orbital momentum axis. We investigate a three-particle azimuthal correlator which is a P-even observable, but directly sensitive to the charge-separation effect. We report measurements of charged hadrons near center-of-mass rapidity with this observable in Au+Au and Cu+Cu collisions at √s_NN = 200 GeV using the STAR detector. A signal consistent with several of the theoretical expectations is detected. We discuss possible contributions from other effects that are not related to parity violation.
Abstract:
The objective of this paper is two-fold. First, we develop a local and global (in time) well-posedness theory for a system describing the motion of two fluids with different densities under capillary-gravity waves in a deep-water flow (namely, a Schrödinger-Benjamin-Ono system) for low-regularity initial data, in both the periodic and continuous cases. Second, we exhibit a family of new periodic traveling waves for the Schrödinger-Benjamin-Ono system: by fixing a minimal period we obtain, via the implicit function theorem, a smooth branch of periodic solutions bifurcating from a Jacobian elliptic function called dnoidal; moreover, we prove that all these periodic traveling waves are nonlinearly stable under perturbations with the same wavelength.
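For concreteness, one common normalisation of the Schrödinger-Benjamin-Ono system studied in works of this kind is sketched below; the precise coefficients and scalings vary between papers, so this should be read as an assumed representative form rather than the authors' exact system.

```latex
% One common normalisation of the Schrodinger-Benjamin-Ono system: u is the
% (complex) short-wave envelope, v the (real) long-wave profile, H the Hilbert
% transform, and alpha, beta, gamma real coupling constants.
i\,\partial_t u + \partial_x^2 u = \alpha\, u v, \qquad
\partial_t v + \gamma\, \mathcal{H}\,\partial_x^2 v = \beta\, \partial_x\bigl(|u|^2\bigr)
```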
Abstract:
This article analyzes the Brazilian political system from the local perspective. Following Cox (1997), we review the problems with electoral coordination that emerge from a given institutional framework. Due to the characteristics of the Brazilian Federal system and its electoral rules, linkage between the three levels of government is not guaranteed a priori, but demands a coordinating effort by the parties' leadership. According to our hypothesis, the parties are capable of coordinating their election strategies at different levels in the party system. Regression models based on two-stage least squares (2SLS) and TOBIT, analyzing a panel of Brazilian municipalities with data from the 1994 and 2000 elections, show that the proportion of votes received by a party in a given election correlates closely with its previous votes in majoritarian elections. Despite institutional incentives, the Brazilian party system shows evidence that it is organized nationally to the extent that it links the competition for votes at the three levels of government (National, State, and Municipal).
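As a concrete illustration of the estimation strategy, the following is a minimal, self-contained 2SLS sketch on synthetic data; the variables are hypothetical stand-ins, not the study's actual electoral covariates or instruments.

```python
# Minimal two-stage least squares (2SLS) sketch with synthetic data. Stage 1
# projects the endogenous regressor on the instruments; stage 2 regresses the
# outcome on the fitted values, removing the confounding bias that plain OLS
# would suffer. All variable names are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=(n, 2))                    # instruments
u = rng.normal(size=n)                         # unobserved confounder
x = z @ [0.8, -0.5] + u + rng.normal(size=n)   # endogenous regressor
y = 1.5 * x + 2.0 * u + rng.normal(size=n)     # outcome (true effect: 1.5)

Z = np.column_stack([np.ones(n), z])           # stage-1 design (with intercept)
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

X2 = np.column_stack([np.ones(n), x_hat])      # stage-2 design
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
print("2SLS estimate of the effect of x on y:", beta[1])  # ~1.5; OLS is biased
```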
Abstract:
The aim of this study is to describe the changes in nursing education during the process prior to and after the establishment of democracy in Spain. It starts from the hypothesis that differences in social and political organization influenced the way the system of nursing education evolved, keeping it in line with neopositivistic schemes and exclusively technical approaches until the advent of democracy. The evolution of a specific profile for nursing within the educational system has been shaped by the relationship between the systems of social and political organization in Spain. To examine the insertion of subjects such as the anthropology of healthcare into education programs for Spanish nursing, one must consider the cultural, intercultural, and transcultural factors that are key to understanding the changes in nursing education that allowed for the adoption of a holistic approach in the curricula. Until the arrival of democracy, Spanish nursing education was solely technical in nature, and the role of nurses was limited to the tasks and procedures defined by the bureaucratic thinking characteristic of the rational-technological paradigm; throughout this long period, nursing in Spain remained under the influence of neopositivistic and technical thinking, which shaped the educational curricula. The addition of humanities and anthropology to the curricula, which facilitated a holistic approach, occurred once nursing became a field of study at the university level in 1977, a period that coincided with the beginnings of democracy in Spain.
Abstract:
Exposure to mercury at nanomolar levels affects cardiac function, but its effects on vascular reactivity have yet to be investigated. Pressor responses to phenylephrine (PHE) were investigated in perfused rat tail arteries before and after treatment with 6 nM HgCl2 for 1 h, in the presence (E+) and absence (E−) of endothelium, and after treatment with L-NAME (10⁻⁴ M), indomethacin (10⁻⁵ M), enalaprilate (1 μM), tempol (1 μM), or deferoxamine (300 μM). HgCl2 increased the sensitivity (pD2) without modifying the maximum response (Emax) to PHE, but the pD2 increase was abolished after endothelial damage. L-NAME treatment increased pD2 and Emax; in the presence of HgCl2, however, this increase was smaller and Emax was not modified. After indomethacin treatment, the increase of pD2 induced by HgCl2 was maintained. Enalaprilate, tempol, and deferoxamine reversed the increase of pD2 evoked by HgCl2. HgCl2 increased angiotensin converting enzyme (ACE) activity, explaining the result obtained with enalaprilate. The results suggest that, at nanomolar concentrations, HgCl2 increases the vascular reactivity to PHE. This response is endothelium mediated and involves a reduction of NO bioavailability and the action of reactive oxygen species. Local ACE participates in the actions of mercury, which depend on angiotensin II generation.
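For readers unfamiliar with the notation: pD2 is the negative base-10 logarithm of the agonist concentration producing a half-maximal response (EC50), so a higher pD2 means higher sensitivity. A minimal sketch of how such a value is extracted from a concentration-response curve, with synthetic placeholder data, follows.

```python
# Sketch of how a pD2 value (-log10 of the EC50, the agonist concentration
# giving a half-maximal response) is obtained from a concentration-response
# curve; the data points below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, emax, log_ec50, slope):
    """Sigmoid (Hill) concentration-response model on a log-concentration axis."""
    return emax / (1.0 + 10.0 ** ((log_ec50 - log_conc) * slope))

log_phe = np.array([-8, -7, -6, -5, -4], dtype=float)  # log10 [PHE] (M)
response = np.array([4.0, 18.0, 55.0, 88.0, 97.0])     # % of maximal contraction

(emax, log_ec50, slope), _ = curve_fit(hill, log_phe, response,
                                       p0=[100.0, -6.0, 1.0])
print(f"Emax = {emax:.1f} %, pD2 = {-log_ec50:.2f}")   # higher pD2 = more sensitive
```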
Abstract:
The computational design of a composite whose constituent properties change gradually within a unit cell can be successfully achieved by means of a material design method that combines topology optimization with homogenization. This is an iterative numerical method, which modifies the composite material unit cell until the desired properties (or performance) are obtained. The method has been applied to several types of materials in recent years. In this work, the objective is to extend the material design method to obtain functionally graded material architectures, i.e. materials that are graded at the local (e.g. microstructural) level. Consistent with this goal, a continuum distribution of the design variable inside the finite element domain is considered, to represent a fully continuous material variation during the design process. The topology optimization thus naturally leads to a smoothly graded material system. To illustrate the theoretical and numerical approaches, numerical examples are provided. The homogenization method is verified by considering one-dimensional material gradation profiles for which analytical solutions for the effective elastic properties are available. The verification of the homogenization method is extended to two dimensions considering a trigonometric material gradation and a material variation with discontinuous derivatives. These are also used as benchmark examples to verify the optimization method for functionally graded material cell design. Finally, the influence of material gradation on extreme materials is investigated, including materials with near-zero shear modulus and materials with negative Poisson's ratio.
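The one-dimensional verification mentioned above has a compact analytical counterpart: for gradation along the loading direction the effective modulus is the harmonic mean of E(x) (Reuss-type bound), and for gradation transverse to it the arithmetic mean (Voigt-type bound). A short sketch with an illustrative trigonometric profile, not necessarily the paper's exact benchmark:

```python
# 1-D check: for a unit cell with modulus E(x) varying along the loading
# direction (a "series" arrangement), the effective modulus is the harmonic
# mean of E(x); transverse gradation ("parallel") gives the arithmetic mean.
# The trigonometric profile here is illustrative.
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
E = 2.0 + np.sin(2.0 * np.pi * x)          # smooth gradation, E(x) > 0

E_series = 1.0 / np.mean(1.0 / E)          # harmonic mean (uniform stress)
E_parallel = np.mean(E)                    # arithmetic mean (uniform strain)
print(f"series (Reuss): {E_series:.4f}, parallel (Voigt): {E_parallel:.4f}")
# analytic check for the series case: integral of 1/(2 + sin 2*pi*x) is
# 1/sqrt(3), so the harmonic mean is sqrt(3)
print("analytic series value:", np.sqrt(3.0))
```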
Abstract:
In this work, a broad analysis of local search multiuser detection (LS-MUD) for direct sequence/code division multiple access (DS/CDMA) systems under multipath channels is carried out, considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, Eb/N0, the near-far effect, the number of fingers of the Rake receiver, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) versus complexity trade-off is carried out among LS, the genetic algorithm (GA), and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations needed to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, its deterministic strategy and absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem.
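To make the "simplified cost function" idea concrete, here is a minimal sketch of a greedy 1-opt local search for synchronous CDMA detection. It maximises the standard log-likelihood metric f(b) = 2bᵀy − bᵀRb over b ∈ {−1,+1}ᴷ and, in the spirit of the simplified LS variants, evaluates only the closed-form cost change of each candidate bit flip rather than recomputing f from scratch; the scenario data are illustrative, not the paper's simulation setup.

```python
# Greedy 1-opt local-search multiuser detector for synchronous CDMA,
# maximising f(b) = 2 b^T y - b^T R b (y: matched-filter outputs,
# R: code cross-correlation matrix). Flipping bit k changes f by
# delta_k = 4*(b_k * ((R b)_k - y_k) - R_kk), an O(K) evaluation.
import numpy as np

rng = np.random.default_rng(3)
K = 8
S = rng.choice([-1.0, 1.0], size=(32, K)) / np.sqrt(32)  # unit-norm codes
R = S.T @ S                                              # correlation matrix
b_true = rng.choice([-1.0, 1.0], size=K)
y = R @ b_true + 0.1 * rng.normal(size=K)                # matched-filter bank

b = np.sign(y)                        # start from the conventional detector
b[b == 0] = 1.0
improved = True
while improved:
    deltas = 4.0 * (b * (R @ b - y) - np.diag(R))  # cost change per bit flip
    k = np.argmax(deltas)
    improved = deltas[k] > 1e-12
    if improved:
        b[k] = -b[k]                  # greedy 1-opt move

print("bit errors vs transmitted:", int(np.sum(b != b_true)))
```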
Abstract:
Recently, the development of industrial processes has brought about technologically complex systems. This development generated the need for research on mathematical techniques capable of dealing with project complexity and validation. Fuzzy models have been receiving particular attention in the area of nonlinear systems identification and analysis due to their capacity to approximate nonlinear behavior and to deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules that give local linear representations of an underlying system. Such models can approximate a wide class of nonlinear systems. In this paper, a performance analysis of a system based on a TS fuzzy inference system for the calibration of electronic compass devices is considered. The contribution of the evaluated TS fuzzy inference system is to reduce the error obtained in data acquisition from a digital electronic compass. For the reliable operation of the TS fuzzy inference system, adequate error measurements must be taken, and the measurement noise must be filtered before the TS fuzzy inference system is applied. The proposed method reduced the total error by 57% in the tests considered.
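A minimal sketch of first-order Takagi-Sugeno inference may help fix ideas; the memberships and local linear consequents below are illustrative placeholders, not the calibration rules actually identified in the paper.

```python
# Minimal first-order Takagi-Sugeno inference for a scalar input (e.g. a raw
# compass heading error): each rule pairs a fuzzy membership on the input with
# a local *linear* consequent, and the output is the membership-weighted
# average of the local models. All numbers are illustrative.
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centred at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ts_inference(x):
    # Rule 1: IF x is "small" THEN y = 0.10*x + 0.5
    # Rule 2: IF x is "large" THEN y = 0.90*x - 2.0
    w = np.array([gauss(x, c=0.0, s=2.0), gauss(x, c=10.0, s=4.0)])
    y_local = np.array([0.10 * x + 0.5, 0.90 * x - 2.0])
    return float(np.dot(w, y_local) / np.sum(w))   # weighted-average defuzzification

for x in (0.0, 5.0, 10.0):
    print(x, "->", round(ts_inference(x), 3))
```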
Abstract:
The recent claim that the exit probability (EP) of a slightly modified version of the Sznajd model is a continuous function of the initial magnetization is questioned. This result was obtained analytically and confirmed by Monte Carlo simulations, simultaneously and independently, by two different groups (EPL, 82 (2008) 18006; 18007). It stands at odds with an earlier result which yielded a step function for the EP (Europhys. Lett., 70 (2005) 705). The dispute is investigated by proving that the continuous shape of the EP is a direct outcome of a mean-field treatment in the analytical result. As such, it is most likely caused by finite-size effects in the simulations. The improbable alternative would be a signature of the irrelevance of fluctuations in this system. Indeed, evidence is provided in support of the stepwise shape holding beyond the mean-field level. These findings yield new insight into the physics of one-dimensional systems with respect to the validity of a true equilibrium state when using solely local update rules. The suitability and significance of performing numerical simulations in those cases are discussed. To conclude, a great deal of caution is required when applying update rules to describe any system, especially social systems.
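A minimal Monte Carlo sketch of how such an exit probability is estimated numerically is given below. The chain length, pair-update rule, and run counts are illustrative toys (and, as the abstract notes, finite-size effects are exactly what is at issue here), not a reproduction of either group's simulations.

```python
# Monte Carlo sketch of the exit probability E(p) for a 1-D Sznajd-type chain:
# a randomly chosen agreeing pair imposes its opinion on its two outer
# neighbours, and E(p) is the fraction of runs reaching the all +1 consensus
# when a fraction p of spins starts at +1.
import numpy as np

def exit_probability(p, n=50, runs=100, rng=np.random.default_rng(4)):
    hits = 0
    for _ in range(runs):
        s = np.where(rng.random(n) < p, 1, -1)
        while abs(s.sum()) != n:              # iterate until consensus
            i = rng.integers(0, n)            # pair (i, i+1) on a periodic chain
            j = (i + 1) % n
            if s[i] == s[j]:                  # agreeing pair convinces neighbours
                s[(i - 1) % n] = s[i]
                s[(j + 1) % n] = s[i]
        hits += s[0] == 1
    return hits / runs

for p in (0.3, 0.5, 0.7):
    print(p, exit_probability(p))
```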