943 results for Re-making


Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

Pós-graduação em Letras - FCLAS

Relevance:

30.00%

Publisher:

Abstract:

The use of stones to crack open encapsulated fruit is widespread among wild bearded capuchin monkeys (Cebus libidinosus) inhabiting savanna-like environments. Some populations in Serra da Capivara National Park (Piaui, Brazil), though, exhibit a seemingly broader toolkit, using wooden sticks as probes and employing stone tools for a variety of purposes. Over the course of 701.5 hr of visual contact with two wild capuchin groups we recorded 677 tool use episodes. Five hundred and seventeen of these involved the use of stones, and 160 involved the use of sticks (or other plant parts) as probes to access water, arthropods, or the contents of insects' nests. Stones were mostly used as "hammers": not only to open fruit or seeds, or smash other food items, but also to break dead wood, conglomerate rock, or cement in search of arthropods, to dislodge bigger stones, and to pulverize embedded quartz pebbles (licking, sniffing, or rubbing the body with the powder produced). Stones also were used in a "hammer-like" fashion to loosen the soil for digging out roots and arthropods, and sometimes as "hoes" to pull the loosened soil. In a few cases, we observed the re-utilization of stone tools for different purposes (N = 3), or the combined use of two tools, stones and sticks (N = 4) or two stones (N = 5), as sequential or associative tools. On three occasions, the monkeys used smaller stones to loosen bigger quartz pebbles embedded in conglomerate rock, which were subsequently used as tools. These could be considered the first reports of secondary tool use by wild capuchin monkeys. Am. J. Primatol. 71:242-251, 2009. (c) 2008 Wiley-Liss, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Basalts of the Parana continental flood basalt (PCFB) province erupted through dominantly Proterozoic continental crust during the Cretaceous. In order to examine the mantle source(s) of this major flood basalt province, we studied Os, Sr, Nd, and Pb isotope systematics, and highly siderophile element (HSE) abundances in tholeiitic basalts that were carefully chosen to show the minimal effects of crustal contamination. These basalts define a precise Re-Os isochron with an age of 131.6 +/- 2.3 Ma and an initial Os-187/Os-188 of 0.1295 +/- 0.0018 (gamma Os-187 = +2.7 +/- 1.4). This initial Os isotopic composition is considerably more radiogenic than estimates of the contemporary Depleted Mantle (DM). The fact that the Re-Os data define a well constrained isochron with an age similar to Ar-40/Ar-39 age determinations, despite generally low Os concentrations, is consistent with closed-system behavior for the HSE. Neodymium, Sr, and Pb isotopic data suggest that the mantle source of the basalts had been variably hybridized by melts derived from enriched mantle components. To account for the combined Os, Nd, Sr, and Pb isotopic characteristics of these rocks, we propose that the primary melts formed from metasomatized asthenospheric mantle (represented by arc-mantle peridotite) that underwent mixing with two enriched components, EM-I and EM-II. The different enriched components are reflected in minor isotopic differences between basalts from southern and northern portions of the province. The Tristan da Cunha hotspot has been previously suggested to be the cause of the Parana continental flood basalt magmatism. However, present-day Tristan da Cunha lavas have much higher Os-187/Os-188 isotopic compositions than the source of the PCFB. These data, together with other isotopic and elemental data, preclude making a definitive linkage between the Tristan plume and the PCFB. (C) 2012 Elsevier B.V. All rights reserved.
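The Re-Os isochron age quoted above follows from the standard decay relation: the isochron slope equals e^(λt) − 1, where λ is the 187Re decay constant. A minimal sketch of that arithmetic, assuming the commonly adopted value λ ≈ 1.666 × 10⁻¹¹ yr⁻¹ (the function names are illustrative, not from the paper):

```python
# Sketch: relating a Re-Os isochron slope to an age.
import math

LAMBDA_RE187 = 1.666e-11  # 187Re decay constant, 1/yr (commonly adopted value)

def isochron_slope(age_yr):
    """Slope of a Re-Os isochron for a given age: slope = exp(lambda * t) - 1."""
    return math.exp(LAMBDA_RE187 * age_yr) - 1.0

def isochron_age(slope):
    """Invert the slope back to an age in years."""
    return math.log(1.0 + slope) / LAMBDA_RE187

# The ~131.6 Ma age corresponds to a very shallow slope, since lambda * t << 1:
slope = isochron_slope(131.6e6)
age = isochron_age(slope)
```

Because λt is so small for a Cretaceous age, a precise isochron requires a substantial spread in Re/Os ratios, which is why the well-constrained fit despite low Os concentrations supports closed-system behavior.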

Relevance:

30.00%

Publisher:

Abstract:

The studies in the present thesis focus on post-decision processes, using the theoretical framework of Differentiation and Consolidation Theory. The thesis consists of three studies. In all of them, pre-decision evaluations are compared with post-decision evaluations in order to explore differences in the evaluation of decision alternatives before and after a decision. The main aim of the studies was to describe and gain a clearer understanding of how people re-evaluate information after a decision whose outcome they have experienced. The studies examine how the attractiveness evaluations of important attributes are restructured from the pre-decision to the post-decision phase, particularly the restructuring of value conflicts. Value conflict attributes are those in which information speaks against the chosen alternative. The first study investigates an important real-life decision and illustrates different post-decision (consolidation) processes following the decision. The second study tests whether decisions with value conflicts follow the same consolidation (post-decision restructuring) processes when the conflict is controlled experimentally as in earlier studies of less controlled real-life decisions. The third study investigates consolidation and value conflicts in decisions in which the consequences are controlled and of different magnitudes. The studies in the present thesis show how attractiveness restructuring of attributes in conflict occurs in the post-decision phase. Results from the three studies indicated that attractiveness restructuring of attributes in conflict was stronger for important real-life decisions (Study 1) and in situations in which real consequences followed a decision (Study 3) than in more controlled, hypothetical decision situations (Study 2).
Finally, some proposals for future research are suggested, including studies of the effects of outcomes and consequences on consolidation of prior decisions and how a decision maker’s involvement affects his or her pre- and post-decision processes.

Relevance:

30.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and mergers between smaller units. During merger events, shocks driven by the gravity of the dark matter propagate through the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions: • Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters? • How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency? • How many Radio Halos are expected to form in the Universe?
At which redshift is the bulk of these sources expected? • Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters? • Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations? Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM, and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. The processes of stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are then computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability to form Radio Halos as a function of cluster mass and redshift.
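The Press & Schechter machinery referenced above rests on the collapsed-mass fraction F(>M) = erfc(δ_c / (√2 σ(M))). A toy sketch of that relation follows, with a purely illustrative power-law σ(M) standing in for a real CDM spectrum; all parameter values here are made up for illustration and are not the thesis's numbers:

```python
# Toy Press-Schechter collapsed fraction: F(>M) = erfc(delta_c / (sqrt(2) * sigma(M))).
import math

DELTA_C = 1.686  # linear-theory spherical-collapse threshold

def sigma(mass, sigma8=0.8, m8=1.0e14, alpha=0.25):
    # Illustrative power-law rms mass fluctuation; m8 and alpha are made-up values.
    return sigma8 * (mass / m8) ** (-alpha)

def collapsed_fraction(mass, growth=1.0):
    """Fraction of cosmic mass in halos above `mass`; `growth` rescales sigma with epoch."""
    return math.erfc(DELTA_C / (math.sqrt(2.0) * sigma(mass) * growth))
```

Even in this toy form the key qualitative behavior survives: the collapsed fraction falls steeply with mass, which is why massive clusters (the natural Radio Halo hosts) are rare and strongly evolving with redshift.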
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency, and it makes it possible to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last "geometrical" MH-RH correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a powerful new tool of investigation, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, . . . ) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
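The synchrotron cut-off logic invoked throughout the abstract can be made concrete with the textbook characteristic frequency ν_c ≈ (3/2) γ² ν_L sin α, where ν_L = eB/(2π m_e c) ≈ 2.8 Hz per µG is the non-relativistic electron gyrofrequency. A minimal sketch (helper names are illustrative, not from the thesis):

```python
# Characteristic synchrotron frequency: a cut-off in the electron Lorentz factor
# gamma maps directly onto a maximum observable radio frequency.
NU_L_PER_MICROGAUSS = 2.8  # non-relativistic gyrofrequency, Hz per microGauss

def critical_frequency_hz(gamma, b_micro_gauss, sin_alpha=1.0):
    """nu_c ~ (3/2) * gamma^2 * nu_L * sin(alpha) for electrons of Lorentz factor gamma."""
    return 1.5 * gamma**2 * NU_L_PER_MICROGAUSS * b_micro_gauss * sin_alpha

# Electrons at gamma ~ 1e4 in a 1 microGauss field radiate near 4.2e8 Hz,
# i.e. a few hundred MHz, which is why a cut-off in gamma makes low-frequency
# surveys (e.g. LOFAR at 150 MHz) sensitive to less efficient acceleration.
nu = critical_frequency_hz(1.0e4, 1.0)
```

This is why the thesis expects high-frequency observations to select only the most efficiently accelerating mergers, while 150 MHz surveys should reveal a much larger population.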

Relevance:

30.00%

Publisher:

Abstract:

The automatic extraction of biometric descriptors of anonymous people is a challenging scenario in camera networks. This task is typically accomplished by making use of visual information. Calibrated RGBD sensors make possible the extraction of point cloud information. We present a novel approach for people semantic description and re-identification using the individual's point cloud information. The proposal combines the use of simple geometric features with point cloud features based on surface normals.
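Surface normals of a point cloud are commonly estimated as the smallest-eigenvalue eigenvector of each point's local neighborhood covariance (PCA). The sketch below shows that standard workflow as an assumption; it is not the authors' actual pipeline, and the brute-force neighbor search is for illustration only:

```python
# Per-point normal estimation via PCA of k-nearest-neighbor covariance.
import numpy as np

def estimate_normals(points, k=8):
    """points: (N, 3) array. Returns (N, 3) unit normals (sign is ambiguous)."""
    normals = np.empty_like(points, dtype=float)
    for i in range(len(points)):
        # Brute-force k nearest neighbors (a KD-tree would be used in practice).
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)                    # 3x3 neighborhood covariance
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]              # direction of least variance
    return normals
```

A descriptor such as the one the abstract hints at could then histogram these normals over body regions alongside simple geometric measurements (e.g. height, limb lengths).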

Relevance:

30.00%

Publisher:

Abstract:

Software repositories have been getting a lot of attention from researchers in recent years. In order to analyze software repositories, it is necessary to first extract raw data from the version control and problem tracking systems. This poses two challenges: (1) extraction requires a non-trivial effort, and (2) the results depend on the heuristics used during extraction. These challenges burden researchers who are new to the community and make it difficult to benchmark software repository mining, since it is almost impossible to reproduce experiments done by another team. In this paper we present the TA-RE corpus. TA-RE collects extracted data from software repositories in order to build a collection of projects that will simplify the extraction process. Additionally, the collection can be used for benchmarking. As a first step, we propose an exchange language capable of making the sharing and reuse of data as simple as possible.
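The abstract does not specify the TA-RE exchange language itself, but the idea of a shared interchange record for extracted repository data can be sketched. The field names below are hypothetical, chosen only to illustrate what such a record might carry (revision metadata plus the issue links recovered by an extraction heuristic):

```python
# Hypothetical interchange record for extracted repository data; the schema
# is illustrative, NOT the actual TA-RE exchange language.
import json

commit_record = {
    "repository": "example/project",   # illustrative project identifier
    "revision": "a1b2c3",              # version-control revision id
    "author": "alice",
    "timestamp": "2006-05-01T12:00:00Z",
    "changed_files": ["src/main.c"],
    "linked_issues": [42],             # issue ids recovered by a linking heuristic
}

# Round-tripping through a textual format is what makes sharing and
# benchmarking reproducible across teams and tools.
serialized = json.dumps(commit_record, sort_keys=True)
restored = json.loads(serialized)
```

The point of a fixed schema is that two teams running different extractors can compare results field by field instead of re-running each other's heuristics.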

Relevance:

30.00%

Publisher:

Abstract:

Most recently, the discussion about the optimal treatment for different subsets of patients suffering from coronary artery disease has re-emerged, mainly because of the uncertainty among doctors and patients regarding the phenomenon of unpredictable early and late stent thrombosis. Surgical revascularization using multiple arterial bypass grafts has repeatedly proven its superiority compared to percutaneous intervention techniques, especially in patients suffering from left main stem disease and coronary 3-vessel disease. Several prospective randomized multicenter studies comparing early and mid-term results following PCI and CABG have been really restrictive with respect to patient enrollment, with less than 5% of all patients treated during the same time period being enrolled. Coronary artery bypass grafting allows the most complete revascularization in one session, because all target coronary vessels larger than 1 mm can be bypassed in their distal segments. Once the patient has been referred for surgery, surgeons have to consider the most complete arterial revascularization in order to decrease the long-term necessity for re-revascularization; for instance, the patency rate of the left internal thoracic artery grafted to the distal part of the left anterior descending artery may be as high as 90-95% after 10 to 15 years. Early mortality following isolated CABG operations has been as low as 0.6 to 1% in the most recent period (reports from the University Hospital Berne and the University Hospital of Zurich); besides these excellent results, the CABG option seems to be less expensive than PCI over time, since the necessity for additional PCI is rather high following initial PCI, and the price of stent devices is still very high, particularly in Switzerland.
Patients, insurers, and experts in health care should be better and more honestly informed concerning the risks and costs of PCI and CABG procedures, as well as about the much higher rate of subsequent interventions following PCI. A team approach for all patients in whom both options could be offered seems mandatory to avoid unbalanced information being given to patients. Looking at the recent developments in transcatheter valve treatments, the revival of cardiological-cardiosurgical conferences seems to be a good option to optimize the cooperation between the two medical specialties: cardiology and cardiac surgery.

Relevance:

30.00%

Publisher:

Abstract:

Central learning management platforms are now standard at many universities. For these platforms to be used sustainably, their evaluation must take into account the diverse interests of lecturers, students, central service units, and university management. This applies both to the evaluation processes for introducing learning platforms and to the re-evaluation processes needed to adapt a university's infrastructure to changing needs and conditions. At the University of Trier, (re-)evaluation procedures have been and are being carried out that systematically involve all stakeholders of the university. The basis for this is a network of all e-learning support and development units of the university, established within a project on e-learning integration. As a case study, the article presents the concepts for the evaluation and re-evaluation processes at the University of Trier. The focus is less on the procedure itself, in terms of the choice of criteria, the assessment, and the results, than on the roles and tasks of the actors in these decision-making processes.

Relevance:

30.00%

Publisher:

Abstract:

The article seeks a re-conceptualization of the global digital divide debate. It critically explores the predominant notion, its evolution and measurement, as well as the policies that have been advanced to bridge the digital divide. Acknowledging the complexity of this inequality, the article aims at analyzing the disparities beyond the connectivity and skills barriers. Without understating the first two digital divides, it is argued that as the Internet becomes more sophisticated and more integrated into economic, social, and cultural processes, a “third” generation of divides becomes critical. These divides are drawn not at the entry to the net but within the net itself, and limit access to content. The increasing barriers to content, though of a diverse nature, all relate to some governance characteristics inherent in cyberspace, such as global spillover of local decisions, regulation through code, and proliferation of self- and co-regulatory models. It is maintained that as the practice of intervention intensifies in cyberspace, multiple and far-reaching points of control outside formal legal institutions are created, threatening the availability of public goods and making the pursuit of public objectives difficult. This is an aspect that is rarely addressed in the global digital divide discussions, even in comprehensive analyses and political initiatives such as the World Summit on the Information Society. Yet, the conceptualization of the digital divide as impeded access to content may be key in terms of ensuring real participation and catering for the long-term implications of digital technologies.

Relevance:

30.00%

Publisher:

Abstract:

The UNESCO Convention on cultural diversity marks a wilful separation between the issues of trade and culture on the international level. The present article explores this intensified institutional, policy- and decision-making disconnect and exposes its flaws and the considerable drawbacks it brings with it. These drawbacks, the article argues, become particularly pronounced in the digital media environment, which has impacted upon both the conditions of trade with cultural products and services and upon the diversity of cultural expressions in local and global contexts. Criticising the strong and now increasingly meaningless path dependencies of the analogue age, the article sketches some possible ways of reconciling trade and culture, most of which lead back to the WTO, rather than to UNESCO.

Relevance:

30.00%

Publisher:

Abstract:

In the following article we aim to account for some aspects that help in thinking about how a pedagogical field is constituted. We propose to do this through an overview that makes explicit the different forms of relation between two disciplines that take education as their object: sociology and pedagogy. To this end we carry out a historical analysis that takes as its reference three moments in the development of educational research over the course of the twentieth century. In each of these moments we can see different forms of articulation between the disciplines. Subsequently, the idea is to advance in the analysis of how this process unfolded in the case of Uruguay, finally analyzing how this articulation is reflected in the design of two educational policies: the Full-Time Schools (ETC) and the Community Teachers Program (PMC).
