881 results for mapping the current state
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Reservoirs are artificial environments built by humans, and their impacts are not completely understood. Long retention times and high nutrient availability in the water raise the eutrophic level. Eutrophication is directly correlated with primary productivity by phytoplankton. These organisms play an important role in the environment; however, high concentrations of certain species can lead to public health problems. Some cyanobacteria species produce toxins that, at certain concentrations, can cause serious diseases of the liver and nervous system and may even lead to death. Phytoplankton contains photoactive pigments that can be used to identify these organisms. Remote sensing data are therefore a viable alternative for mapping these pigments and, consequently, the trophic level. Chlorophyll-a (Chl-a) is present in all phytoplankton species. The aim of this work was thus to evaluate the performance of images from the Operational Land Imager (OLI) sensor onboard the Landsat-8 satellite in determining Chl-a concentrations and estimating the trophic level in a tropical reservoir. Empirical models were fitted using data from two field surveys conducted in May and October 2014 (austral autumn and austral spring, respectively). The models were applied to a time series of OLI images from May 2013 to October 2014. The estimated Chl-a concentration was used to classify the trophic level with a trophic state index that adopts the concentration of this pigment as its parameter. The Chl-a concentration models showed reasonable results, but their performance was likely impaired by the atmospheric correction; consequently, the trophic level classification did not yield better results either.
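The abstract does not give the fitted coefficients of the empirical models, so the sketch below uses a hypothetical band-ratio regression and illustrative trophic-class cut-offs purely to show the shape of such a workflow; the function names, coefficients, and thresholds are assumptions, not values from the study:

```python
# Sketch of an empirical band-ratio Chl-a model with trophic classification.
# Coefficients A, B and the class thresholds are purely illustrative.
A, B = 25.0, -5.0  # hypothetical regression coefficients (mg m^-3)

def chla_from_bands(nir, red):
    """Estimate Chl-a (mg m^-3) from a NIR/red reflectance ratio."""
    return max(0.0, A * (nir / red) + B)

def trophic_class(chla):
    """Classify trophic state from Chl-a using illustrative thresholds."""
    if chla < 3.5:
        return "oligotrophic"
    if chla < 9.0:
        return "mesotrophic"
    return "eutrophic"

chla = chla_from_bands(nir=0.04, red=0.10)   # ratio 0.4 -> 25*0.4 - 5 = 5.0
print(chla, trophic_class(chla))
```

In the study itself the regression would be refitted per field survey, which is why atmospheric-correction errors in the band reflectances propagate directly into both the Chl-a estimate and the trophic class.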
Abstract:
At the first Vertebrate Pest Control Conference in 1964, I traced the history of plague control in California and outlined a revised approach based on newer concepts of plague ecology. In our state of relative ignorance, this required a number of unproved assumptions about plague occurrence in California that verged on crystal-ball gazing. These were principally that (1) plague persists in relatively resistant rodent species in certain favorable locations; (2) ground squirrels and chipmunks experience periodic epizootics but are not permanent reservoirs; (3) plague "foci" of the past were merely sites of conspicuous epizootics, did not necessarily correspond to permanent foci, and could result from epizootic migrations over considerable distances; and (4) a number of assumptions about areas of greatest epizootic potential can be made by analyzing the pattern of recurrent plague outbreaks in the past. Since then the validity of these assumptions has been tested by the largest outbreak of plague since the early 1940s. We believe that the results have proved the crystal ball largely correct, resulting in much more precise and efficient epizootic surveillance and deployment of control measures than in the past. The outbreak was for us an administrative emergency that exceeded the capacities of the State Health Department. We greatly appreciated the vital help and cooperation of other agencies and individuals. The U.S. Public Health Service accepted a heavy burden of laboratory testing through its San Francisco Field Station, and provided emergency field personnel. The contributions of State Department of Agriculture, Bureau of Weed and Vertebrate Pest Control; U.S. Parks, Forest Service and Bureau of Land Management; local health and agriculture departments; and State Division of Parks personnel were essential in accomplishing control work, as well as epizootic surveillance.
Abstract:
The time is ripe for a comprehensive mission to explore and document Earth's species. This calls for a campaign to educate and inspire the next generation of professional and citizen species explorers, investments in cyber-infrastructure and collections to meet the unique needs of the producers and consumers of taxonomic information, and the formation and coordination of a multi-institutional, international, transdisciplinary community of researchers, scholars and engineers with the shared objective of creating a comprehensive inventory of species and detailed map of the biosphere. We conclude that an ambitious goal to describe 10 million species in less than 50 years is attainable based on the strength of 250 years of progress, worldwide collections, existing experts, technological innovation and collaborative teamwork. Existing digitization projects are overcoming obstacles of the past, facilitating collaboration and mobilizing literature, data, images and specimens through cyber technologies. Charting the biosphere is enormously complex, yet necessary expertise can be found through partnerships with engineers, information scientists, sociologists, ecologists, climate scientists, conservation biologists, industrial project managers and taxon specialists, from agrostologists to zoophytologists. Benefits to society of the proposed mission would be profound, immediate and enduring, from detection of early responses of flora and fauna to climate change to opening access to evolutionary designs for solutions to countless practical problems. The impacts on the biodiversity, environmental and evolutionary sciences would be transformative, from ecosystem models calibrated in detail to comprehensive understanding of the origin and evolution of life over its 3.8-billion-year history.
The resultant cyber-enabled taxonomy, or cybertaxonomy, would open access to biodiversity data to developing nations, assure access to reliable data about species, and change how scientists and citizens alike access, use and think about biological diversity information.
Abstract:
The ATLAS and CMS collaborations have recently shown data suggesting the presence of a Higgs boson in the vicinity of 125 GeV. We show that a two-Higgs-doublet model spectrum, with the pseudoscalar state being the lightest, could be responsible for the diphoton signal events. In this model, the other scalars are considerably heavier and are not excluded by the current LHC data. If this assumption is correct, future LHC data should show a strengthening of the gamma gamma signal, while the signals in the ZZ* -> 4l and WW* -> 2l2nu channels should diminish and eventually disappear, due to the absence of diboson tree-level couplings of the CP-odd state. The heavier CP-even neutral scalars can now decay into channels involving the CP-odd light scalar, which, together with their larger masses, allows them to avoid the existing bounds on Higgs searches. We suggest additional signals to confirm this scenario at the LHC, in the decay channels of the heavier scalars into AA and AZ. Finally, this inverted two-Higgs-doublet spectrum is characteristic of models in which fermion condensation leads to electroweak symmetry breaking. We show that in these theories it is possible to obtain the observed diphoton signal at or somewhat above the prediction for the standard model Higgs for typical values of the predicted parameters.
Abstract:
We investigate how the initial geometry of a heavy-ion collision is transformed into final flow observables by solving event-by-event ideal hydrodynamics with realistic fluctuating initial conditions. We study quantitatively to what extent anisotropic flow (v_n) is determined by the initial eccentricity epsilon_n for a set of realistic simulations, and we discuss which definition of epsilon_n gives the best estimator of v_n. We find that the common practice of using an r^2 weight in the definition of epsilon_n in general results in a poorer predictor of v_n than using an r^n weight, for n > 2. We similarly study the importance of additional properties of the initial state. For example, we show that in order to correctly predict v_4 and v_5 for noncentral collisions, one must take into account nonlinear terms proportional to epsilon_2^2 and epsilon_2*epsilon_3, respectively. We find that it makes no difference whether one calculates the eccentricities over a range of rapidity or in a single slice at z = 0, nor is it important whether one uses an energy or entropy density weight. This knowledge will be important for making a more direct link between experimental observables and hydrodynamic initial conditions, the latter being poorly constrained at present.
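The r^n-weighted eccentricity the abstract refers to is conventionally defined as epsilon_n = |∫ r^n e^(in*phi) rho dx dy| / ∫ r^n rho dx dy, evaluated in the frame centred on the density rho. A minimal pure-Python sketch for a discretised, point-weight density (function name and the two-point test profile are illustrative, not taken from the paper):

```python
import math
import cmath

def eccentricity(pts, w, n, p):
    """r^p-weighted eccentricity epsilon_n of weighted points (x, y)."""
    # Recentre on the weighted centroid, as the definition requires.
    W = sum(w)
    cx = sum(wi * x for wi, (x, _) in zip(w, pts)) / W
    cy = sum(wi * y for wi, (_, y) in zip(w, pts)) / W
    num = 0.0 + 0.0j
    den = 0.0
    for wi, (x, y) in zip(w, pts):
        r = math.hypot(x - cx, y - cy)
        phi = math.atan2(y - cy, x - cx)
        num += wi * r**p * cmath.exp(1j * n * phi)
        den += wi * r**p
    return abs(num) / den

# Two equal point sources on the x-axis: a maximally elliptic profile.
print(eccentricity([(1.0, 0.0), (-1.0, 0.0)], [1.0, 1.0], n=2, p=2))
```

The abstract's comparison of r^2 versus r^n weighting amounts to varying `p` in this definition while holding `n` fixed and checking which choice correlates better with the simulated v_n.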
Abstract:
Background: The Atlantic rainforest ecosystem, where bromeliads are abundant, provides an excellent environment for Kerteszia species, because these anophelines use the axils of those plants as larval habitat. Anopheles (K.) cruzii and Anopheles (K.) bellator are considered the primary vectors of malaria in the Atlantic forest. Although the incidence of malaria has declined in some areas of the Atlantic forest, autochthonous cases are still registered every year, with Anopheles cruzii considered a primary vector of both human and simian Plasmodium. Methods: Recent publications addressing ecological aspects that are important for understanding the involvement of Kerteszia species in the epidemiology of malaria in the Atlantic rainforest in the Neotropical Region were analysed. Conclusion: The current state of knowledge about Kerteszia species in relation to the Atlantic rainforest ecosystem is discussed, with emphasis on ecological characteristics related to epidemiological aspects of this group of mosquitoes. The main objective was to investigate biological aspects of the species that should be given priority in future studies.
Abstract:
Master's degree in Oceanography
Abstract:
Nitric oxide (NO) and nitrogen dioxide (NO2) play an important role in the self-cleansing capacity of the atmosphere. These trace gases govern the photochemical production of ozone (O3) and influence the abundance of hydroxyl (OH) and nitrate (NO3) radicals. During daytime, when sufficient solar radiation and ozone are present, NO and NO2 are in a rapid photochemical equilibrium, the "photostationary state"; the sum of NO and NO2 is therefore referred to as NOx. Previous studies of the photostationary state of NOx comprise measurements at a wide variety of sites, from cities (characterized by strong air pollution) to remote regions (characterized by low air pollution). While the photochemical cycling of NO and NO2 is fundamentally understood under conditions of elevated NOx concentrations, significant gaps remain in our understanding of the underlying cycling processes in rural and remote regions characterized by lower NOx concentrations. These gaps could be caused by NO2 measurement interferences, particularly for indirect detection methods, which can be affected by artifacts. At very low NOx concentrations, and when NO2 measurement interferences can be excluded, it is often concluded that these gaps in understanding are linked to the existence of an "unknown oxidant". In this work, the photostationary state of NOx is analysed with the aim of investigating the potential existence of hitherto unknown processes. A gas analyser for the direct measurement of atmospheric NO2 by laser-induced fluorescence (LIF), GANDALF, was newly developed and deployed for the first time in field measurements during the PARADE 2011 campaign. The PARADE measurements were carried out in summer 2011 in a rural area of Germany. Extensive NO2 measurements using different techniques (DOAS, CLD and CRD) enabled a detailed and successful comparison of GANDALF with the other NO2 measurement techniques. Further relevant trace gases and meteorological parameters were measured in order to study the photostationary state of NOx in this environment, based on the NO2 measurements with GANDALF. During PARADE, moderate NOx mixing ratios (10^2 - 10^4 pptv) were observed at the site. Mixing ratios of biogenic volatile organic compounds (BVOCs) from the surrounding, mainly coniferous, forest were of the order of 10^2 pptv. The characteristics of the photostationary state of NOx at low NOx mixing ratios (10 - 10^3 pptv) were investigated at a second site, in a boreal forest, during the HUMPPA-COPEC 2010 campaign, carried out in summer 2010 at the SMEAR II station in Hyytiälä, southern Finland. The characteristic properties of the photostationary state of NOx in the two forest environments are compared in this work. Furthermore, the extensive data set, which includes measurements of trace gases relevant to radical chemistry (OH, HO2) as well as total OH reactivity, makes it possible to test and improve the current understanding of NOx photochemistry using a box model constrained by the measured data.
Although NOx concentrations during HUMPPA-COPEC 2010 were lower than during PARADE 2011 and BVOC concentrations were higher, the cycling processes of NO and NO2 are fundamentally understood in both cases. The analysis of the photostationary state of NOx at the two strongly contrasting sites indicates that potentially unknown processes are absent in both cases. The current representation of NOx chemistry for HUMPPA-COPEC 2010 was simulated using the chemical mechanism MIM3*. The simulation results are consistent with the calculations based on the photostationary state of NOx.
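The photostationary state discussed above is conventionally quantified by the Leighton ratio, phi = j(NO2)·[NO2] / (k(NO+O3)·[NO]·[O3]), which equals 1 when NO2 photolysis is exactly balanced by the NO + O3 reaction; deviations from unity point either to additional NO-to-NO2 conversion (e.g. by peroxy radicals, the "unknown oxidant" discussion) or to measurement artifacts. A minimal sketch with illustrative midday magnitudes (the rate coefficient and photolysis frequency below are typical textbook values, not campaign data):

```python
def leighton_ratio(j_no2, k_no_o3, no, no2, o3):
    """Photostationary-state (Leighton) ratio.

    j_no2   : NO2 photolysis frequency [s^-1]
    k_no_o3 : rate coefficient of NO + O3 [cm^3 molecule^-1 s^-1]
    no, no2, o3 : number densities [molecule cm^-3]
    """
    return (j_no2 * no2) / (k_no_o3 * no * o3)

# Illustrative values: j(NO2) ~ 8e-3 s^-1, k(NO+O3) ~ 1.9e-14 cm^3 s^-1,
# ~40 ppbv O3 at surface conditions; NO2 is set so the ratio is exactly 1.
o3 = 40e-9 * 2.46e19
no = 2.0e9
no2 = no * (1.9e-14 * o3) / 8e-3
print(leighton_ratio(8e-3, 1.9e-14, no, no2, o3))
```

In practice the ratio is computed from measured NO, NO2, O3 and j(NO2); this is why direct, artifact-free NO2 measurements such as those from a LIF instrument are decisive for interpreting departures from unity.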
Abstract:
The clinical validity of at-risk criteria of psychosis has been questioned based on epidemiological studies that have reported much higher prevalence and annual incidence rates of psychotic-like experiences (PLEs, as assessed by either self-rating questionnaires or layperson interviews) in the general population than of the clinical phenotype of psychotic disorders (van Os et al., 2009). Thus, it is unclear whether “current at-risk criteria reflect behaviors so common among adolescents and young adults that a valid distinction between ill and non-ill persons is difficult” (Carpenter, 2009). We therefore assessed the 3-month prevalence of at-risk criteria by means of telephone interviews in a randomly drawn general population sample from the at-risk age segment (ages 16–35 years) in the Canton of Bern, Switzerland. Eighty-five of 102 subjects had valid phone numbers; 21 of these refused (although 6 signaled willingness to participate at a later time) and 4 could not be contacted. Sixty subjects (71% of the enrollment fraction) participated. Two participants met exclusion criteria (one for being psychotic, one for lack of language skills). Twenty-two at-risk symptoms were assessed for their prevalence and severity within the 3 months prior to the interview by trained clinical raters using (i) the Structured Interview for Prodromal Syndromes (SIPS; Miller et al., 2002) for the evaluation of 5 attenuated psychotic and 3 brief limited intermittent psychotic symptoms (APS, BLIPS) as well as the state-trait criteria of the ultra-high-risk (UHR) criteria, and (ii) the Schizophrenia Proneness Instrument, Adult version (SPI-A; Schultze-Lutter et al., 2007) for the evaluation of the 14 basic symptoms included in COPER and COGDIS (Schultze-Lutter et al., 2008). Further, psychiatric axis I diagnoses were assessed by means of the Mini-International Neuropsychiatric Interview, M.I.N.I.
(Sheehan et al., 1998), and psychosocial functioning by the Scale of Occupational and Functional Assessment (SOFAS; APA, 1994). All interviewees felt ‘rather’ or ‘very’ comfortable with the interview. Of the 58 included subjects, only 1 (2%) fulfilled APS criteria by reporting the attenuated, non-delusional idea of his mind being literally read by others at a frequency of 2–3 times a week that had newly occurred 6 weeks ago. BLIPS, COPER, COGDIS or state-trait UHR criteria were not reported. Yet, twelve subjects (21%) described sub-threshold at-risk symptoms: 7 (12%) reported APS relevant symptoms but did not meet time/frequency criteria of APS, and 9 (16%) reported COPER and/or COGDIS relevant basic symptoms but at an insufficient frequency or as a trait lacking increase in severity; 4 of these 12 subjects reported both sub-threshold APS and sub-threshold basic symptoms. Table 1 displays type and frequency of the sub-threshold at-risk symptoms.
Abstract:
Systemic lupus erythematosus (SLE) can be a severe and potentially life-threatening disease that often represents a therapeutic challenge because of its heterogeneous organ manifestations. Only glucocorticoids, chloroquine and hydroxychloroquine, azathioprine, cyclophosphamide and very recently belimumab have been approved for SLE therapy in Germany, Austria and Switzerland. Dependence on glucocorticoids and resistance to the approved therapeutic agents, as well as substantial toxicity, are frequent. Therefore, treatment considerations will include 'off-label' use of medication approved for other indications. In this consensus approach, an effort has been undertaken to delineate the limits of the current evidence on therapeutic options for SLE organ disease, and to agree on common practice. This has been based on the best available evidence obtained by a rigorous literature review and the authors' own experience with available drugs derived under very similar health care conditions. Preparation of this consensus document included an initial meeting to agree upon the core agenda, a systematic literature review with subsequent formulation of a consensus and determination of the evidence level followed by collecting the level of agreement from the panel members. In addition to overarching principles, the panel have focused on the treatment of major SLE organ manifestations (lupus nephritis, arthritis, lung disease, neuropsychiatric and haematological manifestations, antiphospholipid syndrome and serositis). This consensus report is intended to support clinicians involved in the care of patients with difficult courses of SLE not responding to standard therapies by providing up-to-date information on the best available evidence.
Abstract:
It is a well-known fact that, in the electrolysis of a CuSO4 solution containing iron sulfate, using insoluble anodes, with the depletion of copper a point is finally reached where the current efficiency becomes zero. This decrease in current efficiency is due to the oxidation of ferrous sulfate to the ferric condition at the anode by the oxygen liberated. The resulting ferric sulfate diffuses over to the cathode and there dissolves copper according to the chemical equation Cu + Fe2(SO4)3 = CuSO4 + 2FeSO4. Copper that has been deposited at the cathode by the electric current is thus redissolved by the Fe2(SO4)3. The dissolution of the copper causes at the same time a formation of FeSO4, which in turn diffuses over to the anode and is there oxidized to Fe2(SO4)3; and so the cycle continues, consuming electric current without rendering useful work. E. H. Larison has noted that a definite amount of ferric salts must be reduced to the ferrous condition before all the copper will remain on the cathode; he does not state, however, just where this point lies. L. Addicks has plotted the relation between current efficiency and ferric sulfate content. Scatter in the results displaced the points more or less, although the decrease in current efficiency with increased ferric sulfate content is clearly indicated. E. T. Kern has likewise noted that the smaller the amount of copper in the solution, the greater is the reduction in current efficiency. In this work, therefore, it was desired to determine what amount of ferric iron is permissible in a copper sulfate solution of definite concentration before the current efficiency drops to zero, and what effect, if any, a definite Cu:Fe''' ratio has upon the current efficiency of the electrolysis.
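The cathode current efficiency discussed above can be framed with Faraday's law: the theoretical copper deposit is m = I·t·M/(n·F) with n = 2 for Cu2+, and any copper redissolved by Fe2(SO4)3 is subtracted from it. A minimal sketch (the current, time, and redissolved mass below are illustrative, not measurements from this work):

```python
# Faraday constant and molar mass of copper.
F = 96485.0      # C mol^-1
M_CU = 63.546    # g mol^-1

def theoretical_cu_mass(current_a, time_s, n=2):
    """Copper deposited per Faraday's law, m = I*t*M/(n*F), in grams."""
    return current_a * time_s * M_CU / (n * F)

def current_efficiency(current_a, time_s, redissolved_g):
    """Net cathode current efficiency after ferric redissolution."""
    theo = theoretical_cu_mass(current_a, time_s)
    return max(0.0, (theo - redissolved_g) / theo)

theo = theoretical_cu_mass(1.0, 3600.0)  # ~1.19 g Cu per ampere-hour
# If ferric sulfate redissolves copper as fast as it deposits,
# the efficiency falls to zero, which is the limiting case studied here.
print(theo, current_efficiency(1.0, 3600.0, redissolved_g=theo))
```

The redissolution term is what couples the efficiency to the ferric iron content: the higher the Fe''' concentration (and the lower the Cu concentration), the larger the redissolved mass per unit time, until the net deposit, and with it the efficiency, reaches zero.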
Abstract:
Within the scope of a comprehensive assessment of the degree of soil erosion in Switzerland, common methods have been used in the past, including test plot measurements, artificial rainfall simulation, and erosion modelling. In addition, mapping guidelines for all visible erosion features have been developed since the 1970s and are being successfully applied in many research and soil conservation projects. Erosion damage has been continuously mapped over a period of 9 years in a test region in the central Bernese plateau. In 2005, two additional study areas were added. The present paper assesses the data gathered and provides a comparison of the three study areas within a period of one year (from October 2005 to October 2006), focusing on the on-site impacts of soil erosion. During this period, about 11 erosive rainfall events occurred. Average soil loss rates mapped at the three study sites amounted to 0.7 t ha-1, 1.2 t ha-1 and 2.3 t ha-1, respectively. About one fourth of the total arable land showed visible erosion damage. Maximum soil losses of about 70 t ha-1 occurred on individual farm plots. Average soil erosion rates are widely used to underline the severity of an erosion problem (e.g. impacts on water bodies). But since severe rainfall events, wheel tracks, headlands, and other “singularities” often cause high erosion rates, analysis of extreme erosion patterns such as maximum values leads to a more differentiated understanding and to more appropriate conclusions for the planning and design of soil protection measures. The study contains an assessment of soil erosion in Switzerland, emphasizing questions of extent, frequency and severity. At the same time, the effects of different types of land management are investigated in the field, aiming at the development of meaningful impact indicators of (un-)sustainable agriculture/soil erosion risk as well as the validation of erosion models.
The results illustrate that conservation agriculture including no-till, strip tillage and in-mulch seeding plays an essential role in reducing soil loss as compared to conventional tillage.
Abstract:
This article gives an overview of trends in policy on the functions and role of social work in French society over the past twenty years. The author suggests several reasons for the current feeling of crisis of professional identity among professionals and for the complexity of the political demands made on social work. These are analysed as a consequence of the specifically French context of decentralisation of the State and of the multitude of approaches to organisation and to professional training programs. On a broader level, it seems that French social work reflects many characteristics of “modernity”: it is concerned with difficulties in defining a clear identity, lacks a clear territorial and institutional base (or is “disembedded”, to borrow Giddens' term), and has difficulty finding a clear voice to make its values heard in an increasingly politicised arena of public debate.