Abstract:
In this paper, we explore the connection between labor market segmentation into two sectors, a modern, protected, formal sector and a traditional, unprotected, informal sector, and overeducation in a developing country. Informality is thought to have negative consequences, primarily through poorer working conditions, lack of social security and low levels of productivity throughout the economy. This paper considers an aspect that has not been previously addressed, namely the fact that informality might also affect the way workers match their actual education with that required to perform their job. We use micro-data from Colombia to test the relationship between overeducation and informality. Empirical results suggest that, once the endogeneity of employment choice has been accounted for, formal male workers are less likely to be overeducated. Interestingly, the propensity of being overeducated among women does not seem to be closely related to the employment choice.
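The key step described above is accounting for the endogeneity of the choice between formal and informal employment. As a rough illustration of one standard way to do this, here is a control-function probit on simulated data; it is a sketch under simplified assumptions, not the paper's exact estimator, variables or data.

```python
# Minimal sketch: control-function (Heckman-style) probit for overeducation with an
# endogenous formal-sector dummy. All data are simulated; "local_formal_share" is a
# hypothetical exclusion restriction invented for the example.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
experience = rng.normal(10, 5, n)
local_formal_share = rng.uniform(0.2, 0.8, n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], n).T  # correlated errors

formal = (0.3 * local_formal_share + 0.02 * experience + u > 0).astype(int)
overeducated = (0.5 - 0.6 * formal + 0.01 * experience + e > 0).astype(int)

# First stage: probit for the probability of holding a formal-sector job.
X1 = sm.add_constant(np.column_stack([local_formal_share, experience]))
first = sm.Probit(formal, X1).fit(disp=0)
xb = X1 @ first.params
# Generalized residual (inverse-Mills-type term) used as a control function.
gen_resid = np.where(formal == 1,
                     norm.pdf(xb) / norm.cdf(xb),
                     -norm.pdf(xb) / (1 - norm.cdf(xb)))

# Second stage: probit for overeducation including the control function.
X2 = sm.add_constant(np.column_stack([formal, experience, gen_resid]))
second = sm.Probit(overeducated, X2).fit(disp=0)
print(second.summary())
```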
Abstract:
For more than 20 years, organisations such as Gesto por la Paz and Lokarri have been trying to change the social approach to violence, instilling values of peace and dialogue. This working paper defends the idea that the work of these two organisations is key to understanding the end of ETA violence and the lack of support for political violence in the Basque Country. It develops the Basque peace frame generated by this movement and explains how this frame is present at the different levels of Basque society, changing the way political collective identities are negotiated in the Basque Country. Ultimately, their effort is to propose another way of doing politics, one in which nationalism and violence are not intrinsically linked, escaping the polarization and confrontation that prevailed during the 1980s and 1990s.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has been growing steadily since it first began to be used in the early 1970s. In the clinical environment, this imaging system has emerged as a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure that the benefit-risk balance works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults. For this population, the risk of developing a cancer whose latency period may exceed 20 years is significantly higher than for adults. Assuming that each examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce, as much as possible, the doses delivered during CT examinations of children and young adults while maintaining an image quality sufficient for diagnosis, in order to propose optimized protocols.

The optimization step requires evaluating both the delivered dose and the image quality needed for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this work was to use two radically different approaches to evaluate image quality. The first approach, called the "physical approach", is based on physical metrics (SD, MTF, NPS, etc.) measured under well-defined conditions, most often on phantoms. Although this technique has limitations because it does not take the radiologist's perception into account, it enables a simple and rapid characterization of certain image properties. The second approach, called the "clinical approach", is based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step were asked to score the quality of these structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and time-consuming, but it has the advantage of being very close to the radiologist's practice and can be considered a reference method.

Among the main results of this work, it was shown that the statistical iterative reconstruction algorithms studied in the clinic (ASIR and Veo) have a strong potential to reduce CT dose (by up to 90%). However, because of the way they work, they modify the appearance of the image, producing a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise frequency spectrum, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also shows that integrating these new reconstruction techniques into clinical practice cannot be done simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work and the image quality tools developed can also guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
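The "physical approach" relies on metrics such as the noise power spectrum (NPS) to characterize the texture change introduced by iterative reconstruction. As a rough illustration of how such a metric is computed, here is a minimal NumPy sketch (not the thesis' actual code) that estimates a 2D NPS from noise-only ROIs extracted from repeated phantom acquisitions and collapses it to a radially averaged 1D profile; the function names and the assumption of square ROIs are illustrative.

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm):
    """Estimate a 2D noise power spectrum from a stack of noise-only ROIs (n, ny, nx)."""
    rois = np.asarray(noise_rois, dtype=float)
    n, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)        # remove per-ROI mean
    dft = np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))
    # Ensemble average of |FFT|^2, normalized by pixel area and ROI size.
    return (np.abs(dft) ** 2).mean(axis=0) * (pixel_mm ** 2) / (nx * ny)

def radial_average(nps, pixel_mm, n_bins=64):
    """Collapse a 2D NPS to a 1D profile over radial spatial frequency (mm^-1)."""
    ny, nx = nps.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_mm))
    fr = np.hypot(*np.meshgrid(fx, fy))                         # radial frequency map
    bins = np.linspace(0, fr.max(), n_bins + 1)
    idx = np.digitize(fr.ravel(), bins) - 1
    sums = np.bincount(idx, weights=nps.ravel(), minlength=n_bins)
    counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, (sums / counts)[:n_bins]
```

Comparing the radial NPS profiles of filtered back-projection and iterative reconstructions at matched noise magnitude is one way to quantify the texture shift described above.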
Abstract:
Objective To identify the understanding of the healthcare professionals in relation to the role of complementary therapies in primary health care. Method Systematic review by way of the following information sources: PubMed, CINAHL, PeriEnf, AMED, EMBASE, Web of Science, Psicoinfo and Psicodoc, using the keyword Primary Health Care alone, and associated with the following keywords: Medicinal Plants, Herbal Medicine, Homeopathy, Traditional Chinese Medicine, Acupuncture, Anthroposophical Medicine. Results Twenty-two studies from 1986 to 2011 were included. We identified three styles of practice: conventional medicine, complementary therapies and integrative medicine. Positioning professional practices within these three styles may facilitate discussion of concepts of health care, enhancing the health care provided as a result. Conclusion The work process in primary care presents difficulties for conducting integrative and holistic health care, but this practice has been introduced over time by professionals who integrate conventional medicine and complementary therapies, concerned with the care and well-being of patients.
Abstract:
Correspondence analysis, when used to visualize relationships in a table of counts (for example, abundance data in ecology), has been frequently criticized as being too sensitive to objects (for example, species) that occur with very low frequency or in very few samples. In this statistical report we show that this criticism is generally unfounded. We demonstrate this in several data sets by calculating the actual contributions of rare objects to the results of correspondence analysis and canonical correspondence analysis, both to the determination of the principal axes and to the chi-square distance. It is a fact that rare objects are often positioned as outliers in correspondence analysis maps, which gives the impression that they are highly influential, but their low weight offsets their distant positions and reduces their effect on the results. An alternative scaling of the correspondence analysis solution, the contribution biplot, is proposed as a way of mapping the results in order to avoid the problem of outlying and low contributing rare objects.
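The report's argument rests on computing the actual contributions of (rare) columns to the correspondence analysis axes. A minimal sketch of that computation, assuming an abundance table with samples as rows and species as columns; the matrix values and function name below are illustrative, not from the paper.

```python
import numpy as np

def column_contributions(N):
    """Contributions of columns (e.g. species) to the principal axes of a CA."""
    P = N / N.sum()                                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                   # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    gamma = Vt.T / np.sqrt(c)[:, None]                    # standard column coordinates
    contrib = c[:, None] * gamma ** 2                     # each column (axis) sums to 1
    return contrib, sv ** 2                               # contributions, principal inertias

# Tiny invented abundance table (rows = samples, columns = species); the last two
# species are rare. Inspect how much each species actually contributes to each axis.
N = np.array([[10, 8, 1, 0],
              [ 8, 6, 0, 1],
              [12, 9, 0, 0],
              [ 9, 7, 2, 1]], dtype=float)
contrib, inertias = column_contributions(N)
print(np.round(contrib, 3))
```

A rare species can sit far from the origin in the map yet, because of its small mass, contribute only a small fraction of an axis' inertia, which is exactly the point made in the abstract.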
Abstract:
BACKGROUND: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. METHODOLOGY/PRINCIPAL FINDINGS: In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies. CONCLUSIONS: This update does not change the conclusions of the review, in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
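For readers unfamiliar with the reported effect sizes, here is a small, purely illustrative calculation of an odds ratio for full reporting of significant versus non-significant outcomes; the counts below are invented for the example and are not taken from the review.

```python
import numpy as np

# Hypothetical 2x2 table (invented counts).
# Rows: outcome statistically significant / not significant; columns: fully reported / not.
table = np.array([[80, 20],
                  [50, 50]], dtype=float)

odds_significant = table[0, 0] / table[0, 1]          # 4.0
odds_nonsignificant = table[1, 0] / table[1, 1]       # 1.0
odds_ratio = odds_significant / odds_nonsignificant   # 4.0: ~4x the odds of full reporting

# Approximate 95% confidence interval on the log-odds-ratio scale (Woolf method).
se = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
print(f"OR = {odds_ratio:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")
```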
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability. The choice of variables remains an issue even when data are available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to counter this negative influence would be to improve the use of ICT in school management and teaching. Planning of new schools should also consider the advantages of being located on a single site, which makes it possible to reach a critical size in terms of pupils and teachers. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions for disadvantaged children.
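Ray's two-stage approach combines a DEA efficiency analysis with an econometric explanation of the scores. The sketch below is a generic input-oriented, constant-returns-to-scale DEA formulation solved as a linear program; it is an illustration under simplified assumptions (invented schools, inputs and outputs), not the exact model specification or data used in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores. X: inputs (n_dmu, n_in), Y: outputs (n_dmu, n_out)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                  # minimize theta
        A_in = np.c_[-X[o], X.T]                     # sum_j lam_j * x_ij <= theta * x_io
        A_out = np.c_[np.zeros((s, 1)), -Y.T]        # sum_j lam_j * y_rj >= y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores[o] = res.x[0]
    return scores

# Illustrative data: 5 schools, inputs = (teachers, budget), output = mean test score.
X = np.array([[20, 1.0], [25, 1.2], [30, 1.6], [22, 1.1], [40, 2.0]])
Y = np.array([[520.], [540.], [560.], [500.], [570.]])
print(np.round(dea_ccr_input(X, Y), 3))
# Second stage (Ray 1991): regress these scores on environmental variables,
# e.g. a multi-site dummy and the share of underprivileged pupils.
```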
Abstract:
Patients' perception of health-related quality of life during the recovery period after total hip replacement surgery – a six-month follow-up study. This two-phase follow-up study examined patients' perception of health-related quality of life during the recovery period after hip replacement surgery. In the first phase, the purpose was both to describe patients' experiences of being a patient, of the care they received and of the healthcare organisation, and to analyse, on the basis of earlier studies, the outcomes of surgery from the patient's perspective. In the second phase, the purpose was to evaluate patients' perceived quality of life after surgery and to examine whether the primary outcomes (physical functioning, pain, anxiety) or the economic consequences (out-of-pocket costs, use of services) affected health-related quality of life. The aim of the study was to identify possible critical points in time, or factors, that may slow recovery and thereby impair patients' quality of life. This information can be used in nursing care when planning appropriate care and support for the recovery period. In the first phase, patients undergoing primary surgery (n = 17) described their experiences in thematic interviews twice after the operation. The interview data were analysed using inductive content analysis. In addition, 17 research articles were analysed using deductive content analysis with respect to the outcomes of surgery for the patient, the factors affecting those outcomes and the research methods used. In the second phase, patients undergoing primary or revision surgery (n = 100) evaluated the outcomes of surgery for six months after the operation: health-related quality of life, primary outcomes and economic consequences. Data were collected using several instruments: the Sickness Impact Profile (Finnish version), the State-Trait Anxiety Inventory and the Numeric Rating Scale. In addition, questionnaires developed for this study were used: a physical functioning scale, a service utilisation scale and a cost scale. The results of the second phase were analysed using statistical methods. Patients' health-related quality of life improved and pain was relieved after surgery, and physical functioning increased during the recovery period. Despite these positive changes, patients experienced anxiety to the same degree as before the operation. Service use varied over the recovery period, and there was wide variation in the costs paid by patients. Improved physical functioning and relief of pain improved health-related quality of life. By contrast, poorer quality of life during the recovery period was associated with greater use of services, whereas costs were not associated with quality of life. Patients' individual characteristics should be taken into account more when planning appropriate postoperative care and support. Patients need individualised guidance, because many background factors (e.g. age, gender, preoperative pain, marital status and type of surgery) affect recovery.
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used; they aren't oriented towards helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or a county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded and in what sequence? Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and take a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) that roads are but one sub-system of a much larger 'Road Based Transportation System', 2) that the size and activity level of the overall system are determined by market forces, 3) that the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation, and 4) that the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed.
This involved creating a physical model to represent the size, characteristics, activity levels, and the rates at which the activities take place, developing a companion economic cost model, and then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used at the full-system, partial-system, single-segment, project, and general design guide levels of analysis. The method appeared capable of remedying many of the defects of existing work methods and of answering society's transportation questions from a new perspective.
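To make the "Total Cost of Transportation" principle concrete, here is a deliberately simplified sketch in which the improvement alternative minimizing the sum of agency and road-user costs is selected. All alternatives, cost rates and traffic figures below are invented for illustration and are not taken from the Iowa model.

```python
# Toy illustration: pick the road improvement that minimizes total annual system cost
# (agency costs plus vehicle operating, travel-time and crash costs of the users).
def annual_total_cost(alt):
    agency = alt["annualized_construction"] + alt["annual_maintenance"]
    user = alt["aadt"] * 365 * alt["length_miles"] * (
        alt["veh_op_cost_per_mile"] + alt["time_cost_per_mile"] + alt["crash_cost_per_mile"]
    )
    return agency + user

alternatives = {
    "do nothing":  dict(annualized_construction=0, annual_maintenance=15_000,
                        aadt=400, length_miles=5, veh_op_cost_per_mile=0.65,
                        time_cost_per_mile=0.40, crash_cost_per_mile=0.12),
    "resurface":   dict(annualized_construction=45_000, annual_maintenance=8_000,
                        aadt=400, length_miles=5, veh_op_cost_per_mile=0.55,
                        time_cost_per_mile=0.38, crash_cost_per_mile=0.10),
    "reconstruct": dict(annualized_construction=130_000, annual_maintenance=5_000,
                        aadt=400, length_miles=5, veh_op_cost_per_mile=0.50,
                        time_cost_per_mile=0.35, crash_cost_per_mile=0.07),
}

for name, alt in alternatives.items():
    print(f"{name:12s} total annual cost: ${annual_total_cost(alt):,.0f}")
best = min(alternatives, key=lambda k: annual_total_cost(alternatives[k]))
print("lowest total cost:", best)
```

The point of the example is the decision rule, not the numbers: on low-volume roads the "do nothing" or light-treatment option can dominate once user costs are included, while on busier segments user savings can justify larger agency expenditure.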
Abstract:
Introduction: Building online courses is a highly time-consuming task for the teachers of a single university. Universities working alone create high-quality courses but often cannot cover all pathological fields. Moreover, this often leads to duplication of contents among universities, representing a big waste of teacher time and energy. In 2011 we initiated a French university network for building mutualized online teaching pathology cases, and this network was extended in 2012 to Quebec and Switzerland. Method: Twenty French universities, University Laval in Quebec and the University of Lausanne in Switzerland are associated with this project. One e-learning Moodle platform (http://moodle.sorbonne-paris-cite.fr/) contains texts with URLs pointing toward virtual slides that are decentralized in several universities. Each university is responsible for its own slide scanning, slide storage and online display with virtual slide viewers. The Moodle website is hosted by PRES Sorbonne Paris Cité, and financial support for hardware has been obtained from UNF3S (http://www.unf3s.org/) and from PRES Sorbonne Paris Cité. Financial support for international fellowships has been obtained from CFQCU (http://www.cfqcu.org/). Results: The Moodle interface was explained to pathology teachers using web-based conferences with screen sharing. The teachers then added contents such as clinical cases, self-evaluations and other media, organized in several sections by student level and pathological field. Contents can be used for online learning or online preparation of subsequent courses in classrooms. In autumn 2013, one resident from Quebec spent 6 weeks in France and Switzerland and created original contents in inflammatory skin pathology. These contents are currently being validated by senior teachers and will be opened to pathology residents in spring 2014. All contents of the website can be accessed for free. Most contents just require anonymous connection, but some specific fields, especially those containing pictures obtained from patients who agreed to a teaching use only, require personal identification of the students. Also, students have to register to access the Moodle tests. All contents are written in French, but one case has been translated into English to illustrate this communication (http://moodle.sorbonne-pariscite.fr/mod/page/view.php?id=261) (use "login as a guest"). The Moodle test module allows many types of shared questions, making it easy to create personalized tests. Contents that are opened to students have been validated by an editorial committee composed of colleagues from the participating institutions. Conclusions: Future developments include other international fellowships, the next one being scheduled for one French resident from May to October 2014 in Quebec, with a study program centered on lung and breast pathology. It must be kept in mind that these e-learning programs highly depend on teachers' time, not only at these early steps but also later to update the contents. We believe that funding resident fellowships for developing online pathology teaching contents is a win-win situation: highly beneficial for the residents, who will improve their knowledge and way of thinking; for the teachers, who will worry less about access rights or image formats; and for the students, who will get courses fully adapted to their practice.
Abstract:
Background: The use of emergency hospital services (EHS) has increased steadily in Spain in the last decade while the number of immigrants has increased dramatically. Studies show that immigrants use EHS differently than native-born individuals, and this work investigates demographics, diagnoses and utilization rates of EHS in Lleida (Spain). Methods: Cross-sectional study of all the 96,916 EHS visits by patients 15 to 64 years old, attended during the years 2004 and 2005 in a public teaching hospital. Demographic data, diagnoses of the EHS visits, frequency of hospital admissions, mortality and diagnoses at hospital discharge were obtained. Utilization rates were estimated by group of origin. Poisson regression was used to estimate the rate ratios of being visited in the EHS with respect to the Spanish-born population. Results: Immigrants from low-income countries use EHS services more than the Spanish-born population. Differences in utilization patterns are particularly marked for Maghrebi men and women and sub-Saharan women. Immigrant males are at lower risk of being admitted to the hospital, as compared with Spanish-born males. On the other hand, immigrant women are at higher risk of being admitted. After excluding the visits with gynecologic and obstetric diagnoses, women from sub-Saharan Africa and the Maghreb are still at a higher risk of being admitted than their Spanish-born counterparts. Conclusion: In Lleida (Spain), immigrants use more EHS than the Spanish born population. Future research should indicate whether the same pattern is found in other areas of Spain and whether EHS use is attributable to health needs, barriers to access to the primary care services or similarities in the way immigrants access health care in their countries of origin.
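The rate ratios mentioned above come from a Poisson regression of visit counts with the population at risk as exposure. A minimal sketch of such a model, using invented aggregate figures and the statsmodels GLM interface; it is not the study's actual data or covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical aggregated data: EHS visits and population at risk by origin group,
# with the Spanish-born group as the reference category (all counts are invented).
df = pd.DataFrame({
    "group":      ["Spanish-born", "Maghreb", "Sub-Saharan Africa", "Other low-income"],
    "visits":     [60_000, 9_000, 3_500, 8_000],
    "population": [180_000, 14_000, 5_000, 20_000],
})

# Dummy variables for the non-reference groups; log(population) enters as the offset,
# so exponentiated coefficients are visit rate ratios versus the Spanish-born population.
X = pd.get_dummies(df["group"])[["Maghreb", "Sub-Saharan Africa", "Other low-income"]]
X = sm.add_constant(X.astype(float))
model = sm.GLM(df["visits"], X, family=sm.families.Poisson(),
               offset=np.log(df["population"]))
res = model.fit()
print(np.exp(res.params))
```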
Abstract:
The management and conservation of coastal waters in the Baltic is challenged by a number of complex environmental problems, including eutrophication and habitat degradation. Demands for a more holistic, integrated and adaptive framework of ecosystem-based management emphasize the importance of appropriate information on the status and changes of the aquatic ecosystems. The thesis focuses on the spatiotemporal aspects of environmental monitoring in the extensive and geomorphologically complex coastal region of SW Finland, where the acquisition of spatially and temporally representative monitoring data is inherently challenging. Furthermore, the region is subject to multiple human interests and uses. A holistic geographical approach is emphasized, as it is ultimately the physical conditions that set the frame for any human activity. Characteristics of the coastal environment were examined using water quality data from the database of the Finnish environmental administration and Landsat TM/ETM+ images. A basic feature of the complex aquatic environment in the Archipelago Sea is its high spatial and temporal variability; this foregrounds the importance of geographical information as a basis for environmental assessments. While evidence of a consistent water turbidity pattern was observed, the coastal hydrodynamic realm is also characterized by high spatial and temporal variability. It is therefore also crucial to consider the spatial and temporal representativeness of field monitoring data. Remote sensing, despite its restrictions, may facilitate the evaluation of hydrodynamic conditions in the coastal region and the spatial extrapolation of in situ data. Additionally, remotely sensed images can be used in the mapping of many of those coastal habitats that need to be considered in environmental management. With regard to surface water monitoring, only a small fraction of the currently available data stored in the Hertta-PIVET register can be used effectively in scientific studies and environmental assessments. Long-term consistent data collection from established sampling stations should be emphasized, but research-type seasonal assessments producing abundant data should also be encouraged. Thus a more comprehensive coordination of field work efforts is called for. The integration of remote sensing and various field measurement techniques would be especially useful in the complex coastal waters. The integration and development of monitoring systems in Finnish coastal areas also requires further scientific assessment of monitoring practices. A holistic approach to the gathering and management of environmental monitoring data could be a cost-effective way of serving a multitude of information needs, and would fit the holistic, ecosystem-based management regimes that are currently being strongly promoted in Europe.
Abstract:
This study describes unpublished research on improving the solubility of benznidazole through the formation of inclusion complexes. The cyclodextrins selected were αCD, βCD, γCD, HPβCD, RMβCD and SBβCD. All complexes were obtained in solution and presented 1:1 stoichiometry according to the phase-solubility diagrams. The highest association constants were obtained with RMβCD and SBβCD, which were therefore selected for the preparation of solid-state complexes. These were characterized using XRD, SEM and dissolution testing. The data obtained suggest the formation of complexes and indicate that these may provide a promising alternative way of developing solid dosage forms of the drug with suitable biopharmaceutical properties.
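For a linear (A_L-type) phase-solubility diagram with 1:1 stoichiometry, the association constant is commonly estimated with the Higuchi-Connors relation K(1:1) = slope / (S0 * (1 - slope)), where S0 is the intrinsic solubility. A small sketch of that calculation with invented solubility values; these are not the study's measurements.

```python
import numpy as np

# Hypothetical phase-solubility data: total drug solubility S_t (mol/L) measured at
# increasing cyclodextrin concentrations (all values invented for illustration).
cd_conc = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])                 # [CD], mol/L
s_total = np.array([0.00024, 0.00036, 0.00049, 0.00061, 0.00073, 0.00086])     # S_t, mol/L

slope, intercept = np.polyfit(cd_conc, s_total, 1)   # linear fit of the A_L-type diagram
s0 = s_total[0]                                      # intrinsic solubility S_0
k_11 = slope / (s0 * (1 - slope))                    # Higuchi-Connors 1:1 association constant
print(f"slope = {slope:.3f}, K(1:1) = {k_11:.0f} M^-1")
```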
Abstract:
PIXE (Particle Induced X-ray Emission spectrometry) was used to analyse stem bark and stem wood of Scots pine, Norway spruce and Silver birch. Thick samples were irradiated, in laboratory atmosphere, with 3 MeV protons, and the beam current was measured indirectly using a photomultiplier (PM) tube. Both point scans and bulk analyses were performed with the 1 mm diameter proton beam. In bulk analyses, whole bark and sectors of discs of the stem wood were dry ashed at 550 ˚C. The ashes were homogenised by shaking and prepared as target pellets for PIXE analysis. This procedure generated representative samples for analysis, and the enrichment also enabled quantification of some additional trace elements. The ash contents obtained as a by-product of the sample preparation procedure also proved to be of great importance in the evaluation of results in environmental studies. Spot scans from the pith of pine wood outwards showed clearly the highest concentrations of manganese, calcium and zinc in the first spot irradiated, 2-3 times higher than in the surrounding wood. For stem wood from the crown part of a pine, this higher concentration level was found in the first four spots/mm, including the pith and the two following growth rings. Zinc showed increasing concentrations outwards in the sapwood of the pine stem, with the overall lowest concentrations in the inner half of the sapwood. This could indicate migration of this element out of sapwood undergoing transformation to heartwood. Point scans across the sapwood of pine and spruce showed more distinct variations in concentrations than in heartwood. Higher concentrations of e.g. zinc, calcium and manganese were found in earlywood than in the denser latewood. Very high concentrations of iron and copper were also seen in some earlywood increments. The ash content of stem bark is up to an order of magnitude higher than that of the stem wood. However, when the elemental concentrations in the ashes of bark and wood of the same disc were compared, they were very similar, at least for trees growing at sites with no anthropogenic contamination from the atmosphere. The largest difference was obtained for calcium, which appeared at about twice as high a concentration in the bark ash as in the wood ash (ratio of 2). Pine bark is often used in monitoring of atmospheric pollution, where concentrations in bark samples are compared. Here an alternative approach is suggested: bark and the underlying stem wood of a pine tree are dry ashed and analysed. The elemental concentration in the bark ash is then compared to the concentration of the same element in the wood ash. Comparing bark to wood includes a normalisation for the varying availability of an element from the soil at different sites. When this comparison is done for the ashes of the materials, a normalisation is also obtained for the general and locally different enrichment of inorganic elements from wood to bark. Already a ratio >2 between the concentration in the bark ash and the concentration in the wood ash could indicate atmospheric pollution. For monitoring where bark is used, this way of "inwards" comparison is suggested, instead of comparing to results from analyses of bark from other trees (reference areas) growing at sites with different soil and, locally, different climate conditions. This approach also enables evaluation of atmospheric pollution from sampling of only relatively few individual trees, preferably during forest felling.
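The suggested "inwards" comparison reduces to computing, element by element, the ratio between the concentration in the bark ash and that in the wood ash of the same disc, and flagging ratios clearly above 2. A toy sketch with invented concentrations (not measured values):

```python
# Illustration of the "inwards" comparison: bark ash vs. wood ash of the same disc.
# Concentrations below (mg/kg in ash) are invented for the example.
bark_ash = {"Ca": 310_000, "Mn": 9_500, "Zn": 2_400, "Fe": 4_100, "Pb": 160}
wood_ash = {"Ca": 155_000, "Mn": 9_000, "Zn": 2_200, "Fe": 1_300, "Pb": 20}

for element in bark_ash:
    ratio = bark_ash[element] / wood_ash[element]
    flag = "possible atmospheric input" if ratio > 2 else "consistent with uptake from soil"
    print(f"{element}: bark ash / wood ash = {ratio:4.1f}  ({flag})")
```

Because both concentrations come from the same tree and the same ashing procedure, site-to-site differences in soil availability and in the wood-to-bark enrichment largely cancel, which is the advantage claimed over comparing bark samples across reference areas.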
Abstract:
The aim of this study is to analyse the content of the interdisciplinary conversations in Göttingen between 1949 and 1961. The task is to compare models for describing reality presented by quantum physicists and theologians. Descriptions of reality in different disciplines are conditioned by the development of the concept of reality in philosophy, physics and theology. Our basic problem is stated in the question: How is it possible for the intramental image to match the external object? Cartesian knowledge presupposes clear and distinct ideas in the mind prior to observation, resulting in a true correspondence between the observed object and the cogitative observing subject. The Kantian synthesis between rationalism and empiricism emphasises an extended character of representation. The human mind is not a passive receiver of external information, but is actively constructing intramental representations of external reality in the epistemological process. Heidegger's aim was to reach a more primordial mode of understanding reality than is possible in the Cartesian subject-object distinction. In Heidegger's philosophy, ontology as being-in-the-world is prior to knowledge concerning being. Ontology can be grasped only in the totality of being (Dasein), not only as an object of reflection and perception. According to Bohr, quantum mechanics introduces an irreducible loss in representation, which classically understood is a deficiency in knowledge. The conflicting aspects (particle and wave pictures) in our comprehension of physical reality cannot be completely accommodated into an entire and coherent model of reality. What Bohr rejects is not realism, but the classical Einsteinian version of it. By the use of complementary descriptions, Bohr tries to save a fundamentally realistic position. The fundamental question in Barthian theology is the problem of God as an object of theological discourse. Dialectics is Barth's way of expressing knowledge of God while avoiding a speculative theology and a human-centred religious self-consciousness. In Barthian theology, the human capacity for knowledge, independently of revelation, is insufficient to comprehend the being of God. Our knowledge of God is real knowledge in revelation, and our words are made to correspond with the divine reality in an analogy of faith. The point of the Bultmannian demythologising programme was to claim the real existence of God beyond our faculties. We cannot simply define God as a human ideal of existence or a focus of values. The theological programme of Bultmann emphasised the notion that we can talk meaningfully of God only insofar as we have existential experience of his intervention. Common to all these twentieth-century philosophical, physical and theological positions is a form of anti-Cartesianism. Consequently, in regard to their epistemology, they can be labelled antirealist. This common insight also made it possible to find a common meeting point between the different disciplines. In this study, the different standpoints from all three areas and the conversations in Göttingen are analysed in the framework of realism/antirealism. One of the first tasks in the Göttingen conversations was to analyse the nature of the likeness between the complementary structures in quantum physics introduced by Niels Bohr and the dialectical forms in the Barthian doctrine of God.
The reaction against epistemological Cartesianism, metaphysics of substance and deterministic description of reality was the common point of departure for theologians and physicists in the Göttingen discussions. In his complementarity, Bohr anticipated the crossing of traditional epistemic boundaries and the generalisation of epistemological strategies by introducing interpretative procedures across various disciplines.