918 results for Project reporting tools
Abstract:
Objective: We sought to determine whether a reported history of childhood adversity is associated with components of the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP-III)-defined metabolic syndrome in adults with mood disorders. Method: This was a cross-sectional analysis of adult outpatients (N = 373; n = 230 female, n = 143 male; mean age [SD] = 42.86 [14.43]) from the International Mood Disorders Collaborative Project (University of Toronto and Cleveland Clinic) with DSM-IV-defined major depressive disorder and bipolar I/II disorder. Childhood adversity was measured with the Klein Trauma & Abuse-Neglect self-report scale. The groups with and without childhood adversity were compared to determine possible differences in the rates of metabolic syndrome and its components. Logistic and linear regressions adjusted for age, sex, education, employment status, and smoking were used to evaluate the association between childhood adversity and components of metabolic syndrome. Results: For the full sample, 83 subjects (22.25%) met criteria for metabolic syndrome. Individuals reporting a history of any childhood adversity had higher systolic and diastolic blood pressure (systolic: p = 0.040; diastolic: p = 0.038). Among subjects with a history of sexual abuse, a significantly higher proportion met criteria for obesity (45.28% vs. 32.88%; p = 0.010); a trend toward overweight was found for subjects with a history of physical abuse (76.32% vs. 63.33%; p = 0.074), although this relationship did not remain significant after adjusting for potential confounders. There was no statistically significant difference in the overall rate of dyslipidemia and/or metabolic syndrome between subjects with and without childhood adversity. Conclusion: The results herein provide preliminary evidence suggesting that childhood adversity is associated with metabolic syndrome components in individuals with mood disorders. (Int J Psychiatry Med 2012;43:165-177)
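The covariate-adjusted models described above can be illustrated with a brief sketch. This is a generic illustration and not the authors' analysis code; the file name and column names (adversity, obesity, age, sex, education, employed, smoker) are hypothetical.

```python
# Sketch: logistic regression of one metabolic-syndrome component (obesity)
# on reported childhood adversity, adjusted for the covariates named in the
# abstract. All names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("imdcp_subjects.csv")  # hypothetical cohort extract

model = smf.logit(
    "obesity ~ adversity + age + C(sex) + C(education) + C(employed) + C(smoker)",
    data=df,
).fit()

print(model.summary())
print("Adjusted OR for adversity:", np.exp(model.params["adversity"]))
```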
Abstract:
The present study is part of the EU Integrated Project “GEHA – Genetics of Healthy Aging” (Franceschi C et al., Ann N Y Acad Sci. 1100: 21-45, 2007), whose aim is to identify genes involved in healthy aging and longevity, which allow individuals to survive to advanced age in good cognitive and physical function and in the absence of major age-related diseases.
Aims. The major aims of this thesis were the following:
1. To outline the recruitment procedure for 90+ Italian siblings performed by the recruiting units of the University of Bologna (UNIBO) and Rome (ISS). The procedures related to the following items necessary to perform the study were described and commented on: identification of the eligible area for recruitment, demographic aspects related to the need to obtain census lists of 90+ siblings, mail and phone contact with 90+ subjects and their families, bioethics aspects of the whole procedure, standardization of the recruitment methodology, and set-up of a detailed flow chart to be followed by the European recruitment centres (obtaining the informed consent form, anonymization of data by using a special code, how to perform the interview, how to collect the blood, how to enter data in the GEHA Phenotypic Data Base hosted at Odense).
2. To provide an overview of the phenotypic characteristics of 90+ Italian siblings recruited by the recruiting units of the University of Bologna (UNIBO) and Rome (ISS). The following items were addressed: socio-demographic characteristics, health status, cognitive assessment, physical conditions (handgrip strength test, chair-stand test, physical ability including ADL, vision and hearing ability, movement ability and doing light housework), life-style information (smoking and drinking habits) and subjective well-being (attitude towards life). Moreover, haematological parameters collected in the 90+ sibpairs as optional parameters by the Bologna and Rome recruiting units were used for a more comprehensive evaluation of the results obtained using the above-mentioned phenotypic characteristics reported in the GEHA questionnaire.
3. To assess the health/functional status of 90+ Italian siblings on the basis of three classification methods proposed in previous studies on centenarians, which are based on:
• actual functional capabilities (ADL, SMMSE, visual and hearing abilities) (Gondo et al., J Gerontol. 61A (3): 305-310, 2006);
• actual functional capabilities and morbidity (ADL, ability to walk, SMMSE, presence of cancer, stroke, renal failure, anaemia, and liver diseases) (Franceschi et al., Aging Clin Exp Res, 12:77-84, 2000);
• retrospectively collected data about past history of morbidity and age of disease onset (hypertension, heart disease, diabetes, stroke, cancer, osteoporosis, neurological diseases, chronic obstructive pulmonary disease and ocular diseases) (Evert et al., J Gerontol A Biol Sci Med Sci. 58A (3): 232-237, 2003).
First, these available models to define the health status of long-living subjects were applied to the sample and, since the classifications by Gondo and Franceschi are both based on present functional status, they were compared in order to better recognize the healthy aging phenotype and to identify the best group of 90+ subjects out of the entire studied population.
4. To investigate the concordance of health and functional status among 90+ siblings in order to divide sibpairs into three categories: the best (both sibs in good shape), the worst (both sibs in bad shape) and an intermediate group (one sib in good shape and the other in bad shape). Moreover, the evaluation aimed to discover which variables are concordant among siblings; concordant variables could thus be considered familial variables (determined by the environment or by genetics).
5. To perform a survival analysis using mortality data at 1 January 2009 from the follow-up as the main outcome and selected functional and clinical parameters as explanatory variables.
Methods. A total of 765 90+ Italian subjects recruited by the UNIBO (549 90+ siblings, belonging to 258 families) and ISS (216 90+ siblings, belonging to 106 families) recruiting units are included in the analysis. Each subject was interviewed according to a standardized questionnaire, comprising extensively utilized questions that have been validated in previous European studies on elderly subjects and covering demographic information, lifestyle, living conditions, cognitive status (SMMSE), mood, health status and anthropometric measurements. Moreover, subjects were asked to perform some physical tests (Hand Grip Strength test and Chair Standing test), and a sample of about 24 mL of blood was collected and then processed according to a common protocol for the preparation and storage of DNA aliquots.
Results. The main findings from the analysis are the following:
- a standardized protocol to assess the cognitive status, physical performance and health status of European nonagenarian subjects was set up in accordance with ethical requirements, and it is available as a reference for other studies in this field;
- GEHA families are enriched in long-living members and extreme survival, and represent an appropriate model for the identification of genes involved in healthy aging and longevity;
- two simplified sets of criteria to classify 90+ siblings according to their health status were proposed, as operational tools for distinguishing healthy from non-healthy subjects;
- cognitive and functional parameters have a major role in categorizing 90+ siblings according to their health status;
- parameters such as education and good physical abilities (the ability to walk 500 metres, the ability to go up and down stairs, high scores on the hand grip and chair-stand tests) are associated with good health status (defined as “cognitive unimpairment and absence of disability”);
- male nonagenarians show a more homogeneous phenotype than females and, though far fewer in number, tend to be healthier than females;
- in males, good health status is not protective for survival, confirming the male-female health-survival paradox;
- survival after age 90 depended mainly on intact cognitive status and absence of functional disabilities;
- haemoglobin and creatinine levels are both associated with longevity;
- the most concordant items among 90+ siblings are related to functional status, indicating that they contain a familial component. It remains to be investigated to what extent this familial component is determined by genetics, by the environment, or by the interaction between genetics, environment and chance.
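One common way to carry out the kind of survival analysis described above (mortality at follow-up as outcome, functional and clinical parameters as explanatory variables) is a Cox proportional hazards model. The sketch below is a generic illustration under that assumption; the file name and column names are hypothetical, not the GEHA variable names.

```python
# Sketch: Cox regression of survival after age 90 on functional/clinical
# covariates. Column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("geha_90plus_followup.csv")  # hypothetical follow-up extract
covariates = ["smmse_score", "adl_disability", "handgrip_kg",
              "haemoglobin", "creatinine", "sex"]

cph = CoxPHFitter()
cph.fit(df[["months_to_event", "died"] + covariates],
        duration_col="months_to_event", event_col="died")
cph.print_summary()  # hazard ratios for each explanatory variable
```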
Conclusions. In conclusion, we can state that this study, in accordance with the main objectives of the whole GEHA project, represents one of the first attempts to identify the biological and non-biological determinants of successful/unsuccessful aging and longevity. Here, the analysis was performed on 90+ siblings recruited in Northern and Central Italy, and it can be used as a reference for other studies in this field on the Italian population. Moreover, it contributed to the definition of “successful” and “unsuccessful” aging, and categorising a very large cohort of our most elderly subjects into “successful” and “unsuccessful” groups provided an unrivalled opportunity to detect some of the basic genetic/molecular mechanisms which underpin good health as opposed to chronic disability. Discoveries concerning the biological determinants of healthy aging represent a real possibility to identify new markers to be used for the identification of subgroups of old European citizens at higher risk of developing age-related diseases and disabilities, and to direct major preventive medicine strategies for the new epidemic of chronic disease in the 21st century.
Abstract:
The construction and use of multimedia corpora have been advocated for some time in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering experience aimed at applying a data-driven methodology to the study of audiovisual translation (AVT), similarly to what has been done in the last few decades in the macro-field of Translation Studies. The research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna’s Department of Interdisciplinary Studies in Translation, Languages and Culture. In fact, in order to quantify strategies of linguistic transfer of an AV product, we need to take into consideration not only the linguistic aspect of such a product but also all the meaning-making resources deployed in the filmic text. Given that one major benefit of Forlixt 1 is the combination of audiovisual and textual data, the corpus allows the user to access primary data for scientific investigation and no longer rely on pre-processed material such as traditional annotated transcriptions. Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means, and methods adopted in their translation. By drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities was presented (dubbing, voice-over, interlinguistic and intralinguistic subtitling, audio-description, etc.) in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contributions of various theories (e.g. Descriptive Translation Studies, Polysystem Theory) to the application of Corpus Linguistics’ methods and tools to the field of Translation Studies. In particular, we discussed how the use of corpora can help reduce the gap between qualitative and quantitative approaches. Subsequently, we reviewed the tools traditionally employed by Corpus Linguistics for the construction of traditional “written language” corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we reviewed existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate translation. The third chapter reviews Forlixt 1's main development steps from a technical (IT design principles, data query functions) and methodological point of view, laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we described the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database from a pedagogical perspective.
The fourth chapter lists the specific compilation criteria adopted, as well as statistics for the two sub-corpora, presenting data broken down by language pair (French-Italian and German-Italian) and genre (cinema comedies, television soap operas and crime series). Next, we concentrated on the discussion of the results obtained from the analysis of summary tables reporting the frequency of categories applied to the French-Italian sub-corpus. The detailed observation of the distribution of categories identified in the original and dubbed corpus allowed us to empirically confirm some of the theories put forward in the literature, notably concerning the nature of the filmic text, the dubbing process and the features of Italian dubbed language. This was possible by looking into some of the most problematic aspects, such as the rendering of sociolinguistic variation. The corpus also allowed us to consider aspects neglected so far, such as pragmatic, prosodic, kinetic, facial, and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macrotranslation trends were made for each type of sub-genre considered (cinematic and TV genre). On the grounds of this first quantitative investigation, the fifth chapter set out to examine the data further by applying ad hoc models of analysis. Given the virtually infinite number of combinations of the categories adopted, and of the latter with searchable textual units, three possible qualitative and quantitative methods were designed, each of which concentrated on a particular translation dimension of the filmic text. The first was the cultural dimension, which specifically focused on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension, exploring the occurrence of phrasal verbs in the Italian dubbed corpus and ascertaining the influence of possible semiotic traits, such as gestures and facial expressions, on the adoption of related translation strategies. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality graphic and iconic elements were translated into Italian from an original corpus of both German and French films. After reviewing the main translation techniques at work, an exhaustive account of possible causes for their non-translation was also provided. By way of conclusion, the discussion of the results obtained from the distribution of annotation categories in the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline possible advantages and drawbacks related to the adoption of a corpus-based approach to AVT studies. Even though possible updates and improvements were proposed to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument that makes it possible to carry out empirically sound contrastive studies that may be usefully replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two disciplines in which their use still lacks systematic investigation.
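The frequency comparisons described above (summary tables of annotation categories in the original versus dubbed sides of the French-Italian sub-corpus) lend themselves to a simple tabulation. The sketch below is a generic illustration that assumes a hypothetical flat export of annotations (columns language_pair, version, category); it is not the actual Forlixt 1 query interface.

```python
# Sketch: comparing annotation-category frequencies between original and
# dubbed versions of a sub-corpus. The export format is a hypothetical
# assumption for illustration.
import pandas as pd

ann = pd.read_csv("forlixt_annotations.csv")  # hypothetical flat export

fr_it = ann[ann.language_pair == "FR-IT"]
freq = (fr_it.groupby(["version", "category"]).size()
             .unstack("version", fill_value=0))          # rows: category
freq["dubbed_vs_original"] = freq["dubbed"] / freq["original"].clip(lower=1)

# Categories over- or under-represented in the dubbed corpus
print(freq.sort_values("dubbed_vs_original", ascending=False))
```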
Abstract:
In recent years, locating people and objects and communicating with them in real time has become a common part of everyday life. Nowadays, the state of the art of location systems for indoor environments lacks a dominant technology, unlike outdoor environments, where GPS is the dominant technology. In fact, each indoor location technology presents a set of features that does not allow its use across all application scenarios, but thanks to its characteristics it can coexist well with other similar technologies, without being dominant or more widely adopted than the other indoor location systems. In this context, the European project SELECT studies the opportunity of collecting all these different features in an innovative system which can be used in a large number of application scenarios. The goal of the project is to realize a wireless system in which a network of fixed readers is able to query one or more tags attached to objects to be located. The SELECT consortium is composed of European institutions and companies, including Datalogic S.p.A. and CNIT, which deal with the software and firmware development of the baseband receiving section of the readers, whose function is to acquire and process the information received from generic tagged objects. Since the SELECT project has highly innovative content, one of the key stages of the system design is the debug phase. This work aims to study and develop tools and techniques for debugging the firmware of the baseband receiving section of the readers.
Abstract:
The research presented in my PhD thesis is part of a wider European project, FishPopTrace, focused on the traceability of fish populations and products. My work was aimed at developing and analyzing novel genetic tools for a widely distributed marine fish species, the European hake (Merluccius merluccius), in order to investigate population genetic structure and explore potential applications to traceability scenarios. A total of 395 SNPs (Single Nucleotide Polymorphisms) were discovered from a massive collection of Expressed Sequence Tags, obtained by high-throughput sequencing, and validated on 19 geographic samples from the Atlantic and the Mediterranean. Genome-scan approaches were applied to identify polymorphisms in genes potentially under divergent selection (outlier SNPs), showing higher genetic differentiation among populations with respect to the average observed across loci. Comparative analyses of population structure were carried out on putative neutral and outlier loci at wide (Atlantic and Mediterranean samples) and regional (samples within each basin) spatial scales, to disentangle the effects of demographic and adaptive evolutionary forces on the genetic structure of European hake populations. Results demonstrated the potential of outlier loci to unveil fine-scale genetic structure, possibly identifying locally adapted populations, despite the weak signal shown by putative neutral SNPs. The application of outlier SNPs within the framework of fishery resources management was also explored. A minimum panel of SNP markers showing maximum discriminatory power was selected and applied to a traceability scenario aimed at identifying the basin (and hence the stock) of origin, Atlantic or Mediterranean, of individual fish. This case study illustrates how molecular analytical technologies have operational potential in real-world contexts and, more specifically, potential to support fisheries control and enforcement and fish and fish product traceability.
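A minimal sketch of the kind of panel selection and basin-assignment test described above, under simplifying assumptions: genotypes coded 0/1/2, a crude allele-frequency-difference statistic as a stand-in for the outlier/FST ranking, and a generic classifier for assignment. The file name, column names and panel size are hypothetical.

```python
# Sketch: rank SNPs by a crude differentiation statistic, keep the most
# informative ones, and estimate assignment accuracy to basin of origin.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

geno = pd.read_csv("hake_genotypes.csv")           # rows: fish; columns: SNPs + "basin"
X, y = geno.drop(columns="basin"), geno["basin"]   # basin: "ATL" or "MED"

# Crude per-SNP differentiation: squared allele-frequency difference
p_atl = X[y == "ATL"].mean() / 2
p_med = X[y == "MED"].mean() / 2
rank = ((p_atl - p_med) ** 2).sort_values(ascending=False)

panel = rank.head(20).index                        # "minimum panel" of top SNPs
clf = LogisticRegression(max_iter=1000)
print("CV assignment accuracy:",
      cross_val_score(clf, X[panel], y, cv=5).mean())
```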
Abstract:
Background. Through this paper, we present the initial steps for the creation of an integrated platform for the provision of a series of eHealth tools and services to both citizens and travellers in isolated areas of the southeast Mediterranean, and on board ships travelling across it. The platform was created through an INTERREG IIIB ARCHIMED project called INTERMED.
Methods. The support of primary healthcare, home care and the continuous education of physicians are the three major issues that the proposed platform is trying to facilitate. The proposed system is based on state-of-the-art telemedicine systems and is able to provide the following healthcare services: i) telecollaboration and teleconsultation services between remotely located healthcare providers, ii) telemedicine services in emergencies, iii) home telecare services for "at risk" citizens such as the elderly and patients with chronic diseases, and iv) eLearning services for continuous training through seminars of both healthcare personnel (physicians, nurses etc.) and persons supporting "at risk" citizens. These systems support data transmission over simple phone lines, internet connections, integrated services digital network/digital subscriber lines, satellite links, mobile networks (GPRS/3G), and wireless local area networks. The data correspond, among others, to voice, vital biosignals, still medical images, video, and data used by eLearning applications. The proposed platform comprises several systems, each supporting different services. These were integrated using a common data storage and exchange scheme in order to achieve system interoperability in terms of software, language and national characteristics.
Results. The platform has been installed and evaluated in different rural and urban sites in Greece, Cyprus and Italy. The evaluation was mainly related to technical issues and user satisfaction. The selected sites include rural health centres, ambulances, homes of "at risk" citizens, and a ferry.
Conclusions. The results demonstrated the functionality and usefulness of the platform in various rural places in Greece, Cyprus and Italy. However, further actions are needed to enable the local healthcare systems and the different population groups to become familiar with, and use in their everyday lives, mature technological solutions for the provision of healthcare services.
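As an illustration of what a common data storage and exchange scheme might look like at the record level, the sketch below defines a single shared measurement record that the different subsystems could serialise and exchange. The field names and format are assumptions for illustration only, not the INTERMED platform's actual schema.

```python
# Sketch of a shared, serialisable measurement record that telecollaboration,
# home-telecare and emergency subsystems could all store and exchange.
# Field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BiosignalRecord:
    patient_id: str
    site: str            # e.g. rural health centre, ambulance, ferry
    signal_type: str     # "ECG", "SpO2", "blood_pressure", ...
    value: float
    unit: str
    recorded_at: str     # ISO 8601 timestamp
    language: str        # supports the multi-national deployment

record = BiosignalRecord("P-001", "rural_health_centre", "SpO2", 96.0, "%",
                         datetime.now(timezone.utc).isoformat(), "el")
print(json.dumps(asdict(record)))  # serialised for storage or transmission
```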
Abstract:
Advances in food transformation have dramatically increased the diversity of products on the market and, consequently, exposed consumers to a complex spectrum of bioactive nutrients whose potential risks and benefits have mostly not been confidently demonstrated. Therefore, tools are needed to efficiently screen products for selected physiological properties before they enter the market. NutriChip is an interdisciplinary modular project funded by the Swiss programme Nano-Tera, which groups scientists from several areas of research with the aim of developing analytical strategies that will enable functional screening of foods. The project focuses on postprandial inflammatory stress, which potentially contributes to the development of chronic inflammatory diseases. The first module of the NutriChip project is composed of three in vitro biochemical steps that mimic the digestion process, intestinal absorption, and subsequent modulation of immune cells by the bioavailable nutrients. The second module is a miniaturised form of the first module (gut-on-a-chip) that integrates a microfluidic-based cell co-culture system and super-resolution imaging technologies to provide a physiologically relevant fluid flow environment and allows sensitive real-time analysis of the products screened in vitro. The third module aims at validating the in vitro screening model by assessing the nutritional properties of selected food products in humans. Because of the immunomodulatory properties of milk as well as its amenability to technological transformation, dairy products have been selected as model foods. The NutriChip project reflects the opening of food and nutrition sciences to state-of-the-art technologies, a key step in the translation of transdisciplinary knowledge into nutritional advice.
Abstract:
In this project we developed conductive thermoplastic resins by adding varying amounts of three different carbon fillers: carbon black (CB), synthetic graphite (SG) and multi-walled carbon nanotubes (CNT) to a polypropylene matrix for application as fuel cell bipolar plates. This component of fuel cells provides mechanical support to the stack, circulates the gases that participate in the electrochemical reaction within the fuel cell and allows for removal of excess heat from the system. The materials fabricated in this work were tested to determine their mechanical and thermal properties. These materials were produced by adding varying amounts of single carbon fillers to a polypropylene matrix (2.5 to 15 wt.% Ketjenblack EC-600 JD carbon black, 10 to 80 wt.% Asbury Carbon's Thermocarb TC-300 synthetic graphite, and 2.5 to 15 wt.% of Hyperion Catalysis International's FIBRIL™ multi-walled carbon nanotubes). In addition, composite materials containing combinations of these three fillers were produced. The thermal conductivity results showed an increase in both through-plane and in-plane thermal conductivities, with the largest increase observed for synthetic graphite. The Department of Energy (DOE) had previously set a thermal conductivity goal of 20 W/m·K, which was surpassed by formulations containing 75 wt.% and 80 wt.% SG, yielding in-plane thermal conductivity values of 24.4 W/m·K and 33.6 W/m·K, respectively. In addition, composites containing 2.5 wt.% CB, 65 wt.% SG, and 6 wt.% CNT in PP had an in-plane thermal conductivity of 37 W/m·K. Flexural and tensile tests were conducted. All composite formulations exceeded the flexural strength target of 25 MPa set by DOE. The tensile and flexural moduli of the composites increased with higher concentrations of carbon fillers. Carbon black and synthetic graphite caused a decrease in the tensile and flexural strengths of the composites; however, carbon nanotubes increased the composite tensile and flexural strengths. Mathematical models were applied to estimate through-plane and in-plane thermal conductivities of single- and multiple-filler formulations, and the tensile modulus of single-filler formulations. For thermal conductivity, Nielsen's model yielded accurate values when compared to experimental results obtained through the flash method. For prediction of tensile modulus, Nielsen's model yielded the smallest error between the predicted and experimental values. The second part of this project consisted of the development of a curriculum in Fuel Cell and Hydrogen Technologies to address different educational barriers identified by the Department of Energy. Through the creation of new courses and enterprise programs in the areas of fuel cells and the use of hydrogen as an energy carrier, we introduced engineering students to the new technologies, policies and challenges associated with this alternative energy. Feedback provided by students participating in these courses and enterprise programs indicates positive acceptance of the different educational tools. Results obtained from a survey administered to students after participating in these courses showed an increase in knowledge and awareness of energy fundamentals, which indicates that the modules developed in this project are effective in introducing students to alternative energy sources.
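For reference, the Lewis-Nielsen relation in its commonly published form can be written as a short function. Whether this exact parameterisation matches the one used in this work is an assumption, and the numbers in the usage example are illustrative, not the measured values reported above.

```python
# Sketch of the Lewis-Nielsen model for composite thermal conductivity,
# in its commonly published form. All example values are illustrative.
def lewis_nielsen_conductivity(k_m, k_f, phi, A, phi_max):
    """k_m, k_f: matrix/filler conductivity (W/m.K); phi: filler volume
    fraction; A: shape factor; phi_max: maximum packing fraction."""
    B = (k_f / k_m - 1.0) / (k_f / k_m + A)
    psi = 1.0 + phi * (1.0 - phi_max) / phi_max**2
    return k_m * (1.0 + A * B * phi) / (1.0 - B * psi * phi)

# Illustrative call: polypropylene-like matrix with a graphite-like filler
print(lewis_nielsen_conductivity(k_m=0.25, k_f=300.0, phi=0.45,
                                 A=5.0, phi_max=0.52))  # ~4 W/m.K
```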
Abstract:
In Panama, one of the Environmental Health (EH) Sector’s primary goals is to improve the health of rural Panamanians by helping them to adopt behaviors and practices that improve access to and use of sanitation systems. In pursuit of this goal, the EH sector has used participatory development models to improve hygiene and increase access to latrines through volunteer-managed latrine construction projects. Unfortunately, there is little understanding of the long-term sustainability of these interventions after the volunteers have completed their service. With the Peace Corps adapting its Monitoring, Reporting, and Evaluation procedures, it is appropriate to evaluate the sustainability of sanitation interventions and to offer recommendations for adapting the EH training program, project management, and evaluation procedures. Recognizing the need for evaluation of past latrine projects, the author performed a post-project assessment of 19 pit latrine projects using participatory analysis methodologies. First, the author reviewed volunteers’ perspectives on pit latrine projects in a survey. Then, for comparison, the author performed a survey of latrine projects using a benchmarking scoring system to rate solid waste management, drainage, latrine siting, latrine condition, and hygiene. It was observed that the Sanitation WASH matrix created by the author was an effective tool for evaluating the efficacy of sanitation interventions. Overall, more than 75% of the latrines constructed were in use. However, there were some areas where improvements could be made in both latrine construction and health and hygiene. The latrines scored poorly on the indicators related to the privacy structure and seat covers. Interestingly, those are the two items least likely to be included in project subsidies. Furthermore, scores for hygiene-related indicators were low, particularly those related to hand washing and cleanliness of the kitchen, indicating potential for improvement in hygiene education. Based on these outcomes, the EH sector should consider including subsidies and standardized designs for privacy structures and seat covers for latrines. In addition, the universal adoption of contracts and/or deposits for project beneficiaries is expected to improve the completion of latrines. In order to address the low scores on the health and hygiene indicators, the EH sector should adapt volunteer training, in addition to standardizing health and hygiene intervention procedures. In doing so, the sector should mimic the Community Health Club model, which has shown success in improving health and hygiene indicators, and use a training session plan format similar to those in the Water Committee Seminar manual. Finally, the sector should have an experienced volunteer dedicated to program oversight and post-project monitoring and evaluation.
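A minimal sketch of how a benchmark-style score over the five indicator groups named above could be computed. The 0-3 scale, equal weighting and indicator names are assumptions for illustration, not the author's actual Sanitation WASH matrix.

```python
# Sketch: aggregate latrine-project score as a percentage of possible points.
# Scale, weights and indicator names are illustrative assumptions.
INDICATORS = ["solid_waste", "drainage", "siting", "condition", "hygiene"]

def latrine_score(ratings: dict[str, int], max_points: int = 3) -> float:
    """Return the project's score as a percentage of the possible points."""
    earned = sum(ratings[i] for i in INDICATORS)
    return 100.0 * earned / (max_points * len(INDICATORS))

example = {"solid_waste": 2, "drainage": 3, "siting": 3,
           "condition": 1, "hygiene": 1}
print(f"{latrine_score(example):.0f}% of possible points")  # 67%
```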
Abstract:
Nonallergic hypersensitivity and allergic reactions are part of the many different types of adverse drug reactions (ADRs). Databases exist for the collection of ADRs. Spontaneous reporting makes up the core data-generating system of pharmacovigilance, but there is a large under-estimation of allergy/hypersensitivity drug reactions. A specific database is therefore required for drug allergy and hypersensitivity using standard operating procedures (SOPs), as the diagnosis of drug allergy/hypersensitivity is difficult and current pharmacovigilance algorithms are insufficient. Although difficult, the diagnosis of drug allergy/hypersensitivity has been standardized by the European Network for Drug Allergy (ENDA) under the aegis of the European Academy of Allergology and Clinical Immunology and SOPs have been published. Based on ENDA and Global Allergy and Asthma European Network (GA²LEN, EU Framework Programme 6) SOPs, a Drug Allergy and Hypersensitivity Database (DAHD®) has been established under FileMaker® Pro 9. It is already available online in many different languages and can be accessed using a personal login. GA²LEN is a European network of 27 partners (16 countries) and 59 collaborating centres (26 countries), which can coordinate and implement the DAHD across Europe. The GA²LEN-ENDA-DAHD platform interacting with a pharmacovigilance network appears to be of great interest for the reporting of allergy/hypersensitivity ADRs in conjunction with other pharmacovigilance instruments.
Abstract:
eLearning supports education in certain disciplines. Here, we report on novel eLearning concepts, techniques, and tools to support education in Software Engineering, a subdiscipline of computer science. We call this "Software Engineering eLearning". On the other hand, software support is a substantial prerequisite for eLearning in any discipline. Thus, Software Engineering techniques have to be applied to develop and maintain those software systems. We call this "eLearning Software Engineering". Both aspects have been investigated in a large joint BMBF-funded research project termed MuSofT (Multimedia in Software Engineering). The main results are summarized in this paper.
Abstract:
The global World Overview of Conservation Approaches and Technologies (WOCAT) initiative has developed standardised tools and methods to compile and evaluate available knowledge about sustainable land management (SLM). This knowledge is now combined and enriched with audiovisual information in order to give a voice to land users, reach a broad range of stakeholders, and assist in scaling up SLM to reverse trends of degradation, desertification, and drought. Five video products, adapted to the needs of different target groups, were created and embedded in already existing platforms for SLM knowledge sharing, such as the WOCAT database and a Google Earth application. A pilot project was carried out in Kenya and Tajikistan to verify ideas and tools while at the same time assessing the usefulness of the suggested products on the ground. Video has the potential to bridge the gap between different actor groups and enable communication and sharing on different levels and scales: locally, regionally, and globally. Furthermore, it is an innovative tool to link local and scientific knowledge, raise awareness, and support advocacy for SLM. Keywords: Sustainable Land Management (SLM), knowledge sharing, audiovisual messages, video, World Overview of Conservation Approaches and Technologies (WOCAT)
Abstract:
Tajikistan is particularly exposed to the risks of climate change. Its widely degraded landscapes are poorly prepared to cope with changes in precipitation patterns, increased temperatures, droughts, and the spread of pests and disease. Sustainable land management (SLM) provides a “basket of opportunities” to address these challenges, particularly for increasing land productivity, improving livelihoods, and protecting ecosystems. Within the Pilot Program for Climate Resilience (PPCR) in Tajikistan, 70 SLM technologies and approaches for implementing SLM were documented with the World Overview of Conservation Approaches and Technologies (WOCAT) tools in 2011. For this purpose a climate change adaptation module was developed and tested in order to enhance understanding of the climate change resilience of SLM practices, and community workshops were conducted on adaptation mechanisms used by rural communities in Tajikistan. The analysis came up with four guiding principles for applying SLM to adapt to climate change: 1. diversification of land use technologies and farm incomes; 2. intensification of the use of natural resources; 3. expansion of highly productive land use technologies; 4. protection of land and livelihoods from extreme weather events. Furthermore, SLM must be up-scaled from isolated plots to entire zones or landscapes, and the project developed the concept of three concentric village zones: the in-, near- and off-village zones. Land users, advisors, and decision- and policy-makers face the task of finding management practices that best suit site-specific conditions. This task is most efficiently addressed in a collaborative effort, by building up and managing a corresponding knowledge platform.
Abstract:
The Business and Information Technologies (BIT) project strives to reveal new insights into how modern IT impacts organizational structures and business practices using empirical methods. Due to its international scope, it allows for inter-country comparison of empirical results. Germany — represented by the European School of Management and Technologies (ESMT) and the Institute of Information Systems at Humboldt-Universität zu Berlin — joined the BIT project in 2006. This report presents the results of the first survey conducted in Germany during November–December 2006. The key results are as follows:
• The most widely adopted technologies and systems in Germany are websites, wireless hardware and software, groupware/productivity tools, and enterprise resource planning (ERP) systems. The biggest potential for growth exists for collaboration and portal tools, content management systems, business process modelling, and business intelligence applications. A number of technological solutions have not yet been adopted by many organizations but also bear some potential, in particular identity management solutions, Radio Frequency Identification (RFID), biometrics, and third-party authentication and verification.
• IT security remains at the top of the agenda for most enterprises: budget spending has been increasing over the last 3 years.
• The workplace and work requirements are changing. IT is used to monitor employees' performance in Germany, but less heavily compared to the United States (Karmarkar and Mangal, 2007). The demand for IT skills is increasing at all corporate levels. Executives are asking for more and better structured information, and this, in turn, triggers the appearance of new decision-making tools and online technologies on the market.
• The internal organization of companies in Germany is changing: organizations are becoming flatter, even though the trend is not as pronounced as in the United States (Karmarkar and Mangal, 2007), and the geographical scope of their operations is increasing. Modern IT plays an important role in enabling this development; e.g. telecommuting, teleconferencing, and other web-based collaboration formats are becoming increasingly popular in the corporate context.
• The degree to which outsourcing is being pursued is quite limited, with little change expected. IT services, payroll, and market research are the most widely outsourced business functions. This corresponds to the results from other countries.
• Up to now, the adoption of e-business technologies has had a rather limited effect on marketing functions. Companies tend to extract synergies from traditional printed media and online advertising.
• The adoption of e-business has not yet had a major impact on marketing capabilities and strategy. Traditional methods of customer segmentation are still dominating. The corporate identity of most organizations does not change significantly when going online.
• Online sales channels are mainly viewed as a complement to traditional distribution means.
• Technology adoption has caused production and organizational costs to decrease. However, the costs of technology acquisition and maintenance, as well as consultancy and internal communication costs, have increased.
Abstract:
The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field-shaping devices with the patient must be repeated daily, up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), patients' portal images can be visualized daily in real time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for the lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation, and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure for generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. Both the moments method and the cross-correlation technique were implemented within an experimental radiotherapy picture archival and communication system (RT-PACS) and were used clinically to evaluate the setup variability of two groups of cancer patients treated with and without an alpha-cradle immobilization aid. The tools developed in this project have proven to be very effective and have played an important role in detecting patient alignment errors and field-shape errors in treatment fields formed by a multileaf collimator (MLC).
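The two image-analysis ideas described above can be sketched generically: a moments-based estimate of a field image's centroid and principal-axis orientation (for field alignment), and a cross-correlation peak search (for anatomy shifts). This is an illustrative NumPy/SciPy sketch, not the thesis implementation, and it omits the figure-of-merit and DRR comparison steps.

```python
# Sketch: (i) image moments -> centroid and orientation of a field image,
# (ii) cross-correlation peak -> translation of a portal image vs. a reference.
import numpy as np
from scipy.signal import fftconvolve

def centroid_and_angle(img):
    """First and second image moments: centroid (row, col) and orientation."""
    y, x = np.indices(img.shape)
    m = img.sum()
    cy, cx = (img * y).sum() / m, (img * x).sum() / m
    mu20 = (img * (x - cx) ** 2).sum()
    mu02 = (img * (y - cy) ** 2).sum()
    mu11 = (img * (x - cx) * (y - cy)).sum()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # radians
    return (cy, cx), angle

def shift_by_cross_correlation(reference, portal):
    """Estimate the (dy, dx) translation of `portal` relative to `reference`
    from the location of the cross-correlation peak (sign convention aside)."""
    corr = fftconvolve(reference - reference.mean(),
                       (portal - portal.mean())[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre
```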