934 results for Research Methodologies
Abstract:
The Economic Commission for Latin America and the Caribbean (ECLAC), Subregional Headquarters for the Caribbean, convened an expert group meeting on Social Exclusion, Poverty, Inequality – Crime and Violence: Towards a Research Agenda for Informed Public Policy for Caribbean SIDS on Friday 4 April 2008, at its conference room in Port of Spain. The meeting was attended by 14 experts drawn from the University of the West Indies (UWI), St. Augustine, Trinidad and Tobago, and Mona Campus, Jamaica; St. George's University, Grenada; the Trinidad and Tobago Crime Commission; the Ministry of Social Development, Government of Trinidad and Tobago; and a representative of civil society from Guyana. Experts from the United Nations system included representatives from the United Nations Development Fund for Women (UNIFEM), Barbados; the United Nations Development Programme (UNDP), Port of Spain, and UNDP Barbados/SRO; and the Organisation of Eastern Caribbean States (OECS). The list of participants appears as an annex to this report. The purpose of the meeting was to provide a forum in which differing theories and methodologies useful for addressing the issues of social exclusion, poverty, inequality, crime and violence could be explored. It was expected that by the end of the meeting there would be consensus on areas of research which could be pursued over a two- to four-year period by the ECLAC Subregional Headquarters for the Caribbean and its partners, leading to informed public policy in support of reducing the growing violence in Caribbean society.
Abstract:
The objective of this study was to evaluate the accuracy of the faecal egg count reduction test (FECRT) and the faecal egg count efficacy test (FECET) in assessing the resistance status of ivermectin (630 µg/kg) and moxidectin (200 µg/kg), using the controlled efficacy test as a reference, and to determine whether the eggs-per-gram (EPG) results are equivalent to the efficacy results from parasitological necropsies. Two experiments were conducted. The results show that the EPG values could not be demonstrated to be equivalent to the ivermectin and moxidectin efficacy obtained by parasitological necropsies, particularly when parasite resistance is not yet advanced in a given field population. The FECET technique may prove better than the FECRT. High anthelmintic efficacy of 200 µg/kg moxidectin, in naturally infected cattle, against field populations of nematodes resistant to 630 µg/kg ivermectin was also observed in this study. (C) 2013 Elsevier Ltd. All rights reserved.
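The faecal egg count reduction mentioned above is conventionally a percentage drop in mean eggs per gram (EPG) after treatment. The following is a minimal sketch of that calculation; the example counts and the 95% resistance threshold are illustrative assumptions, not figures from the study.

```python
# Sketch of the faecal egg count reduction (FECR) calculation.
# Example EPG values and the 95% threshold are assumptions for illustration.

def fecr_percent(mean_epg_pre, mean_epg_post):
    """Percentage reduction in mean eggs per gram (EPG) after treatment."""
    if mean_epg_pre <= 0:
        raise ValueError("pre-treatment mean EPG must be positive")
    return 100.0 * (1.0 - mean_epg_post / mean_epg_pre)

# Example: mean EPG falls from 500 to 40 after treatment.
reduction = fecr_percent(500, 40)
print(f"FECR = {reduction:.1f}%")          # FECR = 92.0%
# A commonly used (assumed) rule of thumb: reduction < 95% suggests resistance.
print("resistance suspected" if reduction < 95 else "susceptible")
```

The abstract's point is precisely that such EPG-based percentages need not agree with efficacy determined by necropsy, especially early in the development of resistance.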
Abstract:
Tactile cartography is an area of cartography that aims at developing methodologies and didactic materials for working on cartographic concepts with blind and low-vision people. The main aim of this article is to present the experience of the Tactile Cartography Research Group at Sao Paulo State University (UNESP), including didactic materials and courses for teachers using the MAPAVOX system. MAPAVOX is software developed by our research group in partnership with the Federal University of Rio de Janeiro (UFRJ) that integrates maps and models with a voice synthesizer, sound emission, and the display of texts, images and video on computers. Our research methodology follows authors who place the student at the centre of didactic activity, such as Ochaita and Espinosa [1], who developed studies on blind children's literacy. According to Almeida, the child's drawing is thus a system of representation: not a copy of objects, but an interpretation of reality carried out by the child in graphic language [2]. In the proposed activities, blind and low-vision students are prepared to interpret reality and represent it by adopting the concepts of graphic language they have learned. To begin cartographic initiation it is necessary to use personal, everyday references, such as a tactile model or map of the classroom, in order to introduce concepts of generalization and scale related to the students' lived space. Over the years, many case studies were developed with blind and low-vision students from the Special School for the Hearing Impaired and Visually Impaired in Araras and Rio Claro, Sao Paulo, Brazil. Most of these experiences, and others from Brazil and Chile, are presented in [3]. Tactile materials and MAPAVOX features are analysed by students and teachers, who contribute suggestions for reformulating and adapting them to their sensibilities and needs.
Since 2005 we have offered courses in tactile cartography to prepare elementary school teachers to handle the didactic materials and to attend to students with special educational needs in regular classrooms. Six classroom and blended courses have been offered to 184 teachers from public schools in this region of Sao Paulo state. In conclusion, we observe that methodological procedures centred on blind and low-vision students succeed in supporting their spatial orientation when the didactic materials draw on places or objects with which the students have significant experience. While delivering the teacher courses we also saw that interdisciplinary groups find creative cartographic alternatives more easily. We further observed that the best methodological results were those that gave concreteness to abstract concepts by drawing on daily experiences.
Abstract:
Visual correspondence is a key computer vision task that aims at identifying projections of the same 3D point in images taken from different viewpoints or at different time instants. The task has been the subject of intense research in recent years in scenarios such as object recognition, motion detection, stereo vision, pattern matching and image registration. The approaches proposed in the literature typically aim to improve the state of the art by increasing the reliability, accuracy or computational efficiency of visual correspondence algorithms. The research carried out during the Ph.D. course and presented in this dissertation deals with three specific visual correspondence problems: fast pattern matching, stereo correspondence and robust image matching. The dissertation presents original contributions to the theory of visual correspondence, as well as applications to 3D reconstruction and multi-view video surveillance.
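As a concrete illustration of the pattern-matching problem named above, the sketch below does brute-force template matching by normalised cross-correlation (NCC), a classic correspondence primitive. This is a generic textbook baseline, not the dissertation's fast algorithm; the toy image and template are invented.

```python
# Brute-force template matching by normalised cross-correlation (NCC).
# A textbook baseline for visual correspondence, O(image * template) cost.
import numpy as np

def ncc_match(image, template):
    """Return (row, col) of the best NCC match of template in image."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue                      # flat window: NCC undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy image with the template embedded at row 4, column 5.
img = np.zeros((10, 10))
img[4:7, 5:8] = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
tmpl = np.array([[1.0, 2, 3], [4, 5, 6], [7, 8, 9]])
print(ncc_match(img, tmpl))                   # (4, 5)
```

Fast pattern-matching research of the kind the dissertation describes typically targets exactly this computation, pruning or approximating the exhaustive window scan.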
Abstract:
The ongoing innovation in the microwave transistor technologies used to implement microwave circuits must be supported by the study and development of suitable design methodologies which, depending on the application, fully exploit the technology's potential. Once the technology for a particular application has been chosen, the circuit designer has few degrees of freedom in carrying out the design; in most cases, owing to technological constraints, foundries develop and provide customized processes optimized for a specific performance target such as power, low noise, linearity or bandwidth. For these reasons circuit design is always a "compromise": a search for the best trade-off among the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications, where tight space constraints impose reaching the best performance under electrical and thermal conditions de-rated with respect to the maximum ratings of the chosen technology, in order to ensure adequate reliability. In particular, this work concerns one of the most critical components in the front end of a satellite antenna: the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element that weighs most heavily on the space, weight and cost of telecommunication equipment; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications present different methods for the design of power amplifiers, showing that very good levels of output power, efficiency and gain can be obtained.
Starting from existing knowledge, the goal of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as the power and the efficiency. After a review of existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on accurate control of the dynamic load line and its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Taking the trade-off between the main performances and reliability as the target of the design activity, we will demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has an accurate electrical model of the device available; the variety of publications on this subject shows how difficult it is to build a CAD model able to take into account all the non-ideal phenomena that occur when the amplifier operates at such high frequencies and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design, based on experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the opportunity to carry out my Ph.D.
in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes commissioned by space agencies, with the aim of supporting technology transfer from universities to industry and promoting science-based entrepreneurship. For these reasons the proposed design methodology will be explained on the basis of many experimental results.
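As background to the load-line shaping discussed above, the sketch below computes the classic first-order load-line (Cripps-style) estimate of the optimum fundamental load and maximum output power for an idealized device. This is a textbook starting point, not the dissertation's methodology, and the supply, knee voltage and maximum current values are illustrative assumptions.

```python
# First-order load-line estimate for an ideal class-A/B device.
# Bias, knee and current figures below are assumed, for illustration only.

def loadline_estimate(v_supply, v_knee, i_max):
    """Return (R_opt in ohms, P_max in watts) from the ideal load line."""
    v_swing = v_supply - v_knee          # usable fundamental voltage swing
    r_opt = 2.0 * v_swing / i_max        # optimum intrinsic load resistance
    p_max = v_swing * i_max / 4.0        # maximum fundamental output power
    return r_opt, p_max

# Example: a device with 28 V supply, 4 V knee, 2 A maximum drain current.
r_opt, p_max = loadline_estimate(28.0, 4.0, 2.0)
print(f"R_opt = {r_opt:.0f} ohm, P_max = {p_max:.0f} W")   # R_opt = 24 ohm, P_max = 12 W
```

In practice this intrinsic estimate is only the first step; the dissertation's point is that the load line must then be shaped and de-rated against thermal and reliability constraints.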
Abstract:
Monitoring foetal health is a very important task in clinical practice for appropriately planning pregnancy management and delivery. In the third trimester of pregnancy, ultrasound cardiotocography is the most widely employed diagnostic technique: foetal heart rate and uterine contraction signals are simultaneously recorded and analysed to ascertain foetal health. Because the interpretation of ultrasound cardiotocography still lacks complete reliability, new parameters and methods of interpretation, or alternative methodologies, are needed to further support physicians' decisions. To this aim, this thesis considers foetal phonocardiography and electrocardiography as alternative techniques. Furthermore, the variability of the foetal heart rate is studied in depth. Frequency components and their modifications can be analysed with a time-frequency approach, giving a distinct picture of the spectral components and how they change over time in relation to foetal reactions to internal and external stimuli (such as uterine contractions). Such modifications of the power spectrum can be a sign of autonomic nervous system reactions and therefore represent additional, objective information about foetal reactivity and health. However, some limits of ultrasonic cardiotocography remain, for example in long-term foetal surveillance, which is often advisable mainly in high-risk pregnancies. In these cases the fully non-invasive acoustic recording through the maternal abdomen, foetal phonocardiography, represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the foetal heart sound signal recorded in this way is heavily corrupted by noise, so that determining the foetal heart rate raises serious signal processing issues. A new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings is presented in this thesis.
Different filtering and enhancement techniques for enhancing the first foetal heart sounds were applied; several signal processing strategies were thus implemented, evaluated and compared, identifying the one with the best results on average. In particular, phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by the developed algorithm and the one provided by the cardiotocographic device). The algorithm's performance was tested on phonocardiographic signals recorded from pregnant women, yielding reliable foetal heart rate signals very close to the ultrasound cardiotocographic recordings taken as reference. The algorithm was also tested using a foetal phonocardiographic recording simulator developed and presented in this thesis. The aim was to provide software for simulating recordings corresponding to different foetal conditions and recording situations, and to use it as a test tool for comparing and assessing different foetal heart rate extraction algorithms. Since there are few studies on the time characteristics and frequency content of foetal heart sounds, and the available literature in this area is sparse and not rigorous, a pilot data collection study was also conducted with the purpose of specifically characterising both foetal and maternal heart sounds. Finally, this thesis presents the use of foetal phonocardiographic and electrocardiographic methodologies, and their combination, for detecting foetal heart rate and other functional anomalies. The developed methodologies, suitable for longer-term assessment, were able to correctly detect heart beat events such as the first and second heart sounds and QRS waves. The detection of these events provides reliable measures of foetal heart rate and, potentially, information about the systolic time intervals and foetal circulatory impedance.
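To make the heart-rate-from-noisy-sound problem concrete, the sketch below estimates the beat rate of a noisy, roughly periodic signal from its dominant autocorrelation lag. It is a toy stand-in, not the thesis algorithm; the sampling rate, simulated beat rate and the plausible-rate search band are all assumptions.

```python
# Toy autocorrelation-based heart-rate estimator on a simulated noisy
# "heart sound" pulse train. All parameters are illustrative assumptions.
import numpy as np

def estimate_rate_bpm(signal, fs, min_bpm=90, max_bpm=200):
    """Estimate beats per minute from the dominant autocorrelation lag."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo = int(fs * 60.0 / max_bpm)          # shortest plausible beat period
    hi = int(fs * 60.0 / min_bpm)          # longest plausible beat period
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return 60.0 * fs / lag

fs = 1000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
true_bpm = 140.0                            # simulated foetal heart rate
pulses = (np.sin(2 * np.pi * (true_bpm / 60.0) * t) > 0.999).astype(float)
noisy = pulses + 0.2 * np.random.default_rng(0).standard_normal(len(t))
print(f"estimated rate: {estimate_rate_bpm(noisy, fs):.0f} bpm")
```

Real phonocardiographic estimation, as the thesis describes, adds the filtering and enhancement stages that make such a periodicity estimate robust against much heavier noise.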
Abstract:
It has been demonstrated that iodine has an important influence on atmospheric chemistry, especially on the formation of new particles and the enrichment of iodine in marine aerosols. It has been pointed out that the most probable chemical species involved in the production or growth of these particles are iodine oxides, produced photochemically from biogenic halocarbon emissions and/or iodine emission from the sea surface. However, the iodine chemistry from the gaseous to the particulate phase in the coastal atmosphere, and the chemical nature of the condensing iodine species, are still not understood. A Tenax/Carbotrap adsorption sampling technique and a thermo-desorption/cryo-trap/GC-MS system have been further developed and improved for measuring volatile organic iodine species in the gas phase. Several iodo-hydrocarbons, such as CH3I, C2H5I, CH2ICl, CH2IBr and CH2I2, have been measured in samples from a calibration test gas source (standards), in real air samples and in samples from seaweed/macro-algae emission experiments. A denuder sampling technique has been developed to characterise potential precursor compounds of coastal particle formation processes, such as molecular iodine in the gas phase. Starch-, TMAH- (TetraMethylAmmonium Hydroxide) and TBAH- (TetraButylAmmonium Hydroxide) coated denuders were tested for their efficiency in collecting I2 at the inner surface, followed by TMAH extraction and ICP/MS determination, with tellurium added as an internal standard. The developed method has proved to be an effective, accurate and suitable process for I2 measurement in the field, with an estimated detection limit of ~0.10 ng∙L-1 for a sampling volume of 15 L. An H2O/TMAH-Extraction-ICP/MS method has been developed for the accurate and sensitive determination of iodine species in tropospheric aerosol particles.
The particle samples were collected on cellulose-nitrate filters using conventional filter holders, or on cellulose-nitrate/Tedlar foils using a 5-stage Berner impactor for size-segregated particle analysis. The water-soluble species IO3- and I- were separated by anion exchange after water extraction. Non-water-soluble species, including iodine oxides and organic iodine, were digested and extracted with TMAH. The triplicate samples were then analysed by ICP/MS. The detection limit for particulate iodine was determined to be 0.10~0.20 ng∙m-3 for sampling volumes of 40~100 m3. The developed methods were used in two field measurement campaigns, in May 2002 and September 2003, at and around the Mace Head Atmospheric Research Station (MHARS) on the west coast of Ireland. Elemental iodine, as a precursor of iodine chemistry in the coastal atmosphere, was determined in the gas phase at a seaweed hot spot around the MHARS; I2 concentrations were in the range of 0~1.6 ng∙L-1 and showed a positive correlation with the ozone concentration. A seaweed-chamber experiment performed at the field measurement station showed that the I2 emission rate from macro-algae was in the range of 0.019~0.022 ng∙min-1∙kg-1. During these experiments, nanometre-particle concentrations were obtained from Scanning Mobility Particle Sizer (SMPS) measurements. Particle number concentrations were found to correlate linearly with elemental iodine in the gas phase of the seaweed chamber, showing that gaseous I2 is one of the important precursors of new particle formation in the coastal atmosphere. Iodine content in the particle phase was measured in both field campaigns at and around the field measurement station. Total iodine concentrations were found to be in the range of 1.0~21.0 ng∙m-3 in the PM2.5 samples. A significant correlation between the total iodine concentrations and the nanometre-particle number concentrations was observed.
The particulate iodine speciation indicated that iodide contents are usually higher than those of iodate in all samples, with ratios in the range of 2~5:1. It is possible that these water-soluble iodine species are transferred through the sea-air interface into the particle phase. The ratio of water-soluble (iodate + iodide) to non-water-soluble species (probably iodine oxides and organic iodine compounds) was observed to be in the range of 1:1 to 1:2. It appears that higher concentrations of non-water-soluble species, as products of photolysis transferred from the gas phase into the particle phase, are obtained in samples collected while nucleation events occur. This supports the idea that iodine chemistry in the coastal boundary layer is linked with new particle formation events. Furthermore, artificial aerosol particles were formed from gaseous iodine sources (e.g. CH2I2) in a laboratory reaction-chamber experiment, in which the rate constant of CH2I2 photolysis was calculated based on first-order reaction kinetics. The end products of iodine chemistry in the particle phase were identified and quantified.
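Extracting a first-order rate constant of the kind mentioned for the CH2I2 photolysis comes down to fitting ln C against time, since C(t) = C0·exp(−kt). The sketch below does this with a least-squares slope; the times and concentrations are synthetic illustration data, not measurements from the study.

```python
# Estimating a first-order rate constant from a concentration decay.
# Synthetic data following ~100*exp(-0.1*t); not values from the study.
import math

times = [0.0, 5.0, 10.0, 15.0, 20.0]            # minutes (assumed)
conc = [100.0, 60.65, 36.79, 22.31, 13.53]      # concentration (arb. units)

# For C(t) = C0*exp(-k*t), ln C is linear in t with slope -k.
n = len(times)
y = [math.log(c) for c in conc]
t_mean = sum(times) / n
y_mean = sum(y) / n
slope = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y)) \
        / sum((t - t_mean) ** 2 for t in times)
k = -slope
print(f"k = {k:.3f} min^-1")                    # k = 0.100 min^-1 here
```

The regression form is preferred over a two-point estimate because every chamber measurement contributes to the fitted constant.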
Abstract:
This study focuses on the use of metabonomic methods for measuring fish freshness in various biological species and for evaluating how the fish are stored. The metabonomic approach is innovative and is based on molecular profiling through nuclear magnetic resonance (NMR). The aim is, on the one hand, to ascertain whether a type of fish has maintained, within certain limits, its sensory and nutritional characteristics after being caught and, on the other, to observe the alterations in the product's composition. The spectroscopic data obtained through experimental 1H-NMR molecular profiling of the fish extracts are compared with those obtained on the same samples through the conventional analytical methods currently in practice. The latter are used to obtain chemical indices of freshness through the biochemical and microbial degradation of nitrogen compounds, both protein and non-protein (trimethylamine, N-(CH3)3, nucleotides, amino acids, etc.). Subsequently, a principal component analysis (PCA) and a partial least squares discriminant analysis (PLS-DA) are performed within the metabonomic approach to condense the temporal evolution of freshness into a single parameter. In particular, the first principal component (PC1) under both storage conditions (4 °C and 0 °C) captures, with very high explained variance, the evolution of the samples' molecular composition (as seen in the 1H-NMR spectrum) during storage. The results of this study provide scientific evidence supporting objective criteria for evaluating the freshness of fish products, showing which can be labeled "fresh fish."
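The PCA step described above, condensing many spectral variables into one dominant component per storage trajectory, can be sketched as follows. The 4-sample, 3-variable matrix is an invented toy with a perfectly linear storage trend, not spectral data from the study.

```python
# Minimal PCA via SVD on a mean-centred data matrix, of the kind used
# to condense 1H-NMR profiles. The toy matrix below is invented.
import numpy as np

X = np.array([[2.0, 0.0, 1.0],    # rows: samples at successive storage times
              [4.0, 1.0, 3.0],    # cols: intensities of three signals
              [6.0, 2.0, 5.0],
              [8.0, 3.0, 7.0]])

Xc = X - X.mean(axis=0)           # mean-centre each variable
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                # sample coordinates on the principal components
explained = S ** 2 / (S ** 2).sum()

print(f"variance explained by PC1: {explained[0]:.2%}")   # 100.00% for this toy
```

Because the toy samples change perfectly linearly with storage time, PC1 absorbs all the variance; real spectra give a high but sub-100% PC1 share, which is exactly the "very high variance" behaviour the abstract reports.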
Abstract:
MFA and LCA methodologies were applied to analyse the anthropogenic aluminium cycle in Italy, focusing on the historical evolution of stocks and flows of the metal, embodied GHG emissions and recycling potentials, in order to provide Italy with key elements for prioritizing industrial policy toward low-carbon technologies and materials. Historical time series were collected from 1947 to 2009 and balanced with data from the production, manufacturing and waste management of aluminium-containing products, using a 'top-down' approach to quantify the contemporary in-use stock of the metal and helping to identify 'applications where aluminium is not yet being recycled to its full potential and to identify present and future recycling flows'. The MFA results served as the basis for an LCA aimed at evaluating the evolution of the carbon footprint embodied in Italian aluminium, covering primary and electrical energy, the smelting process and transportation. The study also discusses how the main factors of the Kaya Identity influenced the Italian GHG emissions pattern over time, and which levers could mitigate it. The contemporary anthropogenic reservoir of aluminium was estimated at about 320 kg per capita, mainly embedded in the transportation and building and construction sectors. The cumulative in-use stock represents approximately 11 years of supply at current usage rates (about 20 Mt versus 1.7 Mt/year) and would imply a potential saving of about 160 Mt of CO2eq emissions. A discussion of criticalities in aluminium waste recovery from the transportation and the containers-and-packaging sectors is also included, providing an example of how MFA and LCA may support decision-making at the sectoral or regional level. The research constitutes the first attempt at an integrated MFA-LCA approach applied to the aluminium cycle in Italy.
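The 'top-down' stock accounting mentioned above accumulates in-use stock year by year as inflows to use minus end-of-life outflows. The sketch below illustrates the bookkeeping only; all figures, including the starting stock, are invented and are not the Italian data.

```python
# Toy 'top-down' MFA stock accounting: stock(t) = stock(t-1) + inflow - outflow.
# All flow and stock figures below are invented, for illustration only.

inflow = {2005: 1.5, 2006: 1.6, 2007: 1.7, 2008: 1.4, 2009: 1.3}   # Mt/year
outflow = {2005: 0.5, 2006: 0.6, 2007: 0.6, 2008: 0.7, 2009: 0.7}  # Mt/year

stock = 15.0                        # assumed in-use stock at end of 2004, Mt
for year in sorted(inflow):
    stock += inflow[year] - outflow[year]
    print(year, round(stock, 1))    # final 2009 line: 19.4 (Mt)
```

The same balance, run over the full 1947–2009 series with real production and waste data, is what yields the per-capita stock and years-of-supply figures reported in the abstract.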
Abstract:
This paper summarises the discussions which took place at the Workshop on Methodology in Erosion Research in Zürich, 2010, and aims, where possible, to offer guidance for the development and application of both in vitro and in situ models for erosion research. The prospects for clinical trials are also discussed. All models in erosion research require a number of choices regarding experimental conditions, study design and measurement techniques, and these general aspects are discussed first. Among in vitro models, simple (single- or multiple-exposure) models can be used for screening products regarding their erosive potential, while more elaborate pH cycling models can be used to simulate erosion in vivo. However, in vitro models provide limited information on intra-oral erosion. In situ models allow the effect of an erosive challenge to be evaluated under intra-oral conditions and are currently the method of choice for short-term testing of low-erosive products or preventive therapeutic products. In the future, clinical trials will allow longer-term testing. Possible methodologies for such trials are discussed.
Abstract:
While scientific research and its methodologies have undergone substantial technological evolution, the technology involved in publishing the results of these endeavours has remained relatively stagnant. Publication is largely done in the same manner today as it was fifty years ago. Many journals have adopted electronic formats; however, their orientation and style differ little from a printed document. The documents tend to be static and take little advantage of the computational resources that might be available. Recent work, Gentleman and Temple Lang (2004), suggests a methodology and basic infrastructure that can be used to publish documents in a substantially different way. Their approach is suitable for the publication of papers whose message relies on computation. Stated simply, Gentleman and Temple Lang propose a paradigm where documents are mixtures of code and text. Such documents may be self-contained, or they may be a component of a compendium which provides the infrastructure needed to give access to data and supporting software. These documents, or compendia, can be processed in a number of different ways. One transformation is to replace the code with its output, thereby providing the familiar, but limited, static document. In this paper we apply these concepts to a seminal paper in bioinformatics, namely The Molecular Classification of Cancer, Golub et al. (1999). The authors of that paper have generously provided data and other information that have allowed us to largely reproduce their results. Rather than reproduce the paper exactly, we demonstrate that such a reproduction is possible, and concentrate instead on demonstrating the usefulness of the compendium concept itself.
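The "replace the code with its output" transformation described above can be sketched in a few lines: code chunks embedded in text are executed and substituted by what they print, yielding the static view. This is an illustration of the concept only, not the Gentleman and Temple Lang infrastructure; the `<<code>>` delimiters are an invented chunk syntax.

```python
# Toy dynamic-document renderer: execute embedded code chunks and
# replace each with its printed output. Chunk delimiters are invented.
import contextlib
import io
import re

def render(document):
    """Replace each <<code>>...<</code>> chunk with what it prints."""
    def run(match):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(match.group(1), {})      # run the chunk in a fresh namespace
        return buf.getvalue().rstrip()
    return re.sub(r"<<code>>(.*?)<</code>>", run, document, flags=re.S)

doc = "The mean expression value is <<code>>print(sum([2, 4, 9]) / 3)<</code>>."
print(render(doc))                        # The mean expression value is 5.0.
```

A compendium generalises exactly this: the same source can be rendered to the static document, re-run against updated data, or inspected with the code intact.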
Abstract:
The integration of academic and non-academic knowledge is a key concern for researchers who aim at bridging the gap between research and policy. Researchers involved in the sustainability-oriented NCCR North-South programme have found that linking different types of knowledge requires time and effort, and that methodologies for doing so are still lacking. One programme component was created at the inception of this transdisciplinary research programme to support exchange between researchers, development practitioners and policymakers. After 8 years of research, the programme is assessing whether its research has indeed enabled continuous communication across and beyond academic boundaries and has effected changes in the public policies of poor countries. In a first review of the data, we selected two case studies explicitly addressing the lives of women. In both cases – one in Pakistan, the other in Nepal – the dialogue between researchers and development practitioners contributed to important policy changes concerning female migration. In both countries, outmigration has become an increasingly important livelihood strategy. National migration policies are gendered, limiting the international migration of women. In Nepal, women were not allowed to migrate to certain countries, such as the Gulf States or Malaysia. This was done in the name of positive discrimination, to protect women from potential exploitation and harassment in domestic work. However, women continued to migrate in many other, often illegal and riskier ways, increasing their vulnerability. In Pakistan, female labour migration was not allowed at all, and male migration increased the vulnerability of the families remaining at home. Researchers and development practitioners in Nepal and Pakistan brought women's shared experience of and exposure to the mechanisms of male domination into the public debate, and addressed the discriminatory laws.
Now, for the first time in Pakistan, a new draft policy currently under discussion would enable broadly based female labour migration. What can we learn from the two case studies about ways of relating experience-based and research-based knowledge? The paper offers insights into the sequence of interactions between researchers, local people, development practitioners and policymakers that eventually contributed to the formulation of a rights-based migration policy. The reflection aims at exploring the gendered dimension of ways to co-produce and share knowledge for development across boundaries. Above all, it should help researchers to strengthen the links between the spheres of research and policy in future.
Abstract:
The title ‘Frontiers of Social Research’ implies a pioneering spirit, embarking upon uncharted territory. However, the most fascinating and insightful moments of this book are those which explore age-old Japanese research techniques and the potential for new methodologies to look to the old. The key theme of the work is the role of the researcher and the researcher’s relationships with research participants, the research audience and knowledge itself.
Abstract:
This paper presents an overview of the Mobile Data Challenge (MDC), a large-scale research initiative aimed at generating innovations around smartphone-based research, as well as community-based evaluation of mobile data analysis methodologies. First, we review the Lausanne Data Collection Campaign (LDCC), an initiative to collect a unique longitudinal smartphone dataset for the MDC. Then, we introduce the Open and Dedicated Tracks of the MDC, describe the specific datasets used in each of them, discuss the key design and implementation aspects introduced in order to generate privacy-preserving and scientifically relevant mobile data resources for wider use by the research community, and summarize the main research trends found among the 100+ challenge submissions. We conclude by discussing the main lessons learned from the participation of several hundred researchers worldwide in the MDC Tracks.
Abstract:
Recent findings on childhood leukaemia incidence near nuclear installations have raised questions which can be answered neither by current knowledge on radiation risk nor by other established risk factors. In 2012, a workshop was organised on this topic with two objectives: (a) to review results and discuss the methodological limitations of studies near nuclear installations; and (b) to identify directions for future research into the causes and pathogenesis of childhood leukaemia. The workshop gathered 42 participants from different disciplines, extending well beyond the radiation protection field. Regarding proximity to nuclear installations, the need for continuous surveillance of childhood leukaemia incidence was highlighted, including better characterisation of the local populations. The creation of collaborative working groups was recommended to ensure consistency in methodologies and the possibility of combining data for future analyses. Regarding the causes of childhood leukaemia, the major fields of research were discussed (environmental risk factors, genetics, infections, immunity, stem cells, experimental research). The need for multidisciplinary collaboration in developing research activities was underlined, including studying the prevalence of potential predisposition markers and further investigating the infectious aetiology hypothesis. Animal studies and genetic/epigenetic approaches appear to be of great interest. Routes for future research were identified.