943 results for Data envelopment analysis (DEA).
Abstract:
The first phase of the research activity concerned the state of the art of cycling infrastructure, bicycle use, and evaluation methods. In this part, the candidate studied the "bicycle system" in countries with high bicycle use, in particular the Netherlands. An evaluation was carried out of the questionnaires from the survey conducted within the European project BICY on mobility in general in 13 cities of the participating countries. The questionnaire was designed, tested, and implemented, and was later validated by a test in Bologna. The results were corrected with information on the demographic situation and compared with official data. The cycling infrastructure analysis was conducted on the basis of information from the OpenStreetMap database. The activity consisted of programming algorithms in Python that extract the infrastructure data for a region from the database, sort and filter the cycling infrastructure, and calculate attributes such as the length of the path arcs. The results obtained were compared with official data where available. The structure of the thesis is as follows: 1. Introduction: description of the state of cycling in several advanced countries, description of methods of analysis and their importance for implementing appropriate policies for cycling, and the supply and demand of bicycle infrastructure. 2. Survey on mobility: details of the survey developed and the method of evaluation; the results obtained are presented and compared with official data. 3. Analysis of cycling infrastructure based on information from the OpenStreetMap database: describes the methods and algorithms developed during the PhD; the results obtained by the algorithms are compared with official data. 4. Discussion: the above results are discussed and compared; in particular, cycling demand is compared with the length of the cycle network within a city. 5. Conclusions.
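A minimal sketch of the kind of extraction described above, using the osmnx library (an assumption; the thesis's actual scripts are not shown) to pull a city's cycling network from OpenStreetMap and total the lengths of its edges:

```python
# Sketch: extract cycling infrastructure for one city from OpenStreetMap
# and sum edge lengths, in the spirit of the algorithms described above.
# Assumes the osmnx package; example city chosen for illustration only.
import osmnx as ox

# Download the bicycle network for a region.
G = ox.graph_from_place("Bologna, Italy", network_type="bike")

# Convert edges to a GeoDataFrame; osmnx attaches a 'length' attribute (metres).
edges = ox.graph_to_gdfs(G, nodes=False)

# Filter to dedicated cycleways and report total length in kilometres.
cycleways = edges[edges["highway"].astype(str).str.contains("cycleway")]
print(f"Dedicated cycleway length: {cycleways['length'].sum() / 1000:.1f} km")
```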
Abstract:
In this work we discuss a project started by the Emilia-Romagna Regional Government concerning the management of public transport. In particular, we perform a data mining analysis on the data set of this project. After introducing the Weka software used for our analysis, we survey the most useful data mining techniques and algorithms, and we show how their results can be used to violate the privacy of the public transport operators themselves. At the end, although it is beyond the scope of this work, we also say a few words about how this kind of attack can be prevented.
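The analysis itself is carried out in Weka; purely as a rough illustration of the privacy risk described, the following Python/scikit-learn sketch (synthetic data, hypothetical features, not the project's workflow) shows how clustering smart-card trips can isolate one card holder's routine:

```python
# Illustrative sketch only: clustering synthetic smart-card trips to show
# how a rigid travel routine can stand out as a tight, low-variance cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic trips: (boarding hour, stop id). One commuter has a rigid routine.
crowd = np.column_stack([rng.uniform(6, 23, 200), rng.integers(1, 50, 200)])
commuter = np.column_stack([rng.normal(8.1, 0.1, 20), np.full(20, 7.0)])
trips = np.vstack([crowd, commuter])

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(trips)
# Inspect each cluster's size and spread of boarding hours; a very tight
# cluster corresponds to one person's repeated routine.
for k in range(8):
    c = trips[labels == k]
    print(f"cluster {k}: {len(c)} trips, hour std {c[:, 0].std():.2f}")
```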
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
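A conceptual sketch of the labeling step (not the study's pipeline, which used group spatial ICA on real fMRI volumes): decompose a toy time-by-voxel matrix with FastICA and label each component by how its time course correlates with A and V stimulus regressors.

```python
# Toy illustration of IC labeling: recover components from synthetic
# "fMRI" data and correlate their time courses with A and V regressors.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_time, n_voxels = 120, 500
a_reg = (np.arange(n_time) % 20 < 10).astype(float)   # toy A block design
v_reg = (np.arange(n_time) % 30 < 15).astype(float)   # toy V block design
maps = rng.normal(size=(3, n_voxels))                 # spatial maps: A, V, AV
data = (np.outer(a_reg, maps[0]) + np.outer(v_reg, maps[1])
        + np.outer(a_reg * v_reg, maps[2])
        + 0.5 * rng.normal(size=(n_time, n_voxels)))

ica = FastICA(n_components=3, random_state=0)
courses = ica.fit_transform(data)          # one time course per component
for i in range(3):
    ra = np.corrcoef(courses[:, i], a_reg)[0, 1]
    rv = np.corrcoef(courses[:, i], v_reg)[0, 1]
    print(f"IC{i}: r(A)={ra:+.2f}  r(V)={rv:+.2f}")  # label as A, V, or AV
```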
Abstract:
Multiple outcomes data are commonly used to characterize treatment effects in medical research, for instance, multiple symptoms to characterize potential remission of a psychiatric disorder. Often a global, i.e. symptom-invariant, treatment effect is evaluated; such a treatment effect may overgeneralize the effect across the outcomes. On the other hand, individual treatment effects, varying across all outcomes, are complicated to interpret, and their estimation may lose precision relative to a global summary. An effective compromise for summarizing the treatment effect may be through patterns of the treatment effects, i.e. "differentiated effects." In this paper we propose a two-category model to differentiate treatment effects into two groups. A model-fitting algorithm and simulation study are presented, and several methods are developed to analyze the heterogeneity present in the treatment effects. The method is illustrated with an analysis of schizophrenia symptom data.
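As a loose, simplified stand-in for the proposed two-category model (the authors' fitting algorithm is not reproduced here), one can estimate a per-outcome treatment effect and then partition the outcomes into two effect groups:

```python
# Simplified illustration of "differentiated effects": per-outcome effect
# estimates partitioned into two groups. All data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_per_arm, n_outcomes = 100, 8
true = np.array([0.2] * 5 + [0.8] * 3)            # two latent effect levels
treat = rng.normal(true, 1.0, size=(n_per_arm, n_outcomes))
ctrl = rng.normal(0.0, 1.0, size=(n_per_arm, n_outcomes))

effects = treat.mean(axis=0) - ctrl.mean(axis=0)  # per-outcome estimates
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    effects.reshape(-1, 1))
print("estimated effects:", effects.round(2))
print("two-category grouping:", groups)
```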
Abstract:
Vietnam has developed rapidly over the past 15 years. However, progress was not uniformly distributed across the country. Availability, adequate visualization, and analysis of spatially explicit data on socio-economic and environmental aspects can support both research and policy towards sustainable development. Applying appropriate mapping techniques allows gleaning important information from tabular socio-economic data. Spatial analysis of socio-economic phenomena can yield insights into locally-specific patterns and processes that cannot be generated by non-spatial applications. This paper presents techniques and applications that develop and analyze spatially highly disaggregated socio-economic datasets. A number of examples show how such information can support informed decision-making and research in Vietnam.
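As a hypothetical illustration of the mapping techniques described (the file name and attribute column below are placeholders, not the paper's data), a district-level choropleth can be produced with geopandas:

```python
# Illustrative choropleth of a disaggregated socio-economic indicator.
# "vietnam_districts.shp" and "poverty_rate" are hypothetical placeholders.
import geopandas as gpd
import matplotlib.pyplot as plt

districts = gpd.read_file("vietnam_districts.shp")        # hypothetical file
ax = districts.plot(column="poverty_rate", cmap="OrRd",   # hypothetical column
                    legend=True, figsize=(6, 8))
ax.set_title("District-level poverty rate (illustrative)")
plt.savefig("choropleth.png", dpi=150)
```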
Abstract:
BACKGROUND Preterm birth, low birth weight, and infant catch-up growth seem associated with an increased risk of respiratory diseases in later life, but individual studies have shown conflicting results. OBJECTIVES We performed an individual participant data meta-analysis of 147,252 children from 31 birth cohort studies to determine the associations of birth and infant growth characteristics with the risks of preschool wheezing (1-4 years) and school-age asthma (5-10 years). METHODS First, we performed an adjusted 1-stage random-effect meta-analysis to assess the combined associations of gestational age, birth weight, and infant weight gain with childhood asthma. Second, we performed an adjusted 2-stage random-effect meta-analysis to assess the associations of preterm birth (gestational age <37 weeks) and low birth weight (<2500 g) with childhood asthma outcomes. RESULTS Younger gestational age at birth and higher infant weight gain were independently associated with higher risks of preschool wheezing and school-age asthma (P < .05). The inverse associations of birth weight with childhood asthma were explained by gestational age at birth. Compared with term-born children with normal infant weight gain, we observed the highest risks of school-age asthma in children born preterm with high infant weight gain (odds ratio [OR], 4.47; 95% CI, 2.58-7.76). Preterm birth was positively associated with an increased risk of preschool wheezing (pooled odds ratio [pOR], 1.34; 95% CI, 1.25-1.43) and school-age asthma (pOR, 1.40; 95% CI, 1.18-1.67) independent of birth weight. Weaker effect estimates were observed for the associations of low birth weight adjusted for gestational age at birth with preschool wheezing (pOR, 1.10; 95% CI, 1.00-1.21) and school-age asthma (pOR, 1.13; 95% CI, 1.01-1.27). CONCLUSION Younger gestational age at birth and higher infant weight gain were associated with childhood asthma outcomes. The associations of lower birth weight with childhood asthma were largely explained by gestational age at birth.
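For readers unfamiliar with the 2-stage approach, here is a minimal sketch of random-effects pooling with the standard DerSimonian-Laird estimator, applied to made-up study-level log odds ratios (not the cohort data above):

```python
# Sketch of 2-stage random-effects pooling (DerSimonian-Laird estimator).
# Study estimates and variances below are hypothetical.
import numpy as np

log_or = np.array([0.31, 0.22, 0.41, 0.18, 0.35])    # study log odds ratios
var = np.array([0.010, 0.020, 0.015, 0.008, 0.025])  # their variances

w = 1 / var                                # fixed-effect (inverse-variance) weights
pooled_fe = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - pooled_fe) ** 2)  # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (q - (len(log_or) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

w_re = 1 / (var + tau2)                    # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```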
Abstract:
To date, most documentation of forensically relevant medical findings has been limited to traditional 2D photography, 2D conventional radiographs, sketches, and verbal description. Classic documentation in forensic science still has limitations, especially when 3D documentation is necessary. The goal of this paper is to demonstrate new geometric technology approaches based on real 3D data. The paper presents approaches to the 3D geometric documentation of injuries on the body surface and of internal injuries, in both living and deceased cases. Using modern imaging methods such as photogrammetry, optical surface scanning, and radiological CT/MRI scanning in combination, it could be demonstrated that a real, fully 3D-data-based individual documentation of the body surface and internal structures is possible in a non-invasive and non-destructive manner. Using the data merging/fusing and animation possibilities, it is possible to answer reconstructive questions about the dynamic development of patterned injuries (morphologic imprints) and to evaluate whether they can be matched or linked to suspected injury-causing instruments. For the first time, to our knowledge, optical and radiological 3D scanning were used to document forensically relevant injuries of the human body in combination with vehicle damage. This complementary documentation approach enabled individual, real-data-based forensic analysis and animation linking body injuries to vehicle deformations or damage. These data allow conclusions to be drawn for automobile accident research, for the optimization of vehicle safety (pedestrian and passenger), and for the further development of crash dummies. Real-3D-data-based documentation opens a new horizon for scientific reconstruction and animation, bringing added value and a real quality improvement to forensic science.
Abstract:
OBJECTIVES The purpose of the study was to provide empirical evidence about the reporting of methodology to address missing outcome data and the acknowledgement of their impact in Cochrane systematic reviews in the mental health field. METHODS Systematic reviews published in the Cochrane Database of Systematic Reviews after January 1, 2009 by three Cochrane Review Groups relating to mental health were included. RESULTS One hundred ninety systematic reviews were considered. Missing outcome data were present in at least one included study in 175 systematic reviews. Of these 175 systematic reviews, 147 (84%) accounted for missing outcome data by considering a relevant primary or secondary outcome (e.g., dropout). The implications of missing outcome data were reported in only 61 (35%) systematic reviews, primarily in the discussion section through comments on the amount of missing outcome data. One hundred forty eligible meta-analyses with missing data were scrutinized. Seventy-nine (56%) of them had studies with a total dropout rate between 10% and 30%. One hundred nine (78%) meta-analyses reported having performed intention-to-treat analysis by including trials with imputed outcome data. Sensitivity analysis for incomplete outcome data was implemented in fewer than 20% of the meta-analyses. CONCLUSIONS Reporting of the techniques for handling missing outcome data and of their implications for the findings of the systematic reviews is suboptimal.
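A minimal sketch of the kind of sensitivity analysis for incomplete outcome data that the review found was rarely implemented: recompute a trial's odds ratio under extreme assumptions about dropouts (all numbers hypothetical).

```python
# Best-case/worst-case imputation for dropouts in a two-arm trial.
# All counts are hypothetical, for illustration only.

def odds_ratio(e_t, n_t, e_c, n_c):
    """Odds ratio for e events out of n in treatment vs control arms."""
    return (e_t / (n_t - e_t)) / (e_c / (n_c - e_c))

# Observed completers: 40/90 events vs 55/85; 10 and 15 dropouts per arm.
obs = odds_ratio(40, 90, 55, 85)
best = odds_ratio(40, 100, 55 + 15, 100)    # dropouts: no events in T, all in C
worst = odds_ratio(40 + 10, 100, 55, 100)   # dropouts: all events in T, none in C
print(f"observed OR={obs:.2f}, bounds under extreme imputation: "
      f"{best:.2f} to {worst:.2f}")
```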
Abstract:
Abstract interpretation-based data-flow analysis of logic programs is, at this point, relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when the analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Existing proposals generally restrict in one way or another the classes of programs which can be analyzed. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially the recent ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs which use the full power of the language.
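As a toy illustration of the abstract-interpretation setting discussed (a real Prolog analyzer is far more involved), here is a three-point groundness-style domain with the lattice join an analyzer uses when merging results from alternative clauses:

```python
# Toy groundness-style abstract domain for logic variables.
# Ordering: 'ground' and 'free' are incomparable; 'any' is the top element.

def join(a: str, b: str) -> str:
    """Least upper bound of two abstract values."""
    return a if a == b else "any"

def join_subst(s1: dict, s2: dict) -> dict:
    """Merge abstract substitutions var -> {'ground','free','any'},
    as when analysis paths from alternative clauses rejoin."""
    return {v: join(s1.get(v, "any"), s2.get(v, "any"))
            for v in s1.keys() | s2.keys()}

# Two clauses bind X and Y differently; the analysis keeps their join.
clause1 = {"X": "ground", "Y": "free"}
clause2 = {"X": "ground", "Y": "ground"}
print(join_subst(clause1, clause2))   # {'X': 'ground', 'Y': 'any'}
```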
Abstract:
Global data-flow analysis of (constraint) logic programs, which is generally based on abstract interpretation [7], is reaching a comparatively high level of maturity. A natural question is whether it is time for its routine incorporation in standard compilers, something which, beyond a few experimental systems, has not happened to date. Such incorporation arguably makes good sense only if:
• the range of applications of global analysis is large enough to justify the additional complication in the compiler, and
• global analysis technology can deal with all the features of "practical" languages (e.g., the ISO-Prolog built-ins) and "scales up" for large programs.
We present a tutorial overview of a number of concepts and techniques directly related to the issues above, with special emphasis on the first one. In particular, we concentrate on novel uses of global analysis during program development and debugging, rather than on the more traditional application area of program optimization. The idea of using abstract interpretation for validation and diagnosis has been studied in the context of imperative programming [2] and also of logic programming. The latter work includes issues such as using approximations to reduce the burden posed on programmers by declarative debuggers [6, 3] and automatically generating and checking assertions [4, 5] (which includes the more traditional type checking of strongly typed languages, such as Gödel or Mercury [1, 8, 9]). We also review some solutions for scalability, including modular analysis, incremental analysis, and widening. Finally, we discuss solutions for dealing with meta-predicates, side-effects, delay declarations, constraints, dynamic predicates, and other such features which may appear in practical languages. In the discussion we draw both from the literature and from our experience, and that of others, in the development and use of the CIAO system analyzer. To emphasize the practical aspects of the solutions discussed, the presentation of several concepts is illustrated by examples run on the CIAO system, which makes extensive use of global analysis and assertions.
Abstract:
ACKNOWLEDGMENTS We thank the HIV nurses and physicians from the two HIV clinics involved in this study (Academic Medical Center, Amsterdam; Erasmus Medical Center, Rotterdam) for their input and collaboration. We also express our gratitude to the participating patients. Finally, we thank Nicolette Strik-Mulder for her help with transcribing the audio recordings. FUNDING This study was funded by ZonMw (the Netherlands), program “Doelmatigheidsonderzoek” (grant 171002208). This funding source had no role in study design, data collection, analysis, interpretation, or writing of the report.
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying platform's technical details. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing in mind this situation, we propose a model-driven approach which is based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time series analysis is required.
Abstract:
Remotely sensed data have been used extensively for environmental monitoring and modeling at a number of spatial scales; however, a limited range of satellite imaging systems often constrained the scales of these analyses. A wider variety of data sets is now available, allowing image data to be selected to match the scale of the environmental structure(s) or process(es) being examined. A framework is presented for use by environmental scientists and managers, enabling their spatial data collection needs to be linked to a suitable form of remotely sensed data. A six-step approach is used, combining image spatial analysis and scaling tools within the context of hierarchy theory. The main steps involved are: (1) identification of information requirements for the monitoring or management problem; (2) development of ideal image dimensions (scene model); (3) exploratory analysis of existing remotely sensed data using scaling techniques; (4) selection and evaluation of suitable remotely sensed data based on the scene model; (5) selection of suitable spatial analytic techniques to meet information requirements; and (6) cost-benefit analysis. Results from a case study show that the framework provided an objective mechanism to identify the relevant aspects of the monitoring problem and the environmental characteristics for selecting remotely sensed data and analysis techniques.
Abstract:
We have constructed cDNA microarrays for soybean (Glycine max L. Merrill), containing approximately 4,100 Unigene ESTs derived from axenic roots, to evaluate their application and utility for functional genomics of organ differentiation in legumes. We assessed microarray technology by conducting studies to evaluate the accuracy of microarray data and have found them to be both reliable and reproducible in repeat hybridisations. Several ESTs showed high levels (>50-fold) of differential expression in either root or shoot tissue of soybean. A small number of physiologically interesting, differentially expressed sequences found by microarray analysis were verified by both quantitative real-time RT-PCR and Northern blot analysis. There was a linear correlation (r² = 0.99, over 5 orders of magnitude) between microarray and quantitative real-time RT-PCR data. Microarray analysis of soybean has enormous potential not only for the discovery of new genes involved in tissue differentiation and function, but also for studying the expression of previously characterised genes, gene networks, and gene interactions in wild-type, mutant, or transgenic plants.
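A trivial sketch of the reported agreement check, computing r² between paired log-scale measurements (synthetic values spanning several orders of magnitude, not the soybean data):

```python
# Pearson r^2 between two noisy measurements of the same log expression
# ratios, mimicking the microarray vs. qRT-PCR comparison above.
import numpy as np

rng = np.random.default_rng(3)
true_log_ratio = rng.uniform(-4, 6, 30)                # spans orders of magnitude
microarray = true_log_ratio + rng.normal(0, 0.15, 30)  # noisy measurement 1
qrt_pcr = true_log_ratio + rng.normal(0, 0.15, 30)     # noisy measurement 2

r = np.corrcoef(microarray, qrt_pcr)[0, 1]
print(f"r^2 = {r ** 2:.3f}")   # near-linear agreement, as in the study
```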