913 results for Video interaction analysis : methods and methodology
Abstract:
Background: Identification of nontuberculous mycobacteria (NTM) based on phenotypic tests is time-consuming, labor-intensive, expensive and often provides erroneous or inconclusive results. In the molecular method referred to as PRA-hsp65, a fragment of the hsp65 gene is amplified by PCR and then analyzed by restriction digest; this rapid approach offers the promise of accurate, cost-effective species identification. The aim of this study was to determine whether species identification of NTM using PRA-hsp65 is sufficiently reliable to serve as the routine methodology in a reference laboratory. Results: A total of 434 NTM isolates were obtained from 5019 cultures submitted to the Instituto Adolfo Lutz, São Paulo, Brazil, between January 2000 and January 2001. Species identification was performed for all isolates using conventional phenotypic methods and PRA-hsp65. Phenotypic evaluation and PRA-hsp65 were concordant for 321 (74%) isolates; these assignments were presumed to be correct. For the remaining 113 discordant isolates, definitive species identification was obtained by sequencing a 441 bp fragment of hsp65. PRA-hsp65 identified 30 isolates with hsp65 alleles representing 13 previously unreported PRA-hsp65 patterns. Overall, species identification by PRA-hsp65 was significantly more accurate than by phenotypic methods (392 (90.3%) vs. 338 (77.9%), respectively; p < 0.0001, Fisher's test). Among the 333 isolates representing the most common pathogenic species, PRA-hsp65 provided an incorrect result for only 1.2%. Conclusion: PRA-hsp65 is a rapid and highly reliable method and deserves consideration by any clinical microbiology laboratory charged with performing species identification of NTM.
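A minimal sketch of the reported accuracy comparison, assuming a simple 2×2 correct/incorrect table built from the counts quoted above; this is an illustration, not the authors' published analysis script:

```python
# Hedged sketch: Fisher's exact test on the identification accuracies quoted
# in the abstract (392/434 for PRA-hsp65 vs. 338/434 for phenotypic methods).
# The 2x2 table construction is an assumption, not the published analysis.
from scipy.stats import fisher_exact

n = 434
correct_pra, correct_pheno = 392, 338
table = [
    [correct_pra, n - correct_pra],      # PRA-hsp65: correct / incorrect
    [correct_pheno, n - correct_pheno],  # phenotype: correct / incorrect
]
odds_ratio, p_value = fisher_exact(table)
print(f"PRA-hsp65 {correct_pra/n:.1%} vs phenotype {correct_pheno/n:.1%}")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2e}")
```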
Abstract:
An introduction to bacterial polysaccharides and the methods for structural determination is given in the first two parts of the thesis. In the structural elucidation of bacterial polysaccharides, NMR experiments are important, as is component analysis. A short description of immunochemical methods such as enzyme immunoassays is included, and two NMR techniques used for interaction studies, trNOE and STD NMR, are also discussed. The third part of the thesis discusses and summarizes the results from the included papers. The structures of the exopolysaccharides produced by two lactic acid bacteria are determined by one- and two-dimensional NMR experiments: one is a heteropolysaccharide produced by Streptococcus thermophilus and the other a homopolysaccharide produced by Propionibacterium freudenreichii. The structure of an acidic polysaccharide from a marine bacterium with two serine residues in the repeating unit is also investigated. The structural and immunological relationship between the O-antigenic polysaccharides of Escherichia coli strains 180/C3 and O5 is discussed and investigated. Finally, NMR-based interaction studies between an octasaccharide derived from the Salmonella enteritidis O-antigen and a bacteriophage are described.
Abstract:
The construction and use of multimedia corpora has been advocated for a while in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering attempt to apply a data-driven methodology to the study of the field of AVT, similarly to what has been done in the last few decades in the macro-field of Translation Studies. The research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna's Department of Interdisciplinary Studies in Translation, Languages and Culture. In order to quantify strategies of linguistic transfer of an AV product, we need to take into consideration not only the linguistic aspect of such a product but all the meaning-making resources deployed in the filmic text. Given that one major benefit of Forlixt 1 is the combination of audiovisual and textual data, this corpus allows the user to access primary data for scientific investigation and no longer rely on pre-processed material such as traditional annotated transcriptions. Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means, and methods adopted in their translation. By drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities is presented (dubbing, voice-over, interlingual and intralingual subtitling, audio description, etc.) in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contribution of various theories to the application of Corpus Linguistics' methods and tools to the field of Translation Studies (i.e. Descriptive Translation Studies, Polysystem Theory). In particular, we discussed how the use of corpora can help reduce the gap between qualitative and quantitative approaches. Subsequently, we reviewed the tools traditionally employed by Corpus Linguistics for the construction of traditional "written language" corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we reviewed existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate translation. The third chapter reviews Forlixt 1's main development steps, from a technical (IT design principles, data query functions) and methodological point of view, laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we describe the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database from a pedagogical perspective.
The fourth chapter lists the specific compilation criteria retained, as well as statistics for the two sub-corpora, presenting data broken down by language pair (French-Italian and German-Italian) and genre (cinema comedies, television soap operas and crime series). Next, we concentrated on the discussion of the results obtained from the analysis of summary tables reporting the frequency of categories applied to the French-Italian sub-corpus. The detailed observation of the distribution of categories identified in the original and dubbed corpus allowed us to empirically confirm some of the theories put forward in the literature, notably concerning the nature of the filmic text, the dubbing process and the features of dubbed Italian. This was possible by looking into some of the most problematic aspects, such as the rendering of sociolinguistic variation. The corpus also allowed us to consider hitherto neglected aspects, such as pragmatic, prosodic, kinetic, facial and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macro-translation trends were made for each type of sub-genre considered (cinematic and TV genres). On the grounds of this first quantitative investigation, the fifth chapter further examines the data by applying ad hoc models of analysis. Given the virtually infinite number of combinations of the categories adopted, and of the latter with searchable textual units, three qualitative and quantitative methods were designed, each concentrating on a particular translation dimension of the filmic text. The first was the cultural dimension, which focused specifically on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension, exploring the occurrence of phrasal verbs in the Italian dubbed corpus and ascertaining the influence of possible semiotic traits, such as gestures and facial expressions, on the adoption of related translation strategies. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality graphic and iconic elements were translated into Italian from an original corpus of both German and French films. After reviewing the main translation techniques at work, an exhaustive account of possible causes for their non-translation was also provided. By way of conclusion, the discussion of the results obtained from the distribution of annotation categories on the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline possible advantages and drawbacks of adopting a corpus-based approach to AVT studies. Even though possible updates and improvements were proposed to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument that makes it possible to carry out empirically sound contrastive studies that may be usefully replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two disciplines in which their use still lacks systematic investigation.
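As an illustration of the kind of per-category frequency table discussed for the French-Italian sub-corpus, the sketch below counts annotation categories per language pair; the record structure and category names are invented, not Forlixt 1's actual schema:

```python
# Illustrative sketch only: deriving a category frequency table from
# hypothetical annotation records. Field names are assumptions.
from collections import Counter

annotations = [
    {"pair": "FR-IT", "genre": "comedy", "category": "sociolinguistic"},
    {"pair": "FR-IT", "genre": "comedy", "category": "pragmatic"},
    {"pair": "FR-IT", "genre": "crime", "category": "semiotic"},
    {"pair": "DE-IT", "genre": "soap opera", "category": "pragmatic"},
]

freq = Counter(
    (a["pair"], a["category"]) for a in annotations if a["pair"] == "FR-IT"
)
for (pair, category), count in sorted(freq.items()):
    print(f"{pair:6} {category:16} {count}")
```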
Abstract:
Osteoarthritis (OA), or degenerative joint disease (DJD), is a pathology that affects the synovial joints and is characterised by a focal loss of articular cartilage and a subsequent bony reaction of the subchondral and marginal bone. Its etiology is best explained by a multifactorial model including age, sex, genetic and systemic factors, other predisposing diseases and functional stress. In this study the results of the investigation of a modern identified skeletal collection are presented. In particular, we focus on the relationship between the presence of OA at the various joints. The joint modifications have been analysed using a new methodology that allows the scoring of different degrees of expression of the features considered. Materials and Methods: The sample examined comes from the Sassari identified skeletal collection (part of the "Frassetto collections"). The individuals were born between 1828 and 1916 and died between 1918 and 1932. Information about sex and age is known for all the individuals. The occupation is known for 173 males and 125 females; the occupational data indicate a preindustrial and rural society. OA was diagnosed when eburnation (EB) or loss of morphology (LM) was present, or when at least two of the following were present: marginal lipping (ML), exostosis (EX) or erosion (ER). For each articular surface affected, a "mean score" was calculated, reflecting the "severity" of the alterations; a further score was calculated for each joint. In the analyses, sexes and age classes were always kept separate, and non-parametric tests were used for the statistical analyses. Results: The results show an increase of OA with age in all the joints analysed, in particular around 50 and 60 years. The shoulder, hip and knee are the joints most affected with ageing, while the ankle is the least affected; the correlation values confirm this result. The lesion showing the strongest correlation with age is ML. In our sample, males are more frequently and more severely affected by OA than females, particularly in the upper limbs, while the hip and knee are similarly affected in the two sexes. Lateralization shows some positive results, in particular in the right shoulder of males and in various articular surfaces, especially of the upper limb, in both males and females; articular surfaces and joints are almost always lateralized to the right. Occupational analyses did not show remarkable results, probably because of the homogeneity of the sample: the males, although performing different activities, were almost all employed in physically stressful work. No higher prevalence of knee and hip OA was found in farm workers compared with the other males. Discussion and Conclusion: In this work we propose a methodology for scoring the different features necessary to diagnose OA that allows the investigation of the severity of joint degeneration. This method is easier than the one proposed by Buikstra and Ubelaker (1994), but at the same time allows quite detailed recording of the features. The epidemiological results can be interpreted quite simply and are in accordance with other studies; the interpretation of the occupational results is more difficult, because many questions concerning the activities performed by the individuals of the collection during their lifespan cannot be resolved. Because of this, caution is suggested in the interpretation of bioarcheological specimens.
With this work we hope to contribute to the discussion of the puzzling problem of the etiology of OA. The possibility of studying identified skeletons will add important data to the description of the osseous features of OA, enriching the medical documentation, which is based on different criteria. Even if we are aware that the clinical diagnosis differs from the palaeopathological one, we think our work will be useful in clarifying some epidemiological as well as pathological aspects of OA.
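The diagnostic rule stated above lends itself to a compact implementation; the sketch below encodes it directly, with an illustrative data structure and grading scale that are assumptions, not the authors' recording protocol:

```python
# Sketch of the stated rule: OA is scored when eburnation (EB) or loss of
# morphology (LM) is present, or when at least two of marginal lipping (ML),
# exostosis (EX) and erosion (ER) are present. Grades are hypothetical.
def has_oa(surface: dict) -> bool:
    """surface maps feature codes to expression grades (0 = absent)."""
    if surface.get("EB", 0) > 0 or surface.get("LM", 0) > 0:
        return True
    minor = sum(1 for f in ("ML", "EX", "ER") if surface.get(f, 0) > 0)
    return minor >= 2

def mean_score(surface: dict) -> float:
    """Mean expression grade over the features present ('severity')."""
    grades = [g for g in surface.values() if g > 0]
    return sum(grades) / len(grades) if grades else 0.0

shoulder = {"EB": 0, "LM": 0, "ML": 2, "EX": 1, "ER": 0}
print(has_oa(shoulder), mean_score(shoulder))  # True 1.5
```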
Abstract:
BACKGROUND: The IL23R gene has been identified as a susceptibility gene for inflammatory bowel disease (IBD) in the North American population. The aim of our study was to test this association in a large German IBD cohort and to elucidate potential interactions with other IBD genes as well as phenotypic consequences of IL23R variants. METHODS: Genomic DNA from 2670 Caucasian individuals, including 833 patients with Crohn's disease (CD), 456 patients with ulcerative colitis (UC), and 1381 healthy unrelated controls, was analyzed for 10 IL23R SNPs. Genotyping also included the NOD2 variants p.Arg702Trp, p.Gly908Arg, and p.Leu1007fsX1008 and polymorphisms in SLC22A4/OCTN1 (1672 C→T) and SLC22A5/OCTN2 (-207 G→C). RESULTS: All IL23R gene variants analyzed displayed highly significant associations with CD. The strongest association was found for the SNP rs1004819 [P = 1.92 × 10^-11; OR 1.56; 95% CI 1.37-1.78]. 93.2% of the rs1004819 TT homozygous carriers, as compared to 78% of CC wildtype carriers, had ileal involvement [P = 0.004; OR 4.24; CI 1.46-12.34]. The coding SNP rs11209026 (p.Arg381Gln) was protective for CD [P = 8.04 × 10^-8; OR 0.43; CI 0.31-0.59]. Similar but weaker associations were found in UC. There was no evidence for epistasis between the IL23R gene and the CD susceptibility genes CARD15 and SLC22A4/5. CONCLUSION: IL23R is an IBD susceptibility gene but has no epistatic interaction with CARD15 and SLC22A4/5. rs1004819 is the major IL23R variant associated with CD in the German population, while the p.Arg381Gln IL23R variant is a protective marker for CD and UC.
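For readers unfamiliar with how such effect estimates are derived, the sketch below computes an odds ratio with a Woolf 95% confidence interval from generic 2×2 case/control counts; the counts are placeholders, not the study's genotype data:

```python
# Generic sketch: odds ratio and Woolf 95% CI from 2x2 counts.
# The counts below are invented placeholders.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Placeholder counts for a risk-allele carrier vs. non-carrier comparison:
or_, lo, hi = odds_ratio_ci(300, 380, 533, 1001)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```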
Abstract:
Although an increasing number of studies of technological, institutional and organizational change refer to the concepts of path dependence and path creation, few attempts have been made to consider these concepts explicitly in their methodological accounts. This paper addresses this gap and contributes to the literature by developing a comprehensive methodology that originates from the concepts of path dependence and path creation – path constitution analysis (PCA) – and allows for the integration of multi-actor constellations on multiple levels of analysis within a process perspective. Based upon a longitudinal case study in the field of semiconductors, we illustrate PCA ‘in action’ as a template for other researchers and critically examine its adequacy. We conclude with implications for further path-oriented inquiries.
Abstract:
Cytochrome P450 enzyme catalysis requires two electrons transferred from NADPH-cytochrome P450 reductase (reductase) to P450. Electrostatic charge-pairing has been proposed to be one of the major forces in the interaction between P450 and reductase. In order to obtain further insight into the molecular basis of the protein interaction, I used two methods, chemical modification and specific anti-peptide antibodies, to study the involvement and importance of charged amino acid residues. Acetylation of lysine residues of P450c and P450b by acetic anhydride dramatically inhibited the reductase-supported P450c-dependent ethoxycoumarin hydroxylation activity, whereas P450 activity supported by cumene hydroperoxide was relatively unchanged. The modification of lysine residues of P450c and P450b did not grossly disturb the protein conformation, as revealed by several spectral studies. This differential effect of lysine modification on P450 activity in the system reconstituted with reductase versus the system supported by cumene hydroperoxide suggested an important role for P450 lysine residues in the interaction with reductase. Using ¹⁴C-acetic anhydride, labelled lysine residues were identified on P450c and P450b: positions 97, 271, 279, and 407 for P450c, and 251, 384, 422, 433, and 473 for P450b. Alignment of the identified lysine residues of P450c and P450b with amino acid residues identified in other studies indicated that those residues reside in three major sequence regions. Modification of arginine residues of P450b by phenylglyoxal and 2,3-butanedione had no significant effect on P450 activity supported either by NADPH and reductase or by cumene hydroperoxide. Further studies using ¹⁴C-phenylglyoxal revealed no incorporation of phenylglyoxal into P450b. These results demonstrate a predominant role for the lysine residues of P450 in the electrostatic interaction with reductase. To map the protein binding sites on P450 and reductase, I generated three anti-peptide antibodies against regions of reductase and five anti-peptide antibodies against five putative reductase binding sites on P450c. These anti-peptide antibodies were affinity purified and characterized by ELISA and Western blot analysis. Inhibition experiments using these antibodies demonstrated that regions 109-120 and 204-220 of reductase are probably the two major binding sites for P450. The association of reductase with cytochromes P450 and cytochrome c may rely on different mechanisms. The data from experiments using anti-peptide (P450c) antibodies support an important role for P450c lysine residues 271/279 and 458/460 in the interaction with reductase.
Abstract:
Individual participant data (IPD) meta-analysis is an increasingly used approach for synthesizing and investigating treatment effect estimates. Over the past few years, numerous methods for conducting an IPD meta-analysis (IPD-MA) have been proposed, often making different assumptions and modeling choices while addressing a similar research question. We conducted a literature review to provide an overview of methods for performing an IPD-MA using evidence from clinical trials or non-randomized studies when investigating treatment efficacy. With this review, we aim to assist researchers in choosing the appropriate methods and provide recommendations on their implementation when planning and conducting an IPD-MA. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
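One family of approaches such a review typically covers is the classic two-stage IPD-MA: stage 1 estimates a treatment effect per study from its IPD, and stage 2 pools the estimates by inverse-variance weighting. A minimal sketch under invented study effects:

```python
# Hedged sketch of two-stage IPD meta-analysis pooling (fixed effect).
# The per-trial (log hazard ratio, standard error) pairs are placeholders.
import math

stage1 = [(-0.25, 0.10), (-0.10, 0.08), (-0.30, 0.15)]  # stage 1 output

weights = [1 / se**2 for _, se in stage1]
pooled = sum(w * est for (est, _), w in zip(stage1, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
print(f"pooled log HR = {pooled:.3f} +/- {1.96 * se_pooled:.3f}")
```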
Abstract:
AIMS: Transcatheter mitral valve replacement (TMVR) is an emerging technology with the potential to treat patients with severe mitral regurgitation at excessive risk for surgical mitral valve replacement. Multimodal imaging of the mitral valvular complex and surrounding structures will be an important component of patient selection for TMVR. Our aim was to describe and evaluate a systematic multi-slice computed tomography (MSCT) image analysis methodology that provides measurements relevant to transcatheter mitral valve replacement. METHODS AND RESULTS: A systematic step-by-step measurement methodology is described for structures of the mitral valvular complex, including the mitral valve annulus, left ventricle, left atrium, papillary muscles and left ventricular outflow tract. To evaluate reproducibility, two observers applied this methodology to a retrospective series of 49 cardiac MSCT scans in patients with heart failure and significant mitral regurgitation. For each of 25 geometrical metrics, we evaluated the inter-observer difference and intra-class correlation. The inter-observer difference was below 10% and the intra-class correlation was above 0.81 for the measurements of critical importance in the sizing of TMVR devices: the mitral valve annulus diameters, area, perimeter, the inter-trigone distance, and the aorto-mitral angle. CONCLUSIONS: MSCT can provide measurements that are important for patient selection and sizing of TMVR devices. These measurements have excellent inter-observer reproducibility in patients with functional mitral regurgitation.
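A sketch of the two reproducibility metrics named above, computed for a single hypothetical metric measured by two observers; the measurement values are illustrative, not the study's MSCT data, and the specific ICC form chosen here (two-way random effects, ICC(2,1)) is an assumption:

```python
# Hedged sketch: mean inter-observer percent difference and ICC(2,1)
# (Shrout-Fleiss two-way random effects) for two readers. Data invented.
import numpy as np

def icc_2_1(ratings):
    """ratings: (n_subjects, n_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    msr = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msc = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

obs1 = np.array([38.1, 40.5, 36.2, 42.3, 39.0])  # e.g. annulus diameter, mm
obs2 = np.array([37.5, 41.0, 35.8, 41.6, 39.4])
ratings = np.column_stack([obs1, obs2])

pct_diff = np.abs(obs1 - obs2) / ((obs1 + obs2) / 2) * 100
print(f"mean inter-observer difference: {pct_diff.mean():.1f}%")
print(f"ICC(2,1): {icc_2_1(ratings):.2f}")
```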
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it is shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time costs and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It is shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When repeated offers are allowed, however, both types of goods trade with probability one in equilibrium. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information, explaining these findings by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects for cooperation are shown to depend crucially on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
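The screening intuition from Chapter 1 can be caricatured in a few lines; the toy model below (invented valuations, reservation prices and delay cost, not the chapter's actual game) shows how a rejected low offer reveals quality and lets both types trade:

```python
# Toy version of the Chapter 1 screening intuition, NOT the chapter's model:
# valuations, reservation prices and the delay cost are invented. A rejected
# low offer reveals high quality, so both types end up trading.
V = {"low": 4, "high": 10}      # buyer's valuation by quality
COST = {"low": 2, "high": 7}    # seller's reservation price
DELAY = 0.5                     # per-party time cost of one rejection

def bargain(quality: str):
    offer1 = 3                  # screening offer: only a low type accepts
    if offer1 >= COST[quality]:
        return V[quality] - offer1, offer1 - COST[quality]
    offer2 = 8                  # rejection signals high quality
    return V[quality] - offer2 - DELAY, offer2 - COST[quality] - DELAY

for q in ("low", "high"):
    buyer, seller = bargain(q)
    print(f"{q:4} quality: buyer payoff {buyer:.1f}, seller payoff {seller:.1f}")
```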
Abstract:
BACKGROUND: The safety and efficacy of new-generation drug-eluting stents (DES) in women with multiple atherothrombotic risk (ATR) factors is unclear. METHODS AND RESULTS: We pooled patient-level data for women enrolled in 26 randomized trials. The study population was categorized based on the presence or absence of high ATR, defined as a history of diabetes mellitus, prior percutaneous or surgical coronary revascularization, or prior myocardial infarction. The primary end point was major adverse cardiovascular events, defined as a composite of all-cause mortality, myocardial infarction, or target lesion revascularization at 3 years of follow-up. Of the 10 449 women included in the pooled database, 5333 (51%) were at high ATR. Compared with women not at high ATR, those at high ATR had a significantly higher risk of major adverse cardiovascular events (15.8% versus 10.6%; adjusted hazard ratio: 1.53; 95% confidence interval: 1.34-1.75; P=0.006) and of all-cause mortality. In high-ATR women, the use of new-generation DES was associated with a significantly lower risk of 3-year major adverse cardiovascular events (adjusted hazard ratio: 0.69; 95% confidence interval: 0.52-0.92) compared with early-generation DES. The benefit of new-generation DES on major adverse cardiovascular events was uniform between high-ATR and non-high-ATR women, without evidence of interaction (P for interaction = 0.14). In a landmark analysis of high-ATR women, stent thrombosis rates were comparable between DES generations in the first year, whereas between 1 and 3 years the stent thrombosis risk was lower with new-generation devices. CONCLUSIONS: The use of new-generation DES in women at high ATR is associated with a benefit that is consistent over 3 years of follow-up and with a substantial improvement in very late thrombotic safety.
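As a rough illustration of the landmark approach mentioned above, the sketch below splits follow-up at the 1-year landmark and counts events only among patients still at risk; the patient records are invented:

```python
# Hedged sketch of landmark-analysis bookkeeping: events in 0-1 y and 1-3 y
# windows are tallied among patients still event-free at the window start.
# Each record: (time to event or censoring in years, had stent thrombosis).
patients = [(0.3, True), (2.1, False), (1.4, True), (3.0, False), (2.6, True)]

def landmark_counts(data, start, stop):
    at_risk = [(t, e) for t, e in data if t > start]  # reached the landmark
    events = sum(1 for t, e in at_risk if e and t <= stop)
    return events, len(at_risk)

for start, stop in ((0.0, 1.0), (1.0, 3.0)):
    ev, n = landmark_counts(patients, start, stop)
    print(f"{start:.0f}-{stop:.0f} y: {ev}/{n} events among patients at risk")
```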
Abstract:
Introduction: Food frequency questionnaires (FFQs) are used to study the association between dietary intake and disease. An instructional video may offer a low-cost, practical method of dietary assessment training for participants, thereby reducing recall bias in FFQs. There is little evidence in the literature of the effect of using instructional videos on FFQ-based intake. Objective: This analysis compared the reported energy and macronutrient intake of two groups that were randomized either to watch an instructional video before completing an FFQ or to view the same instructional video after completing the same FFQ. Methods: In the parent study, a diverse group of students, faculty and staff from Houston Community College were randomized to two groups, stratified by ethnicity, and completed an FFQ. The "video before" group watched an instructional video about completing the FFQ prior to answering the FFQ; the "video after" group watched the instructional video after completing the FFQ. The two groups were compared on mean daily energy (kcal/day), fat (g/day), protein (g/day), carbohydrate (g/day) and fiber (g/day) intakes using descriptive statistics and one-way ANOVA. Demographic, height, and weight information was collected. Dietary intakes were adjusted for total energy intake before the comparative analysis; BMI and age were ruled out as potential confounders. Results: There were no significant differences between the two groups in mean daily dietary intakes of energy, total fat, protein, carbohydrates and fiber. However, a pattern of higher energy intake and lower fiber intake was reported in the group that viewed the instructional video before completing the FFQ compared with those who viewed the video after. Discussion: Analysis of the difference in reported intake of energy and macronutrients showed an overall pattern, albeit not statistically significant, of higher intake in the video-before versus the video-after group. The application of instructional videos to dietary assessment requires further research to address the validity of reported dietary intakes in those who are randomized to watch an instructional video before reporting diet compared with a control group that does not view a video.
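A minimal sketch of the comparison described, assuming two independent groups and invented intake values (the parent study's data are not reproduced here):

```python
# Hedged sketch: one-way ANOVA comparing mean daily energy intake between
# the "video before" and "video after" groups. Values are placeholders.
from scipy.stats import f_oneway

video_before_kcal = [2150, 1980, 2420, 2310, 2050, 2230]
video_after_kcal = [2010, 1890, 2160, 2080, 1950, 2120]

f_stat, p_value = f_oneway(video_before_kcal, video_after_kcal)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```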
Abstract:
Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact that these methodologies have on the data analysis. Methods: We first evaluated whether the assumption of missing completely at random held for this study. We then conducted a secondary data analysis using a mixed linear model to handle missing data with three methodologies: (a) complete case analysis, (b) multiple imputation with an explicit model containing the outcome variables, time, and the interaction of time and treatment, and (c) multiple imputation with an explicit model containing the outcome variables, time, the interaction of time and treatment, and additional covariates (e.g., age, gender, smoking, years in school, marital status, housing, race/ethnicity, and whether participants played on an athletic team). Several comparisons were conducted, including the following: 1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; 2) MIF vs. FBO; and 3) MIF vs. MIO. Results: We first evaluated the patterns of missingness in this study: about 13% of participants showed monotone missing patterns, and about 3.5% showed non-monotone missing patterns. We then evaluated the assumption of missing completely at random using Little's missing completely at random (MCAR) test, which gave a chi-square test statistic of 167.8 with 125 degrees of freedom and an associated p-value of 0.006, indicating that the data could not be assumed to be missing completely at random. After that, we compared whether the three different strategies reached the same results. For the comparison between MIF and AO, as well as the comparison between MIF and FBO, only the multiple imputation with additional covariates under uncongenial and congenial models reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values obtained different results. Discussion: The study indicated, first, that missingness was crucial in this study and, second, that understanding the assumptions of the model was important, since we could not identify whether the data were missing at random or missing not at random. Future research should therefore explore more sensitivity analyses under the missing-not-at-random assumption.
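A simplified sketch of this pipeline on synthetic data: chained-equations imputation followed by a mixed linear model with a time-by-treatment interaction. A faithful analysis would fit the model on several imputed datasets and pool the estimates with Rubin's rules; all variable names here are invented:

```python
# Hedged sketch on synthetic data, not the study's analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(42)
n_subjects = 60
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), 2),
    "time": np.tile([0, 1], n_subjects),
    "treatment": np.repeat(rng.integers(0, 2, n_subjects), 2),
})
df["drinks"] = 10 - 2 * df["time"] * df["treatment"] + rng.normal(0, 2, len(df))
df.loc[rng.random(len(df)) < 0.15, "drinks"] = np.nan  # ~15% missing outcomes

imp = MICEData(df[["drinks", "time", "treatment"]])
imp.update_all(5)                                      # 5 imputation cycles
completed = imp.data.assign(subject=df["subject"].values)

model = smf.mixedlm("drinks ~ time * treatment", completed, groups="subject")
print(model.fit().summary())
```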
Abstract:
The overall objective of this research project is to enrich geographic data with temporal and semantic components in order to significantly improve the spatio-temporal analysis of geographic phenomena. To achieve this goal, we intend to establish and incorporate three new layers (structures) into the core of the Geographic Information by using mark-up languages, as well as to define a set of methods and tools that enable the system to retrieve and exploit such layers (semantic-temporal, geosemantic, and incremental spatio-temporal). Besides these layers, we also propose a set of models (temporal and spatial) and two semantic engines that make the most of the enriched geographic data. The roots of the project and its definition were previously presented in Siabato & Manso-Callejo 2011. In this new position paper, we extend that work by clearly delineating the methodology and the foundations on which we will base the definition of the main components of this research: the spatial model, the temporal model, the semantic layers, and the semantic engines. Putting together the former paper and this new work, we try to present a comprehensive description of the whole process, from pinpointing the basic problem to describing and assessing the solution. In this new article we just outline the methods and the background, describing how we intend to define the components and integrate them into the GI.
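Purely as a hypothetical illustration of what a mark-up-based semantic-temporal layer for one geographic feature might look like (the element names are invented, not the project's schema):

```python
# Hypothetical sketch: building a semantic-temporal enrichment record for a
# geographic feature with a mark-up language. Element names are invented.
import xml.etree.ElementTree as ET

feature = ET.Element("feature", id="bridge-42")
geom = ET.SubElement(feature, "geometry", srs="EPSG:4326")
geom.text = "POINT(-3.7038 40.4168)"
temporal = ET.SubElement(feature, "temporalLayer")
ET.SubElement(temporal, "validTime", start="1998-05-01", end="2011-12-31")
semantic = ET.SubElement(feature, "semanticLayer")
ET.SubElement(semantic, "concept", uri="http://example.org/onto#Bridge")

print(ET.tostring(feature, encoding="unicode"))
```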
Abstract:
Wireless sensor networks (WSNs) have shown their potential in various applications, which bring a lot of benefits to users in both research and industrial areas. For many setups, it is envisioned that WSNs will consist of tens to hundreds of nodes that operate on small batteries. However, due to the diversity of deployment environments and the resource constraints on radio communication, sensing ability and energy supply, it is a very challenging issue to plan an optimized WSN topology and predict its performance before real deployment. During the network planning phase, connectivity, coverage, cost, network longevity and service quality should all be considered. Designers therefore need comprehensive and interdisciplinary knowledge, including networking, radio engineering, embedded systems and so on, in order to efficiently construct a reliable WSN for any specific type of environment. There is still a lack of analysis and experience to guide WSN designers in efficiently constructing a WSN topology without many trials, so simulation is a feasible approach to the quantitative analysis of the performance of wireless sensor networks. However, the existing planning algorithms and tools have, to some extent, serious limitations for practically designing reliable WSN topologies. Only a few of them tackle the 3D deployment issue, and an overwhelming number of works place devices in a 2D scheme. Without considering the full dimension, the impact of the environment on the performance of the WSN is not completely studied, and thus the values of evaluated metrics such as connectivity and sensing coverage are not sufficiently accurate to make a proper decision. Even fewer planning methods model the sensing coverage and radio propagation by considering the realistic scenario where obstacles exist. Radio signals propagate with multi-path phenomena in the real world, in which direct paths, reflected paths and diffracted paths contribute to the received signal strength. Besides, obstacles between the sensor and the sensed objects might block the sensing signals, thus creating coverage holes in the application. None of the existing planning algorithms model the network longevity and packet delivery capability properly and practically; they often employ unilateral and unrealistic formulations, and their optimization targets are often one-sided. Without comprehensive evaluation of the important metrics, the performance of planned WSNs cannot be reliably and entirely optimized. Modeling the environment is usually time-consuming and very costly, while none of the current works offer a method to model the 3D deployment environment efficiently and accurately. Therefore many researchers are trapped by this issue, and their algorithms can only be evaluated in the same scenario, without the possibility of testing their robustness and feasibility for implementation in different environments. In this thesis, we propose a novel planning methodology and an intelligent WSN planning tool to assist WSN designers in efficiently planning reliable WSNs. First of all, a new method is proposed to efficiently and automatically model 3D indoor and outdoor environments. To the best of our knowledge, this is the first time that the advantages of image understanding algorithms are applied to automatically reconstruct 3D outdoor and indoor scenarios for signal propagation and network planning purposes.
The experimental results indicate that the proposed methodology is able to accurately recognize different objects from satellite images of the outdoor target regions and from the scanned floor plans of indoor areas. Its mechanism offers users the flexibility to reconstruct different types of environment without any human interaction, thereby significantly reducing the human effort, cost and time spent on reconstructing a 3D geographic database and allowing WSN designers to concentrate on the planning issues. Secondly, an efficient ray-tracing engine is developed to accurately and practically model radio propagation and sensing signals on the constructed 3D map. The engine contributes efficiency and accuracy to the estimated results. By using image processing concepts, including the kd-tree space division algorithm and a modified polar sweep algorithm, rays are traced efficiently without testing all the primitives in the scene. The proposed radio propagation model emphasizes not only the materials of obstacles but also their locations along the signal path. The sensing signal of sensor nodes, which is sensitive to obstacles, benefits from the ray-tracing algorithm via obstacle detection. The performance of this modelling method is robust and accurate compared with conventional methods, and the experimental results imply that this methodology is suitable for both outdoor urban scenes and indoor environments. Moreover, it can be applied to either GSM communication or the ZigBee protocol by varying the frequency parameter of the radio propagation model. Thirdly, a WSN planning method is proposed to tackle the above-mentioned challenges and efficiently deploy reliable WSNs. More metrics (connectivity, coverage, cost, lifetime, packet latency and packet drop rate) are modeled more practically than in other works. In particular, a 3D ray-tracing method is used to model the radio links and sensing signals, which are sensitive to the obstruction of obstacles; network routing is constructed using the AODV protocol; and the network longevity, packet delay and packet drop rate are obtained by simulating practical events in the WSNet simulator, which, to the best of our knowledge, is the first time that a network simulator has been involved in a planning algorithm. Moreover, a multi-objective optimization algorithm is developed to cater for the characteristics of WSNs. The capability of providing multiple optimized solutions simultaneously allows users to make their own decisions accordingly, and the results are more comprehensively optimized than those of other state-of-the-art algorithms. iMOST is developed by integrating the introduced algorithms to assist WSN designers in efficiently planning reliable WSNs for different configurations. The abbreviated name iMOST stands for Intelligent Multi-objective Optimization Sensor network planning Tool. iMOST contributes: (1) convenient operation with a user-friendly visual interface; (2) efficient and automatic 3D database reconstruction and fast 3D object design for both indoor and outdoor environments; (3) multiple multi-objective optimized 3D deployment solutions, allowing users to configure the network properties so that the tool can adapt to various WSN applications; (4) deployment solutions in the 3D space and the corresponding evaluated performance, presented visually to users; and (5) the Node Placement Module of iMOST, available online together with the source code of the other two rebuilt heuristics.
WSN designers will therefore benefit from this tool in efficiently constructing environment databases and in practically and efficiently planning reliable WSNs for both outdoor and indoor applications. With the open source code, they are also able to compare their own algorithms with ours and contribute to this academic field. Finally, solid real-world results are obtained for both indoor and outdoor WSN planning. Deployments have been realized for both indoor and outdoor environments based on the provided planning solutions, and the measured results coincide well with the estimated results. The proposed planning algorithm is adaptable to the WSN designer's requirements and configuration, and it offers the flexibility to plan small- and large-scale, indoor and outdoor 3D deployments. The thesis is organized in 7 chapters. In Chapter 1, WSN applications and the motivations of this work are introduced, the state-of-the-art planning algorithms and tools are reviewed, the challenges are stated and the proposed methodology is briefly introduced. In Chapter 2, the proposed 3D environment reconstruction methodology is introduced and its performance is evaluated for both outdoor and indoor environments. The developed ray-tracing engine and the proposed radio propagation modelling method are described in detail in Chapter 3, and their performance is evaluated in terms of computational efficiency and accuracy. Chapter 4 presents the modelling of the important metrics of WSNs and the proposed multi-objective optimization planning algorithm, whose performance is compared with the other state-of-the-art planning algorithms. The intelligent WSN planning tool iMOST is described in Chapter 5. Real WSN deployments are carried out based on the planned solutions for both indoor and outdoor scenarios, and the measured data and results are analysed in Chapter 6. Chapter 7 concludes the thesis and discusses future work.
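As a toy illustration of the multi-objective selection step, the sketch below filters candidate deployments to the Pareto-optimal set for (coverage, cost, lifetime); the candidates are invented, and the actual tool couples this kind of search with the ray-traced link and coverage models described above:

```python
# Hedged sketch: Pareto filtering of candidate deployments over three
# objectives (coverage up, cost down, lifetime up). Candidates are invented.
candidates = [
    {"id": "A", "coverage": 0.92, "cost": 48, "lifetime": 210},
    {"id": "B", "coverage": 0.88, "cost": 40, "lifetime": 230},
    {"id": "C", "coverage": 0.90, "cost": 55, "lifetime": 190},
    {"id": "D", "coverage": 0.95, "cost": 60, "lifetime": 240},
]

def dominates(p, q):
    better_eq = (p["coverage"] >= q["coverage"] and p["cost"] <= q["cost"]
                 and p["lifetime"] >= q["lifetime"])
    strictly = (p["coverage"] > q["coverage"] or p["cost"] < q["cost"]
                or p["lifetime"] > q["lifetime"])
    return better_eq and strictly

pareto = [p for p in candidates
          if not any(dominates(q, p) for q in candidates)]
print([p["id"] for p in pareto])  # C is dominated by A
```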