923 results for "An introduction to visual research methods in tourism"
Abstract:
In this work, different methods for estimating thin film residual stresses from instrumented indentation data were analyzed. This study considered procedures proposed in the literature, as well as a modification of one of these methods and a new approach based on the effect of residual stress on the hardness value calculated via the Oliver and Pharr method. The analysis of these methods was centered on an axisymmetric two-dimensional finite element model, which was developed to simulate instrumented indentation testing of thin ceramic films deposited onto hard steel substrates. Simulations were conducted varying the level of film residual stress, film strain hardening exponent, film yield strength, and film Poisson's ratio. Different ratios of maximum penetration depth h_max over film thickness t were also considered, including h_max/t = 0.04, for which the contribution of the substrate to the mechanical response of the system is not significant. Residual stresses were then calculated following the procedures mentioned above and compared with the values used as input in the numerical simulations. In general, results indicate that the difference each method shows with respect to the input values depends on the conditions studied. The method by Suresh and Giannakopoulos consistently overestimated the values when stresses were compressive. The method by Wang et al. showed less dependence on h_max/t than the others.
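Since the new approach above rests on how residual stress shifts the hardness obtained from the Oliver and Pharr analysis, the following minimal sketch shows that hardness calculation for a single load-depth measurement. It assumes a Berkovich indenter with the ideal area function A_c = 24.56 h_c^2 and purely illustrative input values; it is not the finite element workflow used in the study.

```python
def oliver_pharr_hardness(P_max, h_max, S, epsilon=0.75):
    """Hardness from the unloading branch of an indentation curve (Oliver & Pharr).

    P_max : peak load [mN]
    h_max : penetration depth at peak load [um]
    S     : unloading stiffness dP/dh evaluated at peak load [mN/um]
    Returns hardness in GPa (mN/um^2 is numerically equal to GPa).
    """
    h_c = h_max - epsilon * P_max / S      # contact depth
    A_c = 24.56 * h_c ** 2                 # ideal Berkovich contact area
    return P_max / A_c

# Hypothetical load-depth values for a thin ceramic film (illustration only)
print(oliver_pharr_hardness(P_max=10.0, h_max=0.25, S=120.0))   # ~11.6 GPa
```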
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients, and information-theoretic measures such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples.
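As a rough illustration of the performance measures mentioned above, the sketch below fits a plain ("naive") logistic regression to simulated default data and computes sensitivity and specificity. The data-generating model, the 0.5 cutoff and the sample size are arbitrary assumptions, not the paper's simulation design or the sample selection model of Cramer (2004).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Simulated credit data: two covariates and a default indicator (1 = default)
n = 5000
X = rng.normal(size=(n, 2))
p_default = 1 / (1 + np.exp(-(-2.0 + 1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = rng.binomial(1, p_default)

# "Naive" logistic regression fitted on the full sample
model = LogisticRegression().fit(X, y)
y_hat = (model.predict_proba(X)[:, 1] >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```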
Abstract:
This report focuses on the 2005 annual meeting held in Caxambu, Minas Gerais, Brazil, which was convened and organized by the Brazilian Society of Protozoology (http://www.sbpz.org.br/). This is an annual event, and details of these meetings can be found on the Society's website. Within the space available it has been impossible to cover all the important and fascinating contributions; what is presented are our personal views of the meeting's scientific highlights and new developments. The contents undoubtedly reflect each author's scientific interests and expertise. Fuller details of the round tables, seminars and posters can be consulted online at http://www.sbpz.org.br/livroderesumos2005.php.
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches, Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's selection model (Heckman, 1979), being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance in order to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is non-parametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead uses the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. The attention is focused on Rubin's Potential Outcome Approach, matching methods, and, briefly, Heckman's selection model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the original contribution: a simulation study that checks the performance of the method for a given dependence setting and an application to a real data set. Finally, we discuss the results, draw conclusions and outline future perspectives.
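The inertia-based measure of dependence between the covariates X and the treatment indicator T is only described conceptually above. As a loose illustration of that type of quantity, the sketch below computes the total inertia (Pearson chi-square divided by n) of a cross-tabulation between a combined categorical covariate profile and a simulated, self-selected treatment; the variables and the selection mechanism are hypothetical, and this is not the thesis's partial dependence analysis or its asymptotic imbalance test.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 2000

# Hypothetical pre-treatment categorical covariates
df = pd.DataFrame({
    "education": rng.choice(["low", "mid", "high"], n),
    "region": rng.choice(["north", "south"], n),
})
# Self-selection: treatment probability depends on education
p_treat = df["education"].map({"low": 0.2, "mid": 0.5, "high": 0.8})
df["T"] = rng.binomial(1, p_treat)

# Total inertia of the X-by-T cross-tabulation (chi-square / n):
# zero when X and T are independent, i.e. no imbalance in these covariates.
profile = df["education"] + "/" + df["region"]        # combined covariate profile
table = pd.crosstab(profile, df["T"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"total inertia = {chi2 / n:.4f}  (chi2 = {chi2:.1f}, p = {p_value:.3g})")
```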
Abstract:
In recent years, TNFRSF13B coding variants have been implicated by clinical genetics studies in Common Variable Immunodeficiency (CVID), the most common clinically relevant primary immunodeficiency in individuals of European ancestry, but their functional effects in relation to the development of the disease have not been entirely established. To examine the potential contribution of such variants to CVID, the more comprehensive perspective of an evolutionary approach was applied in this study, underlining the belief that evolutionary genetics methods can play a role in dissecting the origin, causes and diffusion of human diseases, representing a powerful tool also in human health research. For this purpose, the TNFRSF13B coding region was sequenced in 451 healthy individuals belonging to 26 worldwide populations, in addition to 96 control, 77 CVID and 38 Selective IgA Deficiency (IgAD) individuals from Italy, leading to the first global picture of TNFRSF13B nucleotide diversity and haplotype structure and making it possible to suggest its evolutionary history. A slow rate of evolution (both within our species and in comparison with the chimpanzee), low levels of geographical structure in genetic diversity, and the absence of recent population-specific selective pressures were observed for the examined genomic region, suggesting that the geographical distribution of its variability is more plausibly related to its involvement also in innate immunity rather than in adaptive immunity only. This, together with the extremely subtle differences observed between disease and healthy samples, suggests that CVID might be more likely related to still unknown environmental and genetic factors, rather than to the nature of TNFRSF13B variants only.
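The abstract summarizes nucleotide diversity for the TNFRSF13B coding region; as a small, generic illustration of that summary statistic (not the study's actual analysis pipeline or data), the sketch below computes per-site nucleotide diversity as the mean number of pairwise differences per site across a toy alignment.

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Per-site nucleotide diversity (pi): mean pairwise differences / sequence length.

    seqs: list of aligned, equal-length DNA sequences (strings).
    """
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

# Toy alignment of hypothetical haplotypes (not real TNFRSF13B data)
haplotypes = ["ATGGCTACGT", "ATGGCTACGA", "ATGACTACGT", "ATGGCTACGT"]
print(nucleotide_diversity(haplotypes))   # 0.1 for this toy example
```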
Abstract:
Here I focus on three main topics that best represent the projects I have worked on during my three-year PhD, spent in different research laboratories and addressing computationally and practically important problems, all related to modern molecular genomics. The first topic is the use of a livestock species (the pig) as a model of obesity, a complex human dysfunction. My efforts here concern the detection and annotation of Single Nucleotide Polymorphisms. I developed a pipeline for mining human and porcine sequences. Starting from a set of human genes related to obesity, the platform returns a list of annotated porcine SNPs extracted from a new set of potential obesity genes. 565 of these SNPs were analyzed on an Illumina chip to test their involvement in obesity in a population of more than 500 pigs. Results will be discussed. All the computational analyses and experiments were done in collaboration with the Biocomputing group and Dr. Luca Fontanesi, respectively, under the direction of Prof. Rita Casadio at the University of Bologna, Italy. The second topic concerns developing a methodology, based on Factor Analysis, to simultaneously mine information from different levels of biological organization. With specific test cases, we develop models of the complexity of the mRNA-miRNA molecular interaction in brain tumors, measured indirectly by microarray and quantitative PCR. This work was done under the supervision of Prof. Christine Nardini at the "CAS-MPG Partner Institute for Computational Biology" in Shanghai, China (founded jointly by the Max Planck Society and the Chinese Academy of Sciences). The third topic concerns the development of a new method to overcome the variety of PCR technologies routinely adopted to characterize the unknown flanking DNA regions of a viral integration locus in the human genome after clinical gene therapy. This new method is entirely based on next generation sequencing, and it reduces the time required to detect insertion sites while decreasing the complexity of the procedure. This work was done in collaboration with the group of Dr. Manfred Schmidt at the Nationales Centrum für Tumorerkrankungen (Heidelberg, Germany), supervised by Dr. Annette Deichmann and Dr. Ali Nowrouzi. Furthermore, I add as an appendix the description of an R package for gene network reconstruction that I helped to develop for scientific use (http://www.bioconductor.org/help/bioc-views/release/bioc/html/BUS.html).
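The second project uses Factor Analysis to jointly mine measurements from different molecular levels. As a loose illustration only (the latent structure, sample sizes and noise level are invented, and this is not the thesis model), the sketch below fits a factor model to a stacked matrix of hypothetical mRNA and miRNA profiles.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Hypothetical expression data: 40 tumour samples, 30 mRNAs + 10 miRNAs,
# driven by 3 shared latent factors plus noise.
n_samples, n_mrna, n_mirna, n_factors = 40, 30, 10, 3
latent = rng.normal(size=(n_samples, n_factors))
loadings = rng.normal(size=(n_factors, n_mrna + n_mirna))
expression = latent @ loadings + 0.5 * rng.normal(size=(n_samples, n_mrna + n_mirna))

# Fit a factor model on the combined mRNA/miRNA matrix; features (columns)
# that load on the same latent factor are candidates for joint regulation.
fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(expression)
print(fa.components_.shape)   # (3, 40): one loading vector per latent factor
```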
Abstract:
Introduction: Small animal models are widely used in basic research. However, experimental systems requiring extracorporeal circuits are frequently confronted with limitations related to equipment size. This is particularly true for oxygenators in systems with limited volumes. Thus, we aimed to develop and validate an ultra mini-oxygenator for low-volume, buffer-perfused systems. Methods: We manufactured a series of ultra mini-oxygenators with approximately 175 aligned, microporous, polypropylene hollow fibers contained inside a shell, which is sealed at each of its two extremities to isolate the perfusate and gas compartments. With this construction, gas passes through the hollow fibers, while perfusate circulates around the fibers. Performance of the ultra mini-oxygenators (oxygen partial pressure (PO2), gas and perfusate flow, perfusate pressure and temperature drop) was assessed with modified Krebs-Henseleit buffer in an in vitro perfusion circuit and an ex vivo rat heart preparation. Results: Mean priming volume of the ultra mini-oxygenators was 1.2±0.5 mL and, on average, 86±6% of fibers were open (n=17). In vitro, effective oxygenation (PO2=400-500 mmHg) was achieved for all flow rates up to 50 mL/min and remained stable for at least 2 hours (n=5). Oxygenation was also effective and stable (PO2=456±40 mmHg) in the isolated heart preparation for at least 60 minutes ("venous" PO2=151±11 mmHg; n=5). Conclusions: We have established a reproducible procedure for the fabrication of ultra mini-oxygenators, which provide reliable and stable oxygenation for at least 60-120 min. These oxygenators are especially attractive for pre-clinical protocols using small, rather than large, animals.
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech, and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
Abstract:
Latent class regression models are useful tools for assessing associations between covariates and latent variables. However, evaluation of key model assumptions cannot be performed using methods from standard regression models, due to the unobserved nature of the latent outcome variables. This paper presents graphical diagnostic tools for evaluating whether latent class regression models adhere to the standard assumptions of the model: conditional independence and non-differential measurement. An integral part of these methods is the use of a Markov chain Monte Carlo (MCMC) estimation procedure. Unlike standard maximum likelihood implementations for latent class regression model estimation, the MCMC approach allows us to calculate posterior distributions and point estimates of any functions of the parameters. It is this convenience that allows us to provide the diagnostic methods we introduce. As a motivating example, we present an analysis of the association between depression and socioeconomic status, using data from the Epidemiologic Catchment Area study. We consider a latent class regression analysis investigating the association between depression and socioeconomic status measures, in which the latent variable depression is regressed on education and income indicators, in addition to age, gender, and marital status variables. While the fitted latent class regression model yields interesting results, the model parameters are found to be invalid due to violations of the model assumptions. The violation of these assumptions is clearly identified by the presented diagnostic plots. These methods can be applied to standard latent class and latent class regression models, and the general principle can be extended to evaluate model assumptions in other types of models.
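The practical point that MCMC output yields a posterior distribution for any function of the parameters can be shown with a tiny sketch. The coefficient draws below are simulated stand-ins (in a real analysis they would come from the sampler), and the derived quantity, a difference in class-membership probability between two education levels, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in posterior draws for an intercept and an education coefficient of a
# latent class membership model (in practice these come from the MCMC sampler).
beta0 = rng.normal(-1.0, 0.15, size=4000)
beta_edu = rng.normal(0.8, 0.20, size=4000)

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Posterior of a derived quantity: difference in P(depressed class) between
# high (edu=1) and low (edu=0) education, computed draw by draw.
diff = inv_logit(beta0 + beta_edu) - inv_logit(beta0)
print(f"posterior mean = {diff.mean():.3f}, 95% CrI = "
      f"({np.quantile(diff, 0.025):.3f}, {np.quantile(diff, 0.975):.3f})")
```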
Abstract:
We performed a Rey visual design learning test (RVDLT) in 17 subjects and measured intervoxel coherence (IC) by DTI as an indication of connectivity, to investigate whether visual memory performance depends on white matter structure in healthy persons. IC considers the orientation of the adjacent voxels and has a better signal-to-noise ratio than the commonly used fractional anisotropy index. Voxel-based t-test analysis of the IC values was used to identify neighboring voxel clusters with significant differences between 7 low and 10 high test performers. We detected 9 circumscribed significant clusters (p < .01) with lower IC values in low performers than in high performers, with centers of gravity located in the left and right superior temporal regions, the corpus callosum, the left superior longitudinal fascicle, and the left optic radiation. Using non-parametric correlation analysis, IC and memory performance were significantly correlated in each of the 9 clusters (r = .61 to .81; df = 15, p < .01 to p < .0001). The findings provide in vivo evidence for the contribution of white matter structure to visual memory in healthy people.
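The group comparison described above rests on voxel-wise two-sample t-tests between low and high performers. The sketch below shows the generic form of such a test on synthetic coherence maps; the group means, voxel count and effect location are invented, and no smoothing, cluster-forming or multiple-comparison step from the actual study is reproduced.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

# Synthetic intervoxel-coherence maps: 7 low and 10 high performers, 1000 voxels
low  = rng.normal(0.30, 0.05, size=(7, 1000))
high = rng.normal(0.30, 0.05, size=(10, 1000))
high[:, :50] += 0.06            # a small "cluster" of voxels with higher IC

# Voxel-wise independent-samples t-test (one test per voxel)
t, p = ttest_ind(high, low, axis=0)
print("voxels with p < .01:", int((p < 0.01).sum()))
```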
Abstract:
Introduction: Current empirical findings indicate that the efficiency of decision making (both for experts and near-experts) in simple situations is reduced under increased stress (Wilson, 2008). Explaining this phenomenon, Attentional Control Theory (ACT; Eysenck et al., 2007) postulates an impairment of attentional processes resulting in less efficient processing of visual information. From a practitioner's perspective, it would be highly relevant to know whether this phenomenon can also be found in complex sport situations such as the game of football. Consequently, in the present study, decision making of football players was examined under regular vs. increased anxiety conditions. Methods: 22 participants (11 experts and 11 near-experts) viewed 24 complex football situations (counterbalanced) in two anxiety conditions from the perspective of the last defender. They had to decide as quickly and accurately as possible on the next action of the player in possession (options: shot on goal, dribble, or pass to a designated team member) for equal numbers of trials in a near and a far distance condition (based on the position of the player in possession). Anxiety was manipulated via a competitive environment, false feedback and ego threats. Decision time and accuracy, gaze behaviour (e.g., fixation duration on different locations) as well as state anxiety and mental effort were used as dependent variables and analysed with 2 (expertise) x 2 (distance) x 2 (anxiety) ANOVAs with repeated measures on the last two factors. Besides expertise differences, it was hypothesised that, based on ACT, increased anxiety reduces performance efficiency and impairs gaze behaviour. Results and Discussion: Anxiety was manipulated successfully, indicated by higher ratings of state anxiety, F(1, 20) = 13.13, p < .01, ηp² = .40. Besides expertise differences in decision making (experts responded faster, F(1, 20) = 11.32, p < .01, ηp² = .36, and more accurately, F(1, 20) = 23.93, p < .01, ηp² = .55, than near-experts), decision time, F(1, 20) = 9.29, p < .01, ηp² = .32, and mental effort, F(1, 20) = 7.33, p = .01, ηp² = .27, increased for both groups in the high anxiety condition. This result confirms the ACT assumption that processing efficiency is reduced when anxious. Replicating earlier findings, a significant expertise by distance interaction was observed, F(1, 18) = 18.53, p < .01, ηp² = .51, with experts fixating longer on the player in possession or the ball in the near distance condition and longer on other opponents, teammates and free space in the far distance condition. This shows that experts are able to adjust their gaze behaviour to the affordances of the displayed playing patterns. Additionally, a three-way interaction was found, F(1, 18) = 7.37, p = .01, ηp² = .29, revealing that experts used a reduced number of fixations in the far distance condition when anxious, indicating a reduced ability to pick up visual information. Since especially the visual search behaviour of experts was impaired, the ACT prediction that particularly top-down processes are affected by anxiety was confirmed. Taken together, the results show that sports performance is negatively influenced by anxiety, since longer response times, higher mental effort and inefficient visual search behaviour were observed. From a practitioner's perspective, this finding might suggest a preference for (implicit) perceptual-cognitive training; however, this recommendation needs to be empirically supported in intervention studies.
References: Eysenck, M. W., Derakshan, N., Santos, R., & Calvo, M. G. (2007). Anxiety and cognitive performance: Attentional control theory. Emotion, 7, 336-353. Wilson, M. (2008). From processing efficiency to attentional control: A mechanistic account of the anxiety-performance relationship. International Review of Sport and Exercise Psychology, 1, 184-201.
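The partial eta squared values reported in the abstract above follow directly from each F statistic and its degrees of freedom via eta_p^2 = F*df1 / (F*df1 + df2); the short sketch below checks this for the state-anxiety manipulation effect.

```python
def partial_eta_squared(F, df1, df2):
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return (F * df1) / (F * df1 + df2)

# State-anxiety manipulation check reported above: F(1, 20) = 13.13 -> ~.40
print(round(partial_eta_squared(13.13, 1, 20), 2))
```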
Abstract:
INTRODUCTION Native MR angiography (N-MRA) is considered an imaging alternative to contrast-enhanced MR angiography (CE-MRA) for patients with renal insufficiency. Lower intraluminal contrast in N-MRA often leads to failure of the segmentation process in commercial algorithms. This study introduces an in-house 3D model-based segmentation approach used to compare both sequences by automatic 3D lumen segmentation, allowing for evaluation of differences in aortic lumen diameter as well as differences in length between both acquisition techniques at every possible location. METHODS AND MATERIALS Sixteen healthy volunteers underwent 1.5-T MR angiography (MRA). For each volunteer, two different MR sequences were performed: CE-MRA, a gradient echo Turbo FLASH sequence, and N-MRA, a respiratory- and cardiac-gated, T2-weighted 3D SSFP sequence. Datasets were segmented using a 3D model-based ellipse-fitting approach with a single seed point placed manually above the celiac trunk. The segmented volumes were manually cropped from the left subclavian artery to the celiac trunk to avoid errors due to side branches. Diameters, volumes and centerline lengths were computed for intraindividual comparison. For statistical analysis, the Wilcoxon signed-rank test was used. RESULTS Average centerline length obtained with N-MRA was 239.0±23.4 mm compared to 238.6±23.5 mm for CE-MRA, without significant difference (P=0.877). Average maximum diameter obtained with N-MRA was 25.7±3.3 mm compared to 24.1±3.2 mm for CE-MRA (P<0.001). In agreement with the difference in diameters, volumes obtained with N-MRA (100.1±35.4 cm³) were consistently and significantly larger than with CE-MRA (89.2±30.0 cm³) (P<0.001). CONCLUSIONS 3D morphometry shows highly similar centerline lengths for N-MRA and CE-MRA, but systematically higher diameters and volumes for N-MRA.
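The intraindividual comparisons above use the Wilcoxon signed-rank test on paired measurements. The sketch below shows that test on made-up paired maximum-diameter values (the numbers are simulated around the reported means and are not the study's data).

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)

# Hypothetical paired maximum aortic diameters [mm] for 16 volunteers:
# N-MRA values simulated to run slightly higher than CE-MRA, as in the study.
ce_mra = rng.normal(24.1, 3.2, size=16)
n_mra = ce_mra + rng.normal(1.6, 0.8, size=16)

stat, p = wilcoxon(n_mra, ce_mra)   # paired, non-parametric comparison
print(f"W = {stat:.1f}, p = {p:.4f}")
```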
Abstract:
INTRODUCTION This paper focuses exclusively on experimental models with ultra high dilutions (i.e. beyond 10⁻²³) that have been submitted to replication scrutiny. It updates previous surveys, considers suggestions made by the research community and compares the state of replication in 1994 with that in 2015. METHODS Following a literature search, biochemical, immunological, botanical, cell biological and zoological studies on ultra high dilutions (potencies) were included. Reports were grouped into initial studies, laboratory-internal, multicentre and external replications. Repetition could yield comparable, zero, or opposite results. The null hypothesis was that test and control groups would not be distinguishable (zero effect). RESULTS A total of 126 studies were found, of which 28 were initial studies. When all 98 replicative studies were considered, 70.4% (i.e. 69) reported a result comparable to that of the initial study, 20.4% (20) zero effect, and 9.2% (9) an opposite result. Both for the studies up to 1994 and for the studies from 1995-2015, the null hypothesis (dominance of zero results) had to be rejected. Furthermore, the odds of finding a comparable result are generally higher than those of finding an opposite result. Although this is true for all three types of replication studies, the fraction of comparable studies diminishes from laboratory-internal (82.9% overall) to multicentre (75% overall) to external (48.3% overall), while the fraction of opposite results was 4.9%, 10.7% and 13.8%, respectively. Furthermore, it became clear that the probability of an external replication producing comparable results is greater for models that had already been further scrutinized by the initial researchers. CONCLUSIONS We found 28 experimental models which underwent replication. In total, 24 models were replicated with comparable results, 12 models with zero effect, and 6 models with opposite results. Five models were externally reproduced with comparable results. We encourage further replications of studies in order to learn more about the model systems used.
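The percentages quoted in the Results are simple fractions of the 98 replicative studies; the short sketch below reproduces them from the reported counts.

```python
# Outcomes of the 98 replicative studies reported above
outcomes = {"comparable": 69, "zero effect": 20, "opposite": 9}
total = sum(outcomes.values())

for label, count in outcomes.items():
    print(f"{label}: {count}/{total} = {100 * count / total:.1f}%")
```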