765 results for Concordance


Relevance:

10.00%

Publisher:

Abstract:

PURPOSE - To compare posterior vitreous chamber shape in myopia to that in emmetropia. METHODS - Both eyes of 55 adult subjects were studied, 27 with emmetropia (MSE ≥ -0.55 D and < +0.75 D; mean +0.09 ± 0.36 D) and 28 with myopia (MSE -5.87 ± 2.31 D). Cycloplegic refraction was measured with a Shin Nippon autorefractor, and anterior chamber depth and axial length with a Zeiss IOLMaster. Posterior vitreous chamber shapes were determined from T2-weighted MRI (3-Tesla) using procedures previously reported by our laboratory. 3-D surface model coordinates were assigned to nasal, temporal, superior and inferior quadrants and plotted in 2-D to illustrate the composite shape of the respective quadrants posterior to the second nodal point. Spherical analogues of chamber shape were constructed to compare relative sphericity between refractive groups and quadrants. RESULTS - Differences in shape occurred in the region posterior to the points of maximum globe width and were thus in general accord with an equatorial model of myopic expansion. Shape in emmetropia is categorised distinctly as an oblate ellipse, and in myopia as an oblate ellipse of significantly lesser degree, such that it approximates a sphere. There was concordance between shape and retinotopic projection of the respective quadrants into right, left, superior and inferior visual fields. CONCLUSIONS - The transition in shape from oblate ellipse to sphere with axial elongation supports the hypothesis that myopia may be a consequence of equatorial restriction associated with biomechanical anomalies of the ciliary apparatus. The synchronisation of quadrant shapes with retinotopic projection suggests that binocular growth is coordinated by processes that operate beyond the optic chiasm.
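
As a toy illustration of the sphericity comparison (not the study's 3-D MRI fitting procedure), an ellipsoidal chamber model can be reduced to a single oblateness ratio of equatorial to axial semi-axes; the values below are invented for illustration only:

```python
import numpy as np

def oblateness(equatorial, axial):
    """Ratio of equatorial to axial semi-axis: > 1 is oblate, 1 is a sphere."""
    return equatorial / axial

# Hypothetical semi-axes (mm), not values from the study.
print(oblateness(equatorial=11.5, axial=10.5))  # emmetropic-like: clearly oblate
print(oblateness(equatorial=12.0, axial=11.9))  # myopic-like: near-spherical
```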

Relevance:

10.00%

Publisher:

Abstract:

Background: Oral anticoagulation (OAC) reduces stroke risk in patients with atrial fibrillation (AF); however, it is often underutilised and sometimes refused by patients. This programme of work comprised a meta-synthesis and two inter-linking studies aiming to explore patients' and physicians' experiences of AF and OAC. Methods: A meta-synthesis of qualitative evidence was conducted, which informed the empirical work. Semi-structured individual interviews were used. Study 1: Three AF patient sub-groups were interviewed: those who had accepted (n=4), refused (n=4), or discontinued (n=3) warfarin. Study 2: Four physician sub-groups (n=4 each) prescribing OAC to AF patients were interviewed: consultant cardiologists, consultant general physicians, general practitioners and cardiology registrars. Data were analysed using interpretative phenomenological analysis. Results: Study 1: Three over-arching themes comprised patients' experiences: (1) the initial consultation, (2) life after the consultation, and (3) patients' reflections. Patients commented on the relief and reassurance experienced during the consultation, but they perceived the decision-making process as mostly led by the physician. A lack of education and take-home materials distributed during the initial consultation was highlighted. Patients who had experienced stroke themselves, or were caregivers, were more receptive to education aimed at stroke risk reduction than at bleeding risk. Warfarin monitoring was challenging for patients; however, some perceived it as beneficial because it served to enhance the patient-physician relationship. Study 2: Two over-arching themes emerged from physicians' experiences: (1) communicating information and (2) challenges with OAC prescription for AF. Physicians' consultation style shifted through a continuum of compliance-adherence-concordance during the consultation. They aimed for concordance; however, challenges such as time and the perceived patient trust in them as the expert led physicians to adopt a paternalistic approach. Physicians also pointed out challenges associated with guideline adherence and the need to adopt a multi-disciplinary approach in which other health professionals could provide ongoing education. Conclusion: This programme of work has illustrated the benefit of taking an in-depth phenomenological approach to understanding the lived experience of the physician-patient consultation. Together with the meta-synthesis, this work has strengthened the evidence base and demonstrated the need to target patients' and physicians' ability to communicate with each other in a comprehensible way.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To investigate whether magnetoencephalography (MEG) can identify implantation sites for intracranial recordings (IR). Method: Two groups of 12 patients assessed for surgery with IR, with and without MEG, were compared (MEG and control groups). In the control group, non-invasive presurgical assessment without MEG suggested clear hypotheses for implantation. In the MEG group, non-invasive assessment was inconclusive, and MEG was used to identify implantation sites. Both groups were matched for implantation type. The success of implantation was defined by findings in IR: a) focal seizure onset; b) unilateral focal abnormal responses to single pulse electrical stimulation (SPES); and c) concordance between a) and b). Results: In all MEG patients, at least one virtual MEG electrode generated suitable hypotheses for the location of implantations. The proportion of patients showing focal seizure onset restricted to one hemisphere was similar in the control and MEG groups (6/12 vs. 11/12, Fisher's exact test, p = 0.0686). The proportion of patients showing unilateral responses to SPES was lower in the control group than in the MEG group (7/12 vs. 12/12, p = 0.0373). Conclusion: The MEG group showed a similar or higher incidence of successful implantations than controls.
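
The two group comparisons can be checked with Fisher's exact test; a minimal SciPy sketch (assuming the standard two-sided test was used) recovers the reported p-values from the counts in the abstract:

```python
from scipy.stats import fisher_exact

# Focal unilateral seizure onset: 6/12 controls vs. 11/12 MEG patients.
odds, p = fisher_exact([[6, 6], [11, 1]])
print(f"seizure onset: p = {p:.4f}")   # ~0.0686, matching the abstract

# Unilateral SPES responses: 7/12 controls vs. 12/12 MEG patients.
odds, p = fisher_exact([[7, 5], [12, 0]])
print(f"SPES responses: p = {p:.4f}")  # ~0.0373, matching the abstract
```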

Relevance:

10.00%

Publisher:

Abstract:

Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, a distinctive feature of survival data analysis. This paper proposed an approach to deal with the censoring problem in endovascular aortic repair survival data through Bayesian networks, merged and embedded with a hybrid feature selection process that combines Cox's univariate analysis with machine learning approaches, such as ensemble artificial neural networks, to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches, namely the least absolute shrinkage and selection operator (LASSO) and Akaike information criterion (AIC) methods. The results showed that it was capable of dealing with high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, collected separately from two centres in the United Kingdom. Furthermore, ensembles constructed with centre 1 data enhanced the concordance index of centre 2 predictions compared with a model built with a single network. Although the final reduced model using the neural networks and their ensembles is larger than those of the other methods, it outperformed them in both concordance index and sensitivity for centre 2 prediction, indicating that the reduced model is more powerful for cross-centre prediction.
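
For reference, the concordance index used to compare the models is Harrell's C: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed outcomes. A minimal sketch with invented toy data (not the study's EVAR datasets):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C. A pair (i, j) is comparable when the earlier observed
    time belongs to an event (not a censoring), so the true ordering is
    known despite censoring; it is concordant when the earlier failure
    also has the higher predicted risk."""
    n_conc, n_comp = 0.0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] and time[i] < time[j]:   # i failed within j's follow-up
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

# Toy follow-up times, event flags (1 = re-intervention, 0 = censored), scores.
t = np.array([5.0, 8.0, 12.0, 3.0])
e = np.array([1, 0, 1, 1])
r = np.array([0.9, 0.3, 0.2, 0.8])
print(concordance_index(t, e, r))  # 0.8 on this toy example
```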

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: To evaluate the implementation of the National Health Service (NHS) Health Check programme in one area of England from the perspective of general practitioners (GPs). DESIGN: A qualitative exploratory study was conducted with GPs and other healthcare professionals involved in delivering the NHS Health Check and with patients. This paper reports the experience of GPs and focuses on the management of the Health Check programme in primary care. SETTING: Primary care surgeries in the Heart of Birmingham region (now under the auspices of the Birmingham Cross City Clinical Commissioning Group) were invited to take part in the larger scale evaluation. This study focuses on a subset of those surgeries whose GPs were willing to participate. PARTICIPANTS: 9 GPs from different practices volunteered. GPs served an ethnically diverse region with areas of socioeconomic deprivation. Ethnicities of participant GPs included South Asian, South Asian British, white, black British and Chinese. METHODS: Individual semistructured interviews were conducted with GPs face to face or via telephone. Thematic analysis was used to analyse verbatim transcripts. RESULTS: Themes were generated which represent GPs' experiences of managing the NHS Health Check: primary care as a commercial enterprise; 'buy-in' to concordance in preventive healthcare; and following protocol and support provision. These themes represent the key issues raised by GPs and reveal variability in the implementation of NHS Health Checks. GPs also need support in allocating resources to the Health Check, including training on how to conduct checks in a concordant (or collaborative) way. CONCLUSIONS: The variability observed in this small-scale evaluation corroborates existing findings suggesting a need for more standardisation. Further large-scale research is needed to determine how that could be achieved. Work needs to be done to further develop a concordant approach to lifestyle advice which involves tailored individual goal setting rather than a paternalistic advice-giving model.

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies survival analysis techniques dealing with censoring to produce predictive tools that predict the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring indicates that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. Therefore, this thesis presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated with censoring. Most survival FS methods depend on Cox's model; however, machine learning classifiers (MLC) are preferred here. Few methods have adopted MLC to perform survival FS, and they cannot be used with high censoring. This thesis proposes two FS methods which use MLC to evaluate features and which use the new solution to deal with censoring. They combine factor analysis with a greedy stepwise FS search that allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second combines support vector machines, neural networks, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers. It presents a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance, showing that the proposed techniques are more powerful in correctly predicting the risk of re-intervention and thus enable doctors to set appropriate future observation plans for patients.
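
A minimal sketch of the simple and weighted majority-voting MCS idea, using scikit-learn stand-ins for the three member classifiers; the thesis's actual feature-selection wrapper, censoring handling, and network configurations are not reproduced here, and the data are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a preprocessed survival feature matrix.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

members = [
    ("svm", SVC(probability=True, random_state=0)),
    ("ann", MLPClassifier(max_iter=2000, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

simple = VotingClassifier(members, voting="hard")    # simple majority vote
weighted = VotingClassifier(members, voting="soft",  # probability-weighted vote
                            weights=[2, 2, 1])       # illustrative weights
for mcs in (simple, weighted):
    print(mcs.fit(X, y).score(X, y))
```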

Relevance:

10.00%

Publisher:

Abstract:

The main purpose of the authors is to describe the servitization process of Hungarian manufacturing companies based on data from the Competitiveness research programme. The relevance of this article is given by the fact that the international literature analyzes this phenomenon mainly in developed countries. The authors analyze to what extent the characteristics of the servitization process that are generally accepted in the literature also apply in a developing macroenvironment, i.e. Hungary. They approach servitization from three different perspectives: strategy, operations and financial payoffs. The results of their analysis show that, in concordance with the literature, the strategic role of services at Hungarian manufacturing companies is still lower than that of other manufacturing competitive priorities. However, their sample contains a number of manufacturing companies that place greater emphasis on offering services at both the strategic and the operational level. An important conclusion of their study is that, in the case of these companies, the financial benefits attributable to higher levels of servitization do not yet seem to materialize.

Relevance:

10.00%

Publisher:

Abstract:

This study examined how the themes of environmental sustainability are evident in the national, state and local standards that guide K-12 science curriculum. The study applied the principles of content analysis within the framework of an ecological paradigm. In education, an ecological paradigm focuses on students' use of a holistic lens to view and understand material. The intent of this study was to analyze the seventh grade science content standards at the national, state, and local textbook levels to determine how, and the extent to which, each of the five themes of environmental sustainability is presented in the language of each text. The themes are: (a) Climate Change Indicators, (b) Biodiversity, (c) Human Population Density, (d) Impact and Presence of Environmental Pollution, and (e) Earth as a Closed System. The research study offers practical insight into using content analysis to locate keywords of environmental sustainability in the three texts and determine whether the context of each term relates to this ecological paradigm. Using a concordance program, the researcher identified the frequency and context of each vocabulary item associated with these themes. Nine chi-square tests were run to determine whether there were differences in content between the national and state standards and the textbook. Within each level, chi-square tests were also run to determine whether there were differences between the appearance of content knowledge and skill words. Results indicate a lack of agreement between levels that is significant at p < .01. A discussion of these results in relation to curriculum development and standardized assessments follows. The study found that at the national and state levels there is a lack of articulation of the goals of environmental sustainability or an ecological paradigm. With respect to the science textbook, a greater number of keywords were present; however, the context of many of these keywords did not align with the discourse of an ecological paradigm. Further, the environmental sustainability themes present in the textbook were limited to the last four chapters of the text. Additional research is recommended to determine whether this situation also exists in other settings.
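
A concordance program of the kind described produces keyword-in-context (KWIC) lines: each occurrence of a keyword with its surrounding text, so the context can be judged against the ecological paradigm. A minimal sketch of that mechanism (the example sentence is invented, not from the analyzed standards):

```python
import re

def concordance(text, keyword, width=30):
    """Print KWIC lines: each match of `keyword` (plus suffixes) with
    `width` characters of left and right context."""
    for m in re.finditer(rf"\b{re.escape(keyword)}\w*", text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        print(f"{left:>{width}} [{m.group()}] {right}")

standard = ("Students analyze how biodiversity supports ecosystem stability "
            "and how pollution alters biodiversity over time.")
concordance(standard, "biodiversity")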

Relevance:

10.00%

Publisher:

Abstract:

The soil heat flux and soil thermal diffusivity are important components of the surface energy balance, especially in arid and semi-arid regions. The objective of this work was to estimate the soil heat flux from the soil temperature measured at a single depth, based on the half-order time derivative method proposed by Wang and Bras (1999), and to establish a method, based on the half-order derivative, capable of estimating the thermal diffusivity of the soil from temporal series of soil temperature at two depths. The estimated soil heat fluxes were compared with values measured with flux plates, and the estimated thermal diffusivity was compared with measurements carried out in situ. The results showed excellent concordance between estimated and measured soil heat flux, with correlation (r), coefficient of determination (R²) and standard error of r = 0.99093, R² = 0.98194 and error = 2.56 W/m² for a 10-day estimation period; r = 0.99069, R² = 0.98147 and error = 2.59 W/m² for a 30-day period; and r = 0.98974, R² = 0.97958 and error = 2.77 W/m² for a 120-day period. The thermal diffusivity values estimated by the proposed method proved coherent and consistent with the values measured in situ and with values found in the literature using conventional methods.
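
A rough sketch of the half-order derivative idea: the flux at time t is proportional to an accumulation of past temperature increments weighted by the inverse square root of elapsed time. This uses a simple midpoint quadrature, not Wang and Bras's exact formulation, and the effusivity sqrt(k*rho*c) and toy temperatures are assumed values:

```python
import numpy as np

def soil_heat_flux(T, dt, effusivity):
    """G(t_j) ~ effusivity / sqrt(pi) * sum of temperature increments
    divided by sqrt(elapsed time): a midpoint quadrature of the
    half-order derivative d^(1/2)T / dt^(1/2)."""
    G = np.full(len(T), np.nan)
    for j in range(1, len(T)):
        dT = np.diff(T[: j + 1])                 # increments up to t_j
        lag = (j - np.arange(j) - 0.5) * dt      # time since each increment (s)
        G[j] = effusivity / np.sqrt(np.pi) * np.sum(dT / np.sqrt(lag))
    return G

# Toy half-day of 30-min soil temperatures (deg C); effusivity sqrt(k*rho*c)
# in J m^-2 K^-1 s^-1/2 is an assumed, order-of-magnitude soil value.
T = 25 + 5 * np.sin(np.linspace(0, np.pi, 24))
print(soil_heat_flux(T, dt=1800.0, effusivity=1500.0)[-3:])
```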

Relevance:

10.00%

Publisher:

Abstract:

In education, there is a recognized difficulty in training teachers to meet the needs of secondary and higher education; one reason is that the experiences educators have during their training differ from those they encounter in the classroom. Criticism therefore often arises about the relevance and efficiency of degree courses in fulfilling their natural mission, which weakens teacher training. Improving the quality of education thus depends strongly on teachers' own initiatives in creating teaching alternatives to strengthen their performance in school. From this reflection, it follows that teacher training needs new educational proposals that qualify future teachers so that they can promote their students' learning more adequately. Among such proposals for initial teacher training is scientific theatre (TC). Considering this possibility, this work investigated and discussed the influence of TC combined with experimentation on the initial training of future chemistry teachers participating in the Fanatics of Chemistry Theatre (UERN) and Chemistry on Stage (UFRN) groups. In a first stage, theatrical rehearsals based on the Theatre of the Oppressed were held and dramaturgical scripts were written as a collaborative proposal. To incorporate chemistry experiments into the rehearsals, a systematic literature search was carried out and, after content analysis, categories were selected involving easily accessible materials and reagents, simple procedures with low risk of accidents, and chemical waste that was easy to manage. The second part identified: a) the beliefs of student teachers about the use of TC combined with experimentation in the initial training of chemistry teachers; b) the influence of TC combined with experimentation on the learning of chemical concepts by the high school students who attended the shows; and c) the reasons why chemistry teachers who participated in the TC groups and currently work in the classroom do or do not use TC combined with experimentation. Questionnaires and interviews were used, composed, respectively, of a Likert scale and open questions. Quantitative data were analysed with classical statistics, using the mean, the concordance argument and the mean deviation as measures of centrality. Qualitative data were discussed according to content analysis, with categories that emerged from reading the answers. These analyses led to the conclusion that the student teachers have a positive view of the use of scientific theatre for communicating chemistry, for the learning of chemical concepts and of pedagogical and disciplinary knowledge, and as a strategy for promoting research and extension at the university. They credit improvements in their initial training to the use of scientific theatre combined with experimentation. TC provides motivation for the construction of conceptual thinking through an informal form of chemical communication, allowing students to expand their knowledge, favouring not only the phenomenological approach but also the construction of chemical knowledge and the internalization of scientific concepts.

Relevance:

10.00%

Publisher:

Abstract:

Climate variability and change have generated great concern worldwide; major issues such as global warming may be affecting the availability of water resources in irrigated perimeters. In the semiarid region of Northeastern Brazil, drought is known to predominate, but little is known about trends in climate series of the combined water loss by evaporation and transpiration (evapotranspiration). Therefore, the main objective of this study was to analyse whether there is evidence of increase and/or decrease in the regime of reference evapotranspiration (ETo), at monthly, annual and interdecadal scales, in the irrigated-pole towns of Juazeiro, BA (9°24'S, 40°26'W, 375.5 m) and Petrolina, PE (9°09'S, 40°22'W, 376 m). Daily meteorological data for the period from 01/01/1976 to 31/12/2014 were provided by EMBRAPA Semiárido, and daily ETo was estimated using the standard Penman-Monteith method (EToPM) as parameterized by Smith (1991). Other, more simplified estimation methods were calculated and compared with EToPM: Solar Radiation (EToRS), Linacre (EToL), Hargreaves and Samani (EToHS) and the Class A pan method (EToTCA). The main statistical analyses were non-parametric tests of homogeneity (run test), trend (Mann-Kendall), trend magnitude (Sen) and onset of trend (Mann-Whitney), with statistical significance set at 5 and/or 1%. Analysis of variance (ANOVA) was used to detect significant differences between interdecadal means. For comparisons between ETo methods, the correlation test (r), Student's t test and Tukey's test at the 5% significance level were used. Finally, the statistics of Willmott et al. (1985) were used to evaluate the concordance index and performance of the simplified methods against the standard method. The main results were a decrease in the EToPM time series in the irrigated areas of Juazeiro, BA and Petrolina, PE, significant at 1 and 5% respectively, with annual magnitudes of -14.5 mm (Juazeiro) and -7.7 mm (Petrolina) and trend onset in 1996. The method agreeing best with EToPM was EToRS, with very good performance at both locations, followed by EToL with good (Juazeiro) and median (Petrolina) performance; EToHS performed worst (bad) at both locations. It is suggested that this decrease in EToPM may be associated with the expansion of irrigated agricultural areas and the construction of Sobradinho lake upstream of the perimeters.
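
The Willmott agreement statistic used to rank the simplified methods is d = 1 - Σ(P_i - O_i)² / Σ(|P_i - Ō| + |O_i - Ō|)², where P are the simplified-method values, O the standard-method values and Ō their mean. A minimal sketch with invented daily ETo values:

```python
import numpy as np

def willmott_d(predicted, observed):
    """Willmott's index of agreement: 1 means the simplified method
    reproduces the standard exactly; values fall toward 0 as it degrades."""
    p, o = np.asarray(predicted), np.asarray(observed)
    num = np.sum((p - o) ** 2)
    den = np.sum((np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return 1.0 - num / den

# Toy daily ETo (mm/day): a simplified method vs. the Penman-Monteith standard.
eto_pm = np.array([4.8, 5.1, 5.5, 6.0, 5.7])
eto_rs = np.array([4.9, 5.0, 5.6, 5.9, 5.8])
print(willmott_d(eto_rs, eto_pm))
```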

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: To estimate the prevalence and extent of root caries in the adult and elderly population of Brazil. METHODS: Using data from the National Oral Health Survey (SBBrasil 2010), 9,564 adults and 7,509 elderly people were examined in households in the 26 state capitals and the Federal District and in 150 municipalities in the interior of each macro-region. Diagnostic criteria established by the World Health Organization were applied. Prevalence and extent were studied using the root caries index and the index of decayed and filled roots. RESULTS: The prevalence of root caries was 16.7% in adults and 13.6% in the elderly; the index of decayed and filled roots was 0.42 and 0.32 respectively, mostly composed of untreated caries. Differences in root caries experience were observed between capitals and macro-regions, with higher values in capitals of the North and Northeast. The root caries index in adults ranged from 1.4% in Aracaju (SE) to 15.1% in Salvador (BA), and in the elderly from 3.5% in Porto Velho (RO) to 29.9% in Palmas (TO). Root caries increased with age and was more pronounced in men in both age groups. CONCLUSIONS: Wide variation in the prevalence and extent of root caries was identified between and within the regions of Brazil, in both adults and the elderly, and most root caries was untreated. Incorporating this condition into the oral health surveillance system is recommended, given its increasing trend.

Relevance:

10.00%

Publisher:

Abstract:

A 0.25-m² US Naval Electronics Laboratory box corer was used to take replicate samples from an oligotrophic bottom under the North Pacific Central Water Mass (approx. 28°N, 155°W). The bottom is a red clay with manganese nodules at a depth of 5500-5800 m. Macrofaunal density ranges from 84 to 160 individuals per m² and is therefore much the same as in Northwest Atlantic Gyre waters. Of the macrofaunal taxa, polychaetes dominate (55 per cent), followed by tanaids (18 per cent), bivalves (7 per cent), and isopods (6 per cent). Meiofaunal taxa were only partially retained by the 297-µm screen used in washing. Even then, they are 1.5-3.9 times as abundant as the macrofaunal taxa, with nematodes being numerically dominant by far. Foraminifera seem to comprise an important portion of the community, but could not be assessed accurately because of the inability to discriminate living and dead tests. Remains of what are probably xenophyophoridans are also very important, but present the same problem. Faunal diversity is extremely high, with deposit feeders comprising the overwhelming majority. Most species are rare, being encountered only once. The distributions of only three species show any significant deviation from randomness. The polychaete fauna from box cores collected 90 m to the north was not significantly different from that of the principal study locality. Concordance appeared at several taxonomic levels, from species through macrofaunal/meiofaunal relationships. As a result, the variation in total animal abundance shows aggregation among cores. The authors discuss Sokolova's concept of a deep-sea oligotrophic zone dominated by suspension feeders and reconcile it with the present findings. The high diversity of the fauna combined with the low food level contradicts theories that relate diversity directly to productivity.

Relevance:

10.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
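
As background for these decompositions, a latent class (PARAFAC) model expresses the probability mass function of multivariate categorical data as a mixture of product-multinomials, that is, a nonnegative rank-k tensor factorization. A minimal sketch with random parameters and three variables (the collapsed Tucker class itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 3, (2, 3, 4)  # latent classes; number of categories per variable

lam = rng.dirichlet(np.ones(k))                          # class weights
psi = [rng.dirichlet(np.ones(dj), size=k) for dj in d]   # psi[j][h] = P(y_j | class h)

# Latent-class / PARAFAC pmf: P(y1, y2, y3) = sum_h lam_h * prod_j psi_h^(j)(y_j)
P = np.einsum("h,ha,hb,hc->abc", lam, psi[0], psi[1], psi[2])
print(P.shape, P.sum())  # (2, 3, 4); entries sum to 1, a valid joint pmf
```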

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
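
The chapter's optimal Gaussian approximation under Diaconis-Ylvisaker priors is not reproduced here, but the general pattern of approximating a posterior by a Gaussian and measuring the Kullback-Leibler divergence to the exact posterior can be sketched with a simple Laplace (mode-plus-curvature) approximation to a toy Beta posterior:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta, norm

# Toy exact "posterior": theta ~ Beta(15, 40).
post = beta(15, 40)
nlp = lambda t: -post.logpdf(t)  # negative log posterior density

# Laplace approximation: Gaussian centred at the mode, variance from the
# inverse curvature (numerical second difference of the negative log density).
mode = minimize_scalar(nlp, bounds=(1e-6, 1 - 1e-6), method="bounded").x
h = 1e-5
curv = (nlp(mode + h) - 2 * nlp(mode) + nlp(mode - h)) / h**2
approx = norm(mode, 1 / np.sqrt(curv))

# Numerical KL(posterior || approximation) on a grid covering the posterior.
t = np.linspace(post.ppf(1e-9), post.ppf(1 - 1e-9), 20_000)
p, q = post.pdf(t), approx.pdf(t)
print("KL divergence:", np.sum(p * np.log(p / q)) * (t[1] - t[0]))
```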

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
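
A minimal Albert-Chib-style truncated-normal data augmentation sampler for an intercept-only probit model illustrates the rare-event setting; the sample size, success count, and iteration budget below are invented, and this sketch is not the chapter's analysis:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n, n_ones = 2_000, 5                 # large n, very few observed successes
y = np.zeros(n); y[:n_ones] = 1

beta, draws = 0.0, []
for _ in range(1_000):
    # Impute latent z_i ~ N(beta, 1), truncated positive where y_i = 1
    # and negative where y_i = 0 (Albert-Chib augmentation).
    a = np.where(y == 1, -beta, -np.inf)   # standardized lower bounds
    b = np.where(y == 1, np.inf, -beta)    # standardized upper bounds
    z = beta + truncnorm.rvs(a, b, random_state=rng)
    # Flat prior => beta | z ~ N(mean(z), 1/n) for the intercept-only model.
    beta = rng.normal(z.mean(), 1 / np.sqrt(n))
    draws.append(beta)

d = np.array(draws[200:])                  # discard burn-in
print("lag-1 autocorrelation:",
      np.corrcoef(d[:-1], d[1:])[0, 1])    # expected near 1: slow mixing
```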

Relevance:

10.00%

Publisher:

Abstract:

Minimally invasive microsurgery has resulted in improved outcomes for patients. However, operating through a microscope limits depth perception and fixes the visual perspective, which results in a steep learning curve to achieve microsurgical proficiency. We introduce a surgical imaging system employing four-dimensional (live volumetric imaging through time) microscope-integrated optical coherence tomography (4D MIOCT) capable of imaging at up to 10 volumes per second to visualize human microsurgery. A custom stereoscopic heads-up display provides real-time interactive volumetric feedback to the surgeon. We report that 4D MIOCT enhanced suturing accuracy and control of instrument positioning in mock surgical trials involving 17 ophthalmic surgeons. Additionally, 4D MIOCT imaging was performed in 48 human eye surgeries and was demonstrated to successfully visualize the pathology of interest in concordance with the preoperative diagnosis in 93% of retinal surgeries and the surgical site of interest in 100% of anterior segment surgeries. In vivo 4D MIOCT imaging revealed sub-surface pathologic structures and instrument-induced lesions that were invisible through the operating microscope during standard surgical maneuvers. In select cases, 4D MIOCT guidance was necessary to resolve such lesions and prevent post-operative complications. Our novel surgical visualization platform achieves surgeon-interactive 4D visualization of live surgery, which could expand the surgeon's capabilities.