874 results for Filmic approach methods


Relevance:

30.00%

Publisher:

Abstract:

Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, so the outcome was a sparse binary variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted by maximum likelihood in SAS®. Results: There were many convergence problems. These were resolved using a variety of approaches, including treating all effects as fixed for the initial model building, modelling the variance of a parameter on a logarithmic scale, and centring continuous covariates. The initial model building indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power, and the variance estimates were small with large standard errors, indicating that their precision may be questionable.
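
The abstract's model (a logistic regression with a random surgeon effect, fitted in SAS) is not reproduced here, but the initial model-building tactic it describes, treating all effects as fixed and centring continuous covariates, can be sketched in Python with simulated data. All column names, per-surgeon sample sizes and effect sizes below are illustrative assumptions.

```python
# Minimal sketch (not the trial's SAS code): the initial "all effects fixed"
# logistic model for a sparse binary outcome, fitted to simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_surgeons, patients_per_surgeon = 43, 32
surgeon = np.repeat(np.arange(n_surgeons), patients_per_surgeon)
op_type = rng.integers(0, 2, surgeon.size)           # 0 = conventional, 1 = laparoscopic
age = rng.normal(45, 8, surgeon.size)
age_c = age - age.mean()                              # centring a continuous covariate
surgeon_effect = rng.normal(0, 0.3, n_surgeons)       # latent between-surgeon variation
logit = -2.2 + 0.4 * op_type + 0.02 * age_c + surgeon_effect[surgeon]
complication = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame(dict(complication=complication, op_type=op_type,
                       age_c=age_c, surgeon=surgeon))

# Initial model building: treat surgeon as a fixed effect.
fixed = smf.glm("complication ~ op_type + age_c + C(surgeon)",
                data=df, family=sm.families.Binomial()).fit()
print(fixed.summary().tables[0])
```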

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in post-translational modification of proteins in cataract), and to the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of even spacing or multiplicative increments. At least 60% of the points must be below the K_M (or below each dissociation constant if there is more than one) and 40% above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
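
A minimal sketch of the design intuition described above, not the paper's Bayesian utility machinery: for the Michaelis-Menten model v = Vmax·S/(K_M + S), the asymptotic parameter variances implied by a design can be read off the Fisher information matrix, and a design placing roughly 60% of the substrate points below K_M can be compared with an even spread. The values of Vmax, K_M, the error level and the concentration ranges are assumptions for illustration only.

```python
# Sketch: compare expected parameter variance for two substrate designs of the
# Michaelis-Menten model v = Vmax*S/(Km + S), via the Fisher information matrix.
import numpy as np

Vmax, Km, sigma = 1.0, 2.0, 0.02          # illustrative "true" values and noise level
S_max, n_points = 10.0, 10

def param_variances(S):
    """Asymptotic variances of (Vmax, Km) estimates for design points S."""
    dVmax = S / (Km + S)                   # dv/dVmax
    dKm = -Vmax * S / (Km + S) ** 2        # dv/dKm
    X = np.column_stack([dVmax, dKm])
    cov = sigma ** 2 * np.linalg.inv(X.T @ X)
    return np.diag(cov)

even = np.linspace(0.5, S_max, n_points)                      # even spread
skewed = np.concatenate([np.linspace(0.2, Km, 6),             # ~60% below Km
                         np.linspace(Km + 1.0, S_max, 4)])    # ~40% above

for name, design in [("even spread", even), ("60/40 around Km", skewed)]:
    var_V, var_K = param_variances(design)
    print(f"{name:18s}  var(Vmax)={var_V:.2e}  var(Km)={var_K:.2e}")
```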

Relevance:

30.00%

Publisher:

Abstract:

In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of the information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have applied a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, a set of design rules, and the key conclusion that such designs should be based on some prior knowledge of K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. © 2003 Elsevier Science B.V. All rights reserved.
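
A sketch of how prior knowledge of K_M can enter the design choice, in the spirit of a Bayesian utility although not the paper's actual functions: a D-optimality-style criterion (log-determinant of the Fisher information) is averaged over draws of K_M from a prior and used to rank candidate designs. The prior, the candidate designs and the error level are illustrative assumptions.

```python
# Sketch: a Bayesian utility for Michaelis-Menten designs, averaging a
# D-optimality criterion over a prior on Km.
import numpy as np

rng = np.random.default_rng(1)
Vmax, sigma = 1.0, 0.02
Km_prior = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=500)  # assumed prior belief about Km

def log_det_information(S, Km):
    dVmax = S / (Km + S)
    dKm = -Vmax * S / (Km + S) ** 2
    X = np.column_stack([dVmax, dKm]) / sigma
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet

def bayesian_utility(S):
    """Expected log-determinant of the information matrix under the Km prior."""
    return np.mean([log_det_information(S, km) for km in Km_prior])

designs = {
    "even spread":     np.linspace(0.5, 10.0, 10),
    "60/40 around Km": np.concatenate([np.linspace(0.2, 2.0, 6),
                                       np.linspace(3.0, 10.0, 4)]),
}
utilities = {name: bayesian_utility(S) for name, S in designs.items()}
for name, u in utilities.items():
    print(f"{name:18s}  utility = {u:.3f}")
print("preferred design:", max(utilities, key=utilities.get))
```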

Relevance:

30.00%

Publisher:

Abstract:

The main objectives of this paper are: firstly, to identify key issues related to sustainable intelligent buildings (environmental, social, economic and technological factors) and to develop a conceptual model for the selection of appropriate KPIs; secondly, to test critically stakeholders' perceptions and values of selected KPIs for intelligent buildings; and thirdly, to develop a new model for measuring the level of sustainability of sustainable intelligent buildings. This paper uses a consensus-based model (Sustainable Built Environment Tool, SuBETool), which is analysed using the analytic hierarchy process (AHP) for multi-criteria decision-making. The use of a multi-attribute model for priority setting in the sustainability assessment of intelligent buildings is introduced. The paper commences by reviewing the literature on sustainable intelligent buildings research and presents a pilot study investigating the problems of complexity and subjectivity. This study is based upon a survey of the perceptions held by selected stakeholders and the value they attribute to selected KPIs. It is argued that the benefit of the proposed model (SuBETool) is as a tool for 'comparative' rather than absolute measurement. It has the potential to provide useful lessons from current sustainability assessment methods for the strategic future of sustainable intelligent buildings, in order to improve a building's performance and to deliver objective outcomes. The findings of this survey enrich the field of intelligent buildings in two ways. Firstly, they give a detailed insight into the selection of sustainable building indicators, as well as their degree of importance. Secondly, they test critically stakeholders' perceptions and values of selected KPIs for intelligent buildings. It is concluded that the priority levels for the selected criteria are largely dependent on the integrated design team, which includes the client, architects, engineers and facilities managers.
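
The AHP step the paper relies on can be illustrated with a minimal sketch: priority weights for a handful of criteria are derived from a pairwise comparison matrix via its principal eigenvector, and a consistency ratio is checked. The criteria names and judgement values below are illustrative assumptions, not the SuBETool weightings.

```python
# Sketch of the AHP step: priority weights from a pairwise comparison matrix
# (principal eigenvector) plus a consistency check. Judgements are illustrative.
import numpy as np

criteria = ["environmental", "social", "economic", "technological"]
# A[i, j] = how much more important criterion i is than criterion j (Saaty scale).
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
ri = 0.90                                     # Saaty random index for n = 4
cr = ci / ri                                  # consistency ratio (< 0.10 is acceptable)

for name, w in zip(criteria, weights):
    print(f"{name:14s} {w:.3f}")
print(f"consistency ratio = {cr:.3f}")
```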

Relevance:

30.00%

Publisher:

Abstract:

The assessment of cellular effects by the aqueous phase of human feces (fecal water, FW) is a useful biomarker approach to study cancer risks and protective activities of food. In order to refine and develop the biomarker, different protocols of preparing FW were compared. Fecal waters were prepared by 3 methods: (A) direct centrifugation; (B) extraction of feces in PBS before centrifugation; and (C) centrifugation of lyophilized and reconstituted feces. Genotoxicity was determined in colon cells using the Comet assay. Selected samples were investigated for additional parameters related to carcinogenesis. Two of 7 FWs obtained by methods A and B were similarly genotoxic. Method B, however, yielded higher volumes of FW, allowing sterile filtration for long-term culture experiments. Four of 7 samples were non-genotoxic when prepared according to all 3 methods. FW from lyophilized feces and from fresh samples were equally genotoxic. FWs modulated cytotoxicity, paracellular permeability, and invasion, independent of their genotoxicity. All 3 methods of FW preparation can be used to assess genotoxicity. The higher volumes of FW obtained by preparation method B greatly enhance the prospects of measuring different types of biological parameters and using these to disclose activities related to cancer development.

Relevance:

30.00%

Publisher:

Abstract:

Differential thermal expansion over the range 90-210 K has been applied successfully to determine the crystal structure of chlorothiazide from synchrotron powder diffraction data using direct methods. Key to the success of the approach is the use of a multi-data-set Pawley refinement to extract a set of reflection intensities that is more 'single-crystal-like' than those extracted from a single data set. The improvement in reflection intensity estimates is quantified by comparison with reference single-crystal intensities. © 2008 International Union of Crystallography. Printed in Singapore. All rights reserved.
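
The abstract does not state the exact comparison metric, but one common way to quantify agreement between extracted and reference reflection intensities is a scaled R-factor. The sketch below, with synthetic intensities and noise levels chosen purely for illustration, shows that form of comparison rather than the paper's own calculation.

```python
# Sketch: comparing Pawley-extracted reflection intensities with single-crystal
# reference intensities via a scaled R-factor. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
I_ref = rng.gamma(shape=2.0, scale=100.0, size=200)        # reference intensities
I_single = I_ref * (1 + rng.normal(0, 0.05, I_ref.size))   # single-data-set extraction (assumed noisier)
I_multi = I_ref * (1 + rng.normal(0, 0.02, I_ref.size))    # multi-data-set extraction

def r_factor(I_obs, I_ref):
    """R = sum|I_obs - s*I_ref| / sum(s*I_ref), with least-squares scale s."""
    s = np.dot(I_obs, I_ref) / np.dot(I_ref, I_ref)
    return np.sum(np.abs(I_obs - s * I_ref)) / np.sum(s * I_ref)

print(f"single data set : R = {r_factor(I_single, I_ref):.3f}")
print(f"multi data set  : R = {r_factor(I_multi, I_ref):.3f}")
```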

Relevance:

30.00%

Publisher:

Abstract:

The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is the tracking (or picking) of seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tool available in modern interpretation software packages often employs artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably data patterns characterise these horizons. While seismic attributes are commonly used to characterise amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited, in conjunction with other geological information, in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
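
A toy sketch of the two characterisation choices discussed above: a small feed-forward network is trained to classify candidate picks either from a raw amplitude window or from a few simple attributes. The data are synthetic, scikit-learn's MLPClassifier stands in for the interpretation package's ANN, and the wavelet model and attribute definitions are assumptions for illustration.

```python
# Sketch: classifying candidate picks as "target horizon" or not, using either
# a raw amplitude window or simple attributes as the input pattern.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, win = 2000, 11
t = np.linspace(-1, 1, win)
labels = rng.integers(0, 2, n)
# Target-horizon picks follow a Ricker-like wavelet; non-target picks are background noise.
wavelet = (1 - 2 * (np.pi * t) ** 2) * np.exp(-(np.pi * t) ** 2)
raw = np.array([wavelet * (0.8 + 0.4 * rng.random()) + rng.normal(0, 0.2, win) if y
                else rng.normal(0, 0.5, win)
                for y in labels])
# Simple attributes: peak amplitude, energy, mean absolute amplitude.
attrs = np.column_stack([raw.max(axis=1), (raw ** 2).sum(axis=1), np.abs(raw).mean(axis=1)])

for name, X in [("raw samples", raw), ("attributes", attrs)]:
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
    print(f"{name:12s} accuracy = {net.score(Xte, yte):.2f}")
```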

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model. The illumination model used determines the kernel of the equation under consideration. Monte Carlo methods are now widely used to solve the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for parallel solution of the rendering equation. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles. This domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to within a desired truncation error (systematic error), which fixes the number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we solve multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. High-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
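
A minimal sketch of the stratification idea on a hemispherical test integral: for constant incoming radiance the integral of cos θ over the hemisphere equals π, and plain Monte Carlo can be compared with stratified sampling over a partition of the parameter domain. The paper's partition into orthogonal spherical triangles is not reproduced here; a simple grid on the unit square, mapped to the hemisphere, stands in for it.

```python
# Sketch: plain vs stratified Monte Carlo for a hemispherical integral.
# Test integrand: L(w) * cos(theta) with L = 1, whose exact value is pi.
import numpy as np

rng = np.random.default_rng(4)

def integrand(u, v):
    # Map (u, v) in [0,1]^2 to hemisphere directions: phi = 2*pi*u, cos(theta) = v.
    cos_theta = v
    return cos_theta * 2 * np.pi           # f(w) times the Jacobian of the mapping

def plain_mc(n):
    u, v = rng.random(n), rng.random(n)
    return integrand(u, v).mean()

def stratified_mc(k):
    # k*k equal strata over the unit square, one sample per stratum.
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    u = (i.ravel() + rng.random(k * k)) / k
    v = (j.ravel() + rng.random(k * k)) / k
    return integrand(u, v).mean()

n = 64 * 64
plain = [plain_mc(n) for _ in range(200)]
strat = [stratified_mc(64) for _ in range(200)]
print(f"exact = {np.pi:.4f}")
print(f"plain      mean = {np.mean(plain):.4f}  std = {np.std(plain):.2e}")
print(f"stratified mean = {np.mean(strat):.4f}  std = {np.std(strat):.2e}")
```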

Relevance:

30.00%

Publisher:

Abstract:

This paper is devoted to advanced Monte Carlo methods for realistic image creation. It offers a new stratified approach for solving the rendering equation. We consider the numerical solution of the rendering equation by separation of the integration domain. The hemispherical integration domain is symmetrically separated into 16 parts. The first 9 sub-domains are orthogonal spherical triangles of equal size; they are symmetric to each other and grouped with a common vertex around the normal vector to the surface. The hemispherical integration domain is completed with a further 8 sub-domains, spherical quadrangles of equal size that are also symmetric to each other. All sub-domains have fixed vertices and computable parameters. The bijections of the unit square onto an orthogonal spherical triangle and onto a spherical quadrangle are derived and used to generate sampling points. The symmetric sampling scheme is then applied to generate sampling points distributed over the hemispherical integration domain. The necessary transformations are made and the stratified Monte Carlo estimator is presented. The rate of convergence is obtained, and one can see that the algorithm is of super-convergent type.

Relevance:

30.00%

Publisher:

Abstract:

This paper is devoted to advanced parallel quasi-Monte Carlo (QMC) methods for realistic image synthesis. We propose and consider a new QMC approach for solving the rendering equation with uniform separation. First, we apply the symmetry property for uniform separation of the hemispherical integration domain into 24 equal sub-domains, solid angles subtended by orthogonal spherical triangles with fixed vertices and computable parameters. Uniform separation allows a parallel sampling scheme to be applied for numerical integration. Finally, we apply the stratified QMC integration method for solving the rendering equation. The superiority of our QMC approach is proved.
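
A sketch of the quasi-Monte Carlo ingredient only: low-discrepancy (Halton) points replace pseudo-random samples for the same hemispherical test integral used above (exact value π). The paper's 24-way separation into orthogonal spherical triangles and its parallel scheme are not reproduced.

```python
# Sketch: quasi-Monte Carlo (Halton points) for the hemispherical test integral.
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def qmc_estimate(n):
    u, v = halton(n, 2), halton(n, 3)          # 2-D Halton sequence
    cos_theta = v                               # phi = 2*pi*u, cos(theta) = v
    return np.mean(cos_theta * 2 * np.pi)

for n in (64, 256, 1024, 4096):
    est = qmc_estimate(n)
    print(f"n = {n:5d}  estimate = {est:.5f}  error = {abs(est - np.pi):.2e}")
```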

Relevance:

30.00%

Publisher:

Abstract:

Movement disorders (MD) comprise a group of neurological disorders that involve the neuromotor system. MD can result in abnormalities ranging from an inability to move to severe, constant and excessive movements. Stroke is a leading cause of disability, largely affecting older people worldwide. Traditional treatments rely on physiotherapy that is partially based on theory and heavily reliant on the therapist's training and past experience. The lack of evidence that one treatment is more effective than any other makes the rehabilitation of stroke patients a difficult task. Upper limb (UL) motor re-learning and recovery levels tend to improve with intensive physiotherapy delivery. The need for conclusive evidence supporting one method over another, and the need to stimulate the stroke patient, clearly suggest that traditional methods lack high motivational content, as well as objective, standardised analytical methods for evaluating a patient's performance and assessing therapy effectiveness. Despite all the advances in machine-mediated therapies, there is still a need to improve therapy tools. This chapter describes a new approach to robot-assisted neuro-rehabilitation of the upper limb. Gentle/S introduces a new approach to integrating appropriate haptic technologies with high-quality virtual environments, so as to deliver challenging and meaningful therapies to people with upper limb impairment as a consequence of stroke. The described approach can enhance traditional therapy tools, provide therapy "on demand" and present accurate, objective measurements of a patient's progression. Our recent studies suggest that the use of tele-presence and VR-based systems can potentially motivate patients to exercise for longer periods of time. Two identical prototypes have undergone extended clinical trials in the UK and Ireland with a cohort of 30 stroke subjects. From the lessons learnt with the Gentle/S approach, it is also clear that high-quality therapy devices of this nature have a role in the future delivery of stroke rehabilitation, and that machine-mediated therapies should be available to the patient and his/her clinical team from initial hospital admission through to long-term placement in the patient's home following hospital discharge.

Relevance:

30.00%

Publisher:

Abstract:

Transient neural assemblies mediated by synchrony in particular frequency ranges are thought to underlie cognition. We propose a new approach to their detection, using empirical mode decomposition (EMD), a data-driven approach that removes the need for arbitrary bandpass filter cut-offs. Phase locking is sought between modes. We explore the features of EMD, including making a quantitative assessment of its ability to preserve the phase content of signals, and proceed to develop a statistical framework with which to assess synchrony episodes. Furthermore, we propose a new approach to ensure signal decomposition using EMD. We adapt the Hilbert spectrum to a time-frequency representation of phase locking and are able to locate synchrony successfully in time and frequency between synthetic signals reminiscent of EEG. We compare our approach, which we call EMD phase locking analysis (EMDPL), with existing methods and show it to offer improved time-frequency localisation of synchrony.
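
The downstream synchrony measure can be illustrated independently of EMD itself: a phase-locking value between two instantaneous phase series. In the paper the phases come from matched EMD modes; in the sketch below, purely for brevity, the analytic phase of each synthetic signal is taken directly with scipy's Hilbert transform, and the signal frequencies and noise levels are assumptions.

```python
# Sketch: a phase-locking value (PLV) between two signals' instantaneous phases.
import numpy as np
from scipy.signal import hilbert

fs, dur = 256, 4.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(5)

# Two synthetic EEG-like signals sharing a 10 Hz component with a fixed phase lag.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.6) + 0.5 * rng.standard_normal(t.size)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"phase-locking value = {plv:.3f}")   # near 1 for locked phases, near 0 otherwise
```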

Relevance:

30.00%

Publisher:

Abstract:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods, such as Gaussian distributions and Euclidean distances, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor that is devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. © 2007 Elsevier B.V. All rights reserved.
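
A minimal sketch of the 'collateral context' representation described above: a co-occurrence matrix of visual keywords built from per-image keyword annotations. The vocabulary, the sample annotations and the counting convention are illustrative assumptions rather than the paper's exact definition.

```python
# Sketch: building a co-occurrence matrix of visual keywords ("collateral
# context") from per-image keyword annotations.
import numpy as np

vocabulary = ["sky", "water", "boat", "grass", "building"]
index = {w: i for i, w in enumerate(vocabulary)}

image_keywords = [
    ["sky", "water", "boat"],
    ["sky", "building"],
    ["water", "boat"],
    ["grass", "sky"],
]

cooc = np.zeros((len(vocabulary), len(vocabulary)), dtype=int)
for keywords in image_keywords:
    ids = [index[w] for w in keywords]
    for i in ids:
        for j in ids:
            if i != j:
                cooc[i, j] += 1          # count joint appearances within one image

print(vocabulary)
print(cooc)
```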

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to review the rationale for 'transdiagnostic' approaches to the understanding and treatment of anxiety disorders. Methods: Database searches and examination of the reference lists of relevant studies were used to identify papers of relevance. Results: There is increasing recognition that diagnosis-specific interventions for single anxiety disorders are of less value than might appear, since a large proportion of patients have more than one co-existing anxiety disorder and the treatment of one anxiety disorder does not necessarily lead to the resolution of others. As transdiagnostic approaches have the potential to address multiple co-existing anxiety disorders, they are potentially more clinically relevant than single anxiety disorder interventions. They may also have advantages in ease of dissemination and in treating anxiety disorder not otherwise specified. Conclusions: The merits of the various transdiagnostic cognitive-behavioral approaches that have been proposed are reviewed. Such approaches have potential benefits, particularly in striking the balance between completely idiosyncratic formulations and diagnosis-driven treatments of anxiety disorders. However, caution is needed to ensure that transdiagnostic theories and treatments benefit from the progress made by research on diagnosis-specific treatments, and further empirical work is needed to identify the shared maintaining processes that need to be targeted in the treatment of anxiety disorders.