982 results for Inverse method
Abstract:
The purpose of this paper is to describe the development and to test the reliability of a new method, called INTERMED, for health service needs assessment. The INTERMED integrates the biopsychosocial aspects of disease and the relationship between patient and health care system in a comprehensive scheme, and reflects an operationalized conceptual approach to case mix or case complexity. The method was developed to enhance interdisciplinary communication between (para-)medical specialists and to provide a way to describe case complexity for clinical, scientific, and educational purposes. First, a feasibility study (N = 21 patients) was conducted, which included double scoring and discussion of the results. This led to a version of the instrument on which two interrater reliability studies were performed. In study 1, the INTERMED was double scored for 14 patients admitted to an internal ward by a psychiatrist and an internist on the basis of a joint interview conducted by both. In study 2, on the basis of medical charts, two clinicians separately double scored the INTERMED in 16 patients referred to the outpatient psychiatric consultation service. Averaged over both studies, 94.2% of all ratings showed no important difference between the raters (defined as a difference of more than 1 point). As a research interview, the INTERMED takes about 20 minutes; as part of the whole process of history taking, it takes about 15 minutes. In both studies, the results suggested improvements. Analyses of study 1 revealed considerable agreement on most items; some items were improved. Also, the reference point for the prognoses was changed so that it reflects both short- and long-term prognoses. Analyses of study 2 showed that in this setting less agreement between the raters was obtained, because the raters were less experienced and the scoring procedure was more susceptible to differences.
Some improvements, mainly of the anchor points, were specified that may further enhance interrater reliability. The INTERMED proves to be a reliable method for classifying patients' care needs, especially when used by experienced raters scoring by patient interview. It can be a useful tool in assessing patients' care needs, as well as the level of needed adjustment between general and mental health service delivery. The INTERMED is easily applicable in the clinical setting at low time cost.
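The agreement criterion used above (no difference of more than 1 point between raters) is straightforward to compute. A hypothetical sketch with invented item scores, not the INTERMED data:

```python
def agreement_rate(rater_a, rater_b, tolerance=1):
    """Fraction of paired ratings whose absolute difference is <= tolerance."""
    pairs = list(zip(rater_a, rater_b))
    close = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return close / len(pairs)

# Invented example scores on a 0-3 scale for eight items:
psychiatrist = [0, 1, 2, 3, 1, 2, 0, 2]
internist = [0, 2, 2, 3, 0, 3, 1, 0]
print(f"{agreement_rate(psychiatrist, internist):.1%}")  # prints 87.5%
```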
Abstract:
The application of the Fry method to measuring strain in deformed porphyritic granites is discussed. The method requires that the distribution of markers satisfy at least two conditions: it must be homogeneous and isotropic. Homogeneity can easily be tested with statistics on the point distribution using a Morishita diagram, and isotropy can be checked with a cumulative histogram of the angles between points. Application of these tests to an undeformed (Mte Capanne granite, Elba) and a deformed (Randa orthogneiss, Alps of Switzerland) porphyritic granite reveals that the K-feldspar phenocrysts of both satisfy these conditions and can be used as strain markers with the Fry method. Other problems are also examined, such as the possible localization of deformation on discrete shear bands. Provided these tests are satisfied, we conclude that the Fry method can be used to estimate strain in deformed porphyritic granites.
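The core of the Fry method, the all-object separation plot, can be sketched in a few lines. This is a minimal illustration with invented marker coordinates, not the paper's data or code: every ordered pair of marker centres contributes one separation vector, and the vacancy around the origin of the resulting point cloud approximates the strain ellipse.

```python
from itertools import permutations

def fry_vectors(points):
    """Separation vectors between every ordered pair of 2-D points."""
    return [(bx - ax, by - ay) for (ax, ay), (bx, by) in permutations(points, 2)]

# Invented phenocryst centres (map coordinates):
centres = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.1), (0.5, 1.5)]
cloud = fry_vectors(centres)
print(len(cloud))  # prints 12, i.e. n*(n-1) vectors for n = 4 markers
```

In practice the cloud would be plotted and an ellipse fitted to its central vacancy; the homogeneity and isotropy tests named above guard against spurious vacancies from clustered or anisotropic marker distributions.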
Abstract:
The research presented in this report provides the basis for the development of a new procedure to be used by the Iowa DOT and the cities and counties in the state to deal with detours. Although the project initially focused on investigating new tools to determine condition and compensation, the focus shifted to traffic and the gas tax method as the basis for the new procedure. It was concluded that the condition-based approach, even though accurate and consistent condition evaluations can be achieved, is not feasible or cost-effective because of current data collection practices (a two-year cycle) and the logistics of the procedure (before-and-after determination). The gas tax method provides a simple, easy-to-implement, and consistent approach to compensation for the use of detours. It removes the subjectivity from the current procedures and provides a more realistic, traffic-based approach to determining compensation.
Abstract:
We describe a simple method to achieve both hemostasis and stabilization of the left anterior descending coronary artery during minimally invasive coronary artery bypass grafting. This technique allows the surgeon to perform a precise anastomosis of the left internal mammary artery to the target vessel on a beating heart.
Abstract:
Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection in intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improving the three-dimensional quantification of aneurysmal morphology, automating the analysis, and hence reducing the inherent intra- and inter-observer variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy.
Abstract:
In this paper a method for extracting semantic information from online music discussion forums is proposed. The semantic relations are inferred from the co-occurrence of musical concepts in forum posts, using network analysis. The method starts by defining a dictionary of common music terms in an art music tradition. Then, it creates a complex network representation of the online forum by matching such a dictionary against the forum posts. Once the complex network is built, we can study different network measures, including node relevance, node co-occurrence, and term relations via semantically connecting words. Moreover, we can detect communities of concepts inside the forum posts. The rationale is that some music terms are more related to each other than to other terms. All in all, this methodology allows us to obtain meaningful and relevant information from forum discussions.
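The dictionary-matching and co-occurrence step described above can be sketched as follows. The dictionary, posts, and naive substring matching are invented for illustration and are not the paper's actual implementation:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(posts, dictionary):
    """Weighted edges between dictionary terms that co-occur in a post."""
    edges = Counter()
    for post in posts:
        # Naive substring matching for brevity; a real system would tokenize.
        found = sorted({t for t in dictionary if t in post.lower()})
        for a, b in combinations(found, 2):
            edges[(a, b)] += 1
    return edges

posts = [
    "The raga and the tala define this performance",
    "A slow raga in the alap section",
    "Tala cycles structure the composition",
]
terms = {"raga", "tala", "alap"}
print(cooccurrence_edges(posts, terms))
```

The resulting weighted edge list is the complex network on which node relevance, co-occurrence measures, and community detection could then be computed.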
Abstract:
Lexical Resources are a critical component of Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources and a broader range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method for the automatic merging of resources. This method includes both the automatic mapping of the resources involved to a common format and their merging once in this format. This paper presents how we have addressed the merging of two verb subcategorization frame lexica for Spanish, but our method will be extended to cover other types of Lexical Resources. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
Abstract:
Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by three hallmarks: (a) an exponential increase with age; (b) selective neuronal vulnerability; (c) inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events: mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments.
To this end, the general motivation of my thesis is to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure, rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters.
This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
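One common way to realise this kind of source-wavelet estimation is a water-level spectral division of an observed trace by a simulated impulse response. The sketch below uses that textbook approach with synthetic numbers; it is not the thesis' actual algorithm or data.

```python
import numpy as np

def estimate_wavelet(observed, greens, eps=1e-3):
    """Estimate a source wavelet by spectral division of an observed trace
    by a simulated impulse response, stabilised by a water level."""
    D = np.fft.rfft(observed)
    G = np.fft.rfft(greens, n=len(observed))
    W = D * np.conj(G) / (np.abs(G) ** 2 + eps * np.max(np.abs(G)) ** 2)
    return np.fft.irfft(W, n=len(observed))

# Synthetic check: a known wavelet convolved with a spike is recovered.
true_wavelet = np.array([0.0, 1.0, -0.5, 0.25, 0.0, 0.0, 0.0, 0.0])
greens = np.zeros(8)
greens[0] = 1.0  # delta impulse response
observed = np.fft.irfft(np.fft.rfft(true_wavelet) * np.fft.rfft(greens), n=8)
recovered = estimate_wavelet(observed, greens)
print(np.allclose(recovered, true_wavelet, atol=1e-2))  # prints True
```

In an inversion loop, the impulse response would come from a forward simulation through the current model, and the wavelet estimate would be averaged over many traces and updated at each iteration.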
Abstract:
AIM: To prospectively study the intraocular pressure (IOP) lowering effect and safety of the new method of very deep sclerectomy with collagen implant (VDSCI) compared with standard deep sclerectomy with collagen implant (DSCI). METHODS: The trial involved 50 eyes of 48 patients with medically uncontrolled primary and secondary open-angle glaucoma, randomized to undergo either the VDSCI procedure (25 eyes) or the DSCI procedure (25 eyes). Follow-up examinations were performed before surgery and after surgery at day 1, week 1, and months 1, 2, 3, 6, 9, 12, 18, and 24. Ultrasound biomicroscopy was performed at 3 and 12 months. RESULTS: The mean follow-up period was 18.6+/-5.9 (VDSCI) and 18.9+/-3.6 (DSCI) months (P=NS). Mean preoperative IOP was 22.4+/-7.4 mm Hg for VDSCI and 20.4+/-4.4 mm Hg for DSCI eyes (P=NS). Mean postoperative IOP was 3.9+/-2.3 (VDSCI) and 6.3+/-4.3 (DSCI) mm Hg (P<0.05) at day 1, and 12.2+/-3.9 (VDSCI) and 13.3+/-3.4 (DSCI) mm Hg (P=NS) at month 24. At the last visit, the complete success rate (defined as an IOP of < or =18 mm Hg and a drop of at least 20%, achieved without medication) was 57% in VDSCI and 62% in DSCI eyes (P=NS). Ultrasound biomicroscopy at 12 months showed a mean volume of the subconjunctival filtering bleb of 3.9+/-4.2 mm3 (VDSCI) and 6.8+/-7.5 mm3 (DSCI) (P=0.426), and of 5.2+/-3.6 mm3 (VDSCI) and 5.4+/-2.9 mm3 (DSCI) (P=0.902) for the intrascleral space. CONCLUSIONS: Very deep sclerectomy appears to provide stable and good control of IOP at 2 years of follow-up, with few postoperative complications, similar to standard deep sclerectomy with collagen implant.
Abstract:
BACKGROUND: Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. OBJECTIVE: To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. MATERIALS AND METHODS: Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI(vol) 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. RESULTS: The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. CONCLUSION: Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone.
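As a point of reference for the ASIR percentages above: the percentage setting is often described as a linear blend between the filtered back-projection image and the fully iterative reconstruction, so 40% ASIR mixes 40% iterative with 60% FBP. A toy sketch with invented pixel values (this blending description is a common characterisation of the technique, not taken from the study):

```python
def asir_blend(fbp_pixel, ir_pixel, percent):
    """Linearly blend an FBP and an iterative-reconstruction pixel value."""
    p = percent / 100.0
    return (1.0 - p) * fbp_pixel + p * ir_pixel

# Invented HU-like values: noisier FBP pixel vs. smoother iterative pixel.
print(asir_blend(100.0, 80.0, 40))  # prints 92.0 (60% FBP + 40% IR)
```

This linearity is why intermediate settings (20-40%) can trade off the noise of pure FBP against the over-smoothed, "noise-free" appearance the radiologists observed at 100% ASIR.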
Abstract:
The overall thermogenic response to food intake measured over a whole day in 20 young nondiabetic obese women (body fat, mean +/- SEM: 38.6 +/- 0.7%) was compared with that obtained in eight nonobese control women (body fat: 24.7 +/- 0.9%). The energy expenditure of the subjects was continuously measured over 24 h in a respiration chamber, and spontaneous activity was assessed by a radar system. A new approach was used to obtain the integrated thermogenic response to the three meals ingested over the day (from 8:30 AM to 10:30 PM). This method makes it possible to subtract the energy expended on physical activity from total energy expenditure and to calculate the integrated diet-induced thermogenesis as the difference between the energy expended without physical activity and the basal metabolic rate. The thermogenic response to the three meals (expressed as a percentage of the total energy ingested) was found to be blunted in the obese women (8.7 +/- 0.8%) compared with that of the controls (14.8 +/- 1.1%). There was an inverse correlation between percentage body fat and diet-induced thermogenesis (r = -0.61, p < 0.001). In addition, the relative increase in diurnal urinary norepinephrine excretion was lower in the obese than in the control subjects. It is concluded that a low overall thermogenic response to feeding may be a contributing factor to energy storage in some obese subjects; a blunted response of the sympathetic nervous system could explain this low thermogenic response.
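The calculation described above reduces to a simple difference expressed as a percentage of intake. A worked sketch with invented numbers, not the study's data:

```python
def diet_induced_thermogenesis(ee_without_activity, bmr, energy_ingested):
    """Integrated diet-induced thermogenesis as a percentage of ingested
    energy: (energy expended without activity - basal metabolic rate),
    divided by the energy ingested."""
    return 100.0 * (ee_without_activity - bmr) / energy_ingested

# Hypothetical day: 7.4 MJ expended excluding activity, 6.2 MJ basal
# metabolic rate, 8.0 MJ ingested across the three meals.
print(round(diet_induced_thermogenesis(7.4, 6.2, 8.0), 1))  # prints 15.0
```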
Inverse association between circulating vitamin D and mortality: dependent on sex and cause of death?
Abstract:
BACKGROUND AND AIMS: In various populations, vitamin D deficiency is associated with chronic diseases and mortality. We examined the association between concentration of circulating 25-hydroxyvitamin D [25(OH)D], a marker of vitamin D status, and all-cause as well as cause-specific mortality. METHODS AND RESULTS: The study included 3404 participants of the general adult Swiss population, who were recruited between November 1988 and June 1989 and followed up until the end of 2008. Circulating 25(OH)D was measured by protein-bound assay. Cox proportional hazards regression was used to examine the association between 25(OH)D concentration and all-cause and cause-specific mortality, adjusting for sex, age, season, diet, nationality, blood pressure, and smoking status. Per 10 ng/mL increase in 25(OH)D concentration, all-cause mortality decreased by 20% (HR = 0.83; 95% CI 0.74-0.92). 25(OH)D concentration was inversely associated with cardiovascular mortality in women (HR = 0.68, 95% CI 0.46-1.00 per 10 ng/mL increase), but not in men (HR = 0.97; 95% CI 0.77-1.23). In contrast, 25(OH)D concentration was inversely associated with cancer mortality in men (HR = 0.72, 95% CI 0.57-0.91 per 10 ng/mL increase), but not in women (HR = 1.14, 95% CI 0.93-1.39). Multivariate adjustment only slightly modified the 25(OH)D-mortality association. CONCLUSION: 25(OH)D was similarly inversely related to all-cause mortality in men and women. However, we observed opposite effects in women and men with respect to cardiovascular and cancer mortality.
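Because the Cox model is log-linear in the covariate, a per-10 ng/mL hazard ratio can be rescaled to other increments by exponentiation. A small numeric illustration (the 0.83 figure is quoted from the abstract; the 20 ng/mL extrapolation is purely illustrative, not a result of the study):

```python
import math

def scale_hazard_ratio(hr_per_unit, units):
    """Hazard ratio for `units` increments of the covariate, assuming the
    log-linear form of the Cox model: HR(k) = HR(1) ** k."""
    return math.exp(units * math.log(hr_per_unit))

# Two 10 ng/mL steps, i.e. a 20 ng/mL increase in 25(OH)D:
print(round(scale_hazard_ratio(0.83, 2.0), 2))  # prints 0.69
```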