873 results for "Multiple scales method"
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is expected to maintain. We implement control of the FDR, a quantity largely used to address multiple comparisons problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed to be independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of areas declared high-risk (where the null hypothesis is rejected) by averaging the corresponding b_i's. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR based decision (or selection) rules. The sensitivity and specificity of such rules depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of relative risk values as in the Besag, York and Mollié (1991) model.
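As an illustration, the estimated-FDR selection rule sketched above fits in a few lines of Python. The function name and the plain-list interface are ours, and the b_i values would in practice come from MCMC output; this is a minimal sketch, not the paper's implementation.

```python
def fdr_select(b, target):
    """Select areas whose estimated FDR stays at or below `target`.

    b: posterior probabilities of the null (absence of risk), one per area.
    The estimated FDR of a rejection set is the mean of its b_i's; the rule
    rejects as many areas as possible without exceeding the target.
    Returns the indices of the areas declared high-risk.
    """
    # Consider areas in order of increasing posterior null probability.
    order = sorted(range(len(b)), key=lambda i: b[i])
    selected, running_sum = [], 0.0
    for i in order:
        running_sum += b[i]
        # Mean b_i of the candidate rejection set = its estimated FDR.
        if running_sum / (len(selected) + 1) > target:
            break
        selected.append(i)
    return selected
```

For example, with posterior null probabilities [0.01, 0.02, 0.30, 0.8] and a target of 0.10, only the first two areas are selected: adding the third would push the estimated FDR of the set to 0.11.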
A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all b_i's lower than a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR based decision rules. To investigate the over-smoothing of relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR based decision rules is generally low, but specificity is high. In such scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be recommended.
In cases where the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
Abstract:
Array seismology is a useful tool for detailed investigation of the Earth's interior. By exploiting the coherence properties of the wavefield, seismic arrays can extract directivity information and increase the ratio of coherent signal amplitude to incoherent noise amplitude. The Double Beam Method (DBM), developed by Krüger et al. (1993, 1996), is one possible application for refined seismic investigation of the crust and mantle using seismic arrays. The DBM is based on a combination of source and receiver arrays, leading to a further improvement of the signal-to-noise ratio by reducing the error in the location of coherent phases. Previous DBM work has addressed mantle and core/mantle resolution (Krüger et al., 1993; Scherbaum et al., 1997; Krüger et al., 2001). An implementation of the DBM is presented at large 2D scale (Italian data set for the Mw = 9.3 Sumatra earthquake) and at 3D crustal scale as proposed by Rietbrock & Scherbaum (1999), applying the revised version of the Source Scanning Algorithm (SSA; Kao & Shan, 2004). In the 2D application, the propagation of the rupture front in time is computed. In the 3D application, the study area (20 x 20 x 33 km^3), the data set and the source-receiver configurations are those of the KTB-1994 seismic experiment (Jost et al., 1998). We used 60 short-period seismic stations (200-Hz sampling rate, 1-Hz sensors) arranged in 9 small arrays deployed in 2 concentric rings of about 1 km (A-arrays) and 5 km (B-array) radius. The coherence values of the scattering points are computed in the crustal volume over a finite time window across all array stations, given the hypothesized origin time and source location. The resulting images can be seen as a (relative) joint log-likelihood that any point in the subsurface has contributed to the full set of observed seismograms.
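The receiver-side half of such beam processing reduces to delay-and-sum stacking: traces are time-shifted according to the moveout predicted for a trial source point and then averaged, so that coherent phases add constructively. A minimal sketch (hypothetical function and synthetic delays, not the actual DBM implementation):

```python
import numpy as np

def delay_and_sum(traces, delays, dt):
    """Align traces by per-station delays (seconds) and stack them.

    traces: (n_stations, n_samples) array of seismograms;
    delays: predicted arrival delay per station for a trial source point;
    dt: sampling interval in seconds.
    Returns the stacked beam trace (coherent energy adds, noise averages out).
    """
    n_sta, n_samp = traces.shape
    beam = np.zeros(n_samp)
    for k in range(n_sta):
        shift = int(round(delays[k] / dt))
        # Advance each trace by its delay so the target phase aligns.
        # (np.roll wraps around; adequate for this illustration.)
        beam += np.roll(traces[k], -shift)
    return beam / n_sta
```

Scanning trial source points and keeping the beam power at each one is the basic idea behind coherence imaging of the kind the SSA performs.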
Abstract:
Background: MPLC (multiple primary lung cancer) represents a diagnostic challenge. The topic under discussion is how to classify these patients as having metastatic or multifocal disease. While in cases of different histology there is little doubt, in cases of the same histology it is mandatory to investigate other clinical features to settle this question. Materials and Methods: A retrospective review identified all patients treated surgically for a presumed diagnosis of SPLC. Pre-operative staging was obtained with total CT scan, fluorodeoxyglucose positron emission tomography and mediastinoscopy. Patients with nodal involvement or extra-thoracic localization were excluded from this study. Epidermal growth factor receptor (EGFR) expression was evaluated with complete immunohistochemical analysis. Survival was estimated using the Kaplan-Meier method, and clinical features were evaluated using a log-rank test or Cox proportional hazards model for categorical and continuous variables, respectively. Results: According to the American College of Chest Physicians criteria, 18 patients underwent surgical resection for a diagnosis of MPLC. Of these, 8 patients had 3 or more nodules while 10 patients had fewer than 3 nodules. Pathologic examination demonstrated adenocarcinoma in 13/18 (70%) of patients with multiple histological types, squamous carcinoma in 2/18 (10%), large cell carcinoma in 2/18 (10%) and adenosquamous carcinoma in 1/18 (5%). Expression of EGFR was evaluated in all nodules: in 7 of 18 patients (38%) the percentage of expression differed between nodules. Conclusions: MPLC represents a multifocal disease in which integrating clinical information with biological studies reinforces the diagnosis. EGFR could contribute to differentiating the nodules. However, further research is necessary to validate this hypothesis.
Abstract:
The development and growth of plants are strongly affected by the interactions between roots, root-associated organisms and rhizosphere communities. Methods to assess such interactions are hard to develop, particularly in perennial and woody plants, due to their complex root system structure and their temporally changing physiology. In this respect, grape root systems are not well investigated. The aim of the present work was the development of a method to assess and predict interactions at the root system of rootstocks (Vitis berlandieri x Vitis riparia) in the field. To achieve this aim, grape phylloxera (Daktulosphaira vitifoliae Fitch, Hemiptera, Aphidoidea) was used as a grape-root parasitizing model. To develop the methodical approach, a long-term trial (2006-2009) was arranged in a commercially used vineyard in Geisenheim/Rheingau. Every 2 to 8 weeks, the topmost 20 cm of soil under the foliage wall were investigated and root material was extracted (n = 8-10). To include temporal, spatial and cultivar-specific root system dynamics, the extracted root material was digitally analyzed for its morphological properties. The grape phylloxera population was quantified and characterized visually on the basis of larval stages (oviparous, non-oviparous and winged preliminary stages). Infection patches (nodosities) were characterized visually as well, partly supported by digital root color analyses. Due to the known effects of fungal endophytes on the vitality of grape phylloxera infested grapevines, fungal endophytes were isolated from nodosity and root tissue and afterwards characterized (morphotypes). Further abiotic and biotic soil conditions of the vineyards were assessed.
The temporal, spatial and cultivar-specific sensitivity of single parameters was analyzed by omnibus tests (ANOVAs) and subsequent post-hoc tests. The relations between different parameters were analyzed by multiple regression models. Quantitative parameters were developed to assess the degeneration of nodosities and the development of nodosity-attached roots, and to differentiate between nodosities and other root swellings in the field. Significant differences were shown between parameters that include root dynamics and parameters that ignore them. Regarding the description of grape phylloxera population and root system dynamics, the method showed a high temporal, spatial and cultivar-specific sensitivity. Further, specific differences could be shown in the frequency of endophyte morphotypes between root and nodosity tissue as well as between cultivars. Degeneration of nodosities as well as nodosity occupation rates could be related to the calculated abundances of the grape phylloxera population. Further ecological questions concerning grape root development (e.g. the relation between moisture and root development) and grape phylloxera population development (e.g. the relation between temperature and population structure) could be answered for field conditions. Generally, the presented work provides an approach to evaluate the vitality of grape root systems. This approach can be useful for the development of control strategies against soilborne pests in viticulture (e.g. grape phylloxera, Sorosphaera viticola, Roesleria subterranea (Weinm.) Redhead) as well as for the evaluation of integrated management systems in viticulture.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a substantial training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification.
Results show that classification accuracy still requires improvement, but models generated from one domain can be effectively reused in a different one.
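The iterative centroid adaptation idea can be sketched briefly: category profiles are first built from the labeled source domain, then repeatedly re-estimated from the target documents they attract. This is our own simplified variant with dot-product similarity, not the thesis implementation:

```python
import numpy as np

def adapt_centroids(src_X, src_y, tgt_X, n_iter=5):
    """Nearest-centroid cross-domain sketch.

    src_X: (n_src, d) source document vectors with labels src_y;
    tgt_X: (n_tgt, d) unlabeled target document vectors (assumed
    normalized so dot product acts as a similarity).
    Returns the adapted centroids and predicted target labels.
    """
    labels = sorted(set(src_y))
    # Initial profile of each category: mean vector of its source docs.
    cents = np.array([src_X[np.array(src_y) == c].mean(axis=0) for c in labels])
    for _ in range(n_iter):
        # Assign each target document to its most similar profile.
        assign = np.argmax(tgt_X @ cents.T, axis=1)
        # Re-estimate each profile from the target docs it attracted.
        for j in range(len(labels)):
            mask = assign == j
            if mask.any():
                cents[j] = tgt_X[mask].mean(axis=0)
    return cents, [labels[j] for j in np.argmax(tgt_X @ cents.T, axis=1)]
```

After a few iterations the profiles drift from the source-domain vocabulary toward the target-domain one, which is the intuition behind the reported cross-domain accuracy.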
Abstract:
Microneurography is a method suitable for recording intraneural single or multiunit action potentials in conscious subjects. Microneurography has rarely been applied to animal experiments, where more invasive methods, like the teased fiber recording technique, are widely used. We have tested the feasibility of microneurographic recordings from the peripheral nerves of rats. Tungsten microelectrodes were inserted into the sciatic nerve at mid-thigh level. Single or multiunit action potentials evoked by regular electrical stimulation were recorded, digitized and displayed as a raster plot of latencies. The method allows unambiguous recording and recognition of single C-fiber action potentials from an in vivo preparation, with minimal disruption of the nerve being recorded. Multiple C-fibers can be recorded simultaneously for several hours, and if the animal is allowed to recover, repeated recording sessions can be obtained from the same nerve at the same level over a period of weeks or months. Also, single C units can be functionally identified by their changes in latency to natural stimuli, and insensitive units can be recognized as 'silent' nociceptors or sympathetic efferents by their distinctive profiles of activity-dependent slowing during repetitive electrical stimulation, or by the effect on spontaneous efferent activity of a proximal anesthetic block. Moreover, information about the biophysical properties of C axons can be obtained from their latency recovery cycles. Finally, we show that this preparation is potentially suitable for the study of C-fiber behavior in models of neuropathies and nerve lesions, both under resting conditions and in response to drug administration.
Abstract:
The PM3 quantum-mechanical method is able to model the magic water clusters (H2O)21 and (H2O)21H+. Results indicate that the H3O+ ion is tightly bound within the (H2O)20 cluster by multiple hydrogen bonds, causing deformation of the symmetric (H2O)20 pentagonal dodecahedron structure. The structures, energetics, and hydrogen bond patterns of six local-minimum (H2O)21H+ clusters are presented.
Abstract:
The chemotherapeutic drug 5-fluorouracil (5-FU) is widely used for treating solid tumors. Response to 5-FU treatment is variable, with 10-30% of patients experiencing serious toxicity, partly explained by reduced activity of dihydropyrimidine dehydrogenase (DPD). DPD converts endogenous uracil (U) into 5,6-dihydrouracil (UH2) and, analogously, 5-FU into 5-fluoro-5,6-dihydrouracil (5-FUH2). Combined quantification of U and UH2 with 5-FU and 5-FUH2 may provide a pre-therapeutic assessment of DPD activity and further guide drug dosing during therapy. Here, we report the development of a liquid chromatography-tandem mass spectrometry assay for simultaneous quantification of U, UH2, 5-FU and 5-FUH2 in human plasma. Samples were prepared by liquid-liquid extraction with 10:1 ethyl acetate-2-propanol (v/v). The evaporated samples were reconstituted in 0.1% formic acid and 10 μL aliquots were injected into the HPLC system. Analyte separation was achieved on an Atlantis dC18 column with a mobile phase consisting of 1.0 mM ammonium acetate, 0.5 mM formic acid and 3.3% methanol. Positively ionized analytes were detected by multiple reaction monitoring. The analytical response was linear in the range 0.01-10 μM for U, 0.1-10 μM for UH2, 0.1-75 μM for 5-FU and 0.75-75 μM for 5-FUH2, covering the expected concentration ranges in plasma. The method was validated following the FDA guidelines and applied to clinical samples obtained from ten 5-FU-treated colorectal cancer patients. The present method merges the analysis of 5-FU pharmacokinetics and DPD activity into a single assay, representing a valuable tool to improve the efficacy and safety of 5-FU-based chemotherapy.
Abstract:
OBJECTIVE: To compare the individual latency distributions of motor evoked potentials (MEP) in patients with multiple sclerosis (MS) to the previously reported results in healthy subjects (Firmin et al., 2011). METHODS: We applied the previously reported method to measure the distribution of MEP latencies to 16 patients with MS. The method is based on transcranial magnetic stimulation and consists of a combination of the triple stimulation technique with a method originally developed to measure conduction velocity distributions in peripheral nerves. RESULTS: MEP latency distributions in MS typically showed two peaks. The individual MEP latency distributions were significantly wider in patients with MS than in healthy subjects. The mean triple stimulation delay extension at the 75% quantile, a proxy for MEP latency distribution width, was 7.3 ms in healthy subjects and 10.7 ms in patients with MS. CONCLUSIONS: In patients with MS, slow portions of the central motor pathway contribute more to the MEP than in healthy subjects. The bimodal distribution found in healthy subjects is preserved in MS. SIGNIFICANCE: Our method to measure the distribution of MEP latencies is suitable to detect alterations in the relative contribution of corticospinal tract portions with long MEP latencies to motor conduction.
Abstract:
BACKGROUND: Assessment of lung volume (FRC) and ventilation inhomogeneities with ultrasonic flowmeter and multiple breath washout (MBW) has been used to provide important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants with critically small tidal volume changes during breathing. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher and variations of deadspace volumes larger than previously assumed. Both showed considerable impact upon FRC and LCI results with high variability when obtained with the previously used analysis model. Using the measured temperature we optimized model parameters and tested a newly derived analysis method, which was found to be more robust to variations in deadspace. Comparison between both analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method using the only currently available commercial ultrasonic flowmeter in infants may help to improve stability of the analysis and further facilitate assessment of lung volume and ventilation inhomogeneities in infants.
Abstract:
Multiple outcomes data are commonly used to characterize treatment effects in medical research, for instance, multiple symptoms to characterize potential remission of a psychiatric disorder. Often a global, i.e. symptom-invariant, treatment effect is evaluated. Such a treatment effect may overgeneralize the effect across the outcomes. On the other hand, individual treatment effects, varying across all outcomes, are complicated to interpret, and their estimation may lose precision relative to a global summary. An effective compromise may be to summarize the treatment effect through patterns of the treatment effects, i.e. "differentiated effects." In this paper we propose a two-category model to differentiate treatment effects into two groups. A model fitting algorithm and simulation study are presented, and several methods are developed to analyze the heterogeneity present in the treatment effects. The method is illustrated using an analysis of schizophrenia symptom data.
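As a toy stand-in for the two-category idea, per-outcome effect estimates can be split into two groups by scanning all split points of the sorted effects and minimizing the within-group sums of squares. This is a hypothetical illustration of grouping differentiated effects, not the paper's model-based algorithm:

```python
def two_group_split(effects):
    """Partition outcome-specific effect estimates into two groups (0/1)
    by minimizing the total within-group sum of squares over all split
    points of the sorted effects (1-D two-means)."""
    xs = sorted(effects)

    def wss(group):
        m = sum(group) / len(group)
        return sum((x - m) ** 2 for x in group)

    # Try every split of the sorted values into a lower and an upper group.
    best = min(range(1, len(xs)), key=lambda k: wss(xs[:k]) + wss(xs[k:]))
    cut = xs[best]  # smallest value belonging to the upper group
    return [0 if e < cut else 1 for e in effects]
```

A model-based version would additionally carry uncertainty in the effect estimates, which is where the paper's fitting algorithm comes in.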
Abstract:
Visualization and exploratory analysis are an important part of any data analysis and are made more challenging when the data are voluminous and high-dimensional. One such example is environmental monitoring data, which are often collected over time and at multiple locations, resulting in a geographically indexed multivariate time series. Financial data, although not necessarily containing a geographic component, present another source of high-volume multivariate time series data. We present the mvtsplot function which provides a method for visualizing multivariate time series data. We outline the basic design concepts and provide some examples of its usage by applying it to a database of ambient air pollution measurements in the United States and to a hypothetical portfolio of stocks.
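mvtsplot is an R function; its image panels rest on reducing each series to a small number of categorical levels before plotting. A rough Python sketch of that discretization step (an assumed low/medium/high tertile rule for illustration, not the package's exact algorithm):

```python
def tertile_codes(series):
    """Discretize one time series into low/medium/high codes (0/1/2)
    by its empirical tertiles, mimicking the categorical summary behind
    an mvtsplot-style image panel."""
    xs = sorted(series)
    n = len(xs)
    # Empirical tertile cut points (simple order-statistic rule).
    q1, q2 = xs[n // 3], xs[(2 * n) // 3]
    return [0 if v < q1 else (1 if v < q2 else 2) for v in series]
```

Stacking one coded row per series and rendering the matrix as an image gives the kind of compact overview the abstract describes for pollution or stock data.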
Abstract:
We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, true outcome — aka the gold standard — and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and for these estimates to be calibrated and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene mutation prediction scores. In this example, genotype is measured with error by one or more genetic assays. We estimate true genotype for each individual in the dataset, operating characteristics of the commonly used genotyping procedures and a relative weighting of the scores. Finally, we compare the scores against the gold standard genotype and find that Mendelian scores are, on average, the more refined and better calibrated of those considered and that the comparison is sensitive to measurement error in the gold standard.
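The measurement-error idea can be illustrated with a single Bayes step: the posterior probability of carrier status given repeated binary assay results with known sensitivity and specificity. This is a hypothetical simplification of the paper's joint model, which estimates these operating characteristics rather than assuming them:

```python
def genotype_posterior(assays, prior, sens, spec):
    """Posterior probability of being a mutation carrier given a list of
    binary assay results (1 = positive), a prior carrier probability, and
    assay sensitivity/specificity assumed known and shared across assays.
    """
    like_carrier = like_noncarrier = 1.0
    for r in assays:
        # Likelihood of this result under each true genotype.
        like_carrier *= sens if r else (1 - sens)
        like_noncarrier *= (1 - spec) if r else spec
    num = prior * like_carrier
    return num / (num + (1 - prior) * like_noncarrier)
```

For instance, two positive results from a 90%-sensitive, 90%-specific assay move a 0.5 prior to roughly 0.99, which is why repeated genotyping sharpens the estimated gold standard.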
Abstract:
OBJECTIVE: A previous study of radiofrequency neurotomy of the articular branches of the obturator nerve for hip joint pain produced modest results. Based on an anatomical and radiological study, we sought to define a potentially more effective radiofrequency method. DESIGN: Ten cadavers were studied, four of them bilaterally. The obturator nerve and its articular branches were marked by wires. Their radiological relationship to the bone structures on fluoroscopy was imaged and analyzed. A magnetic resonance imaging (MRI) study was undertaken on 20 patients to determine the structures that would be encountered by the radiofrequency electrode during different possible percutaneous approaches. RESULTS: The articular branches of the obturator nerve vary in location over a wide area. The previously described method of denervating the hip joint did not take this variation into account. Moreover, it approached the nerves perpendicularly. Because optimal coagulation requires electrodes to lie parallel to the nerves, a perpendicular approach probably produced only a minimal lesion. In addition, MRI demonstrated that a perpendicular approach is likely to puncture femoral vessels. Vessel puncture can be avoided if an oblique pass is used. Such an approach minimizes the angle between the target nerves and the electrode, and increases the likelihood of the nerve being captured by the lesion made. Multiple lesions need to be made in order to accommodate the variability in location of the articular nerves. CONCLUSIONS: The method that we described has the potential to produce complete and reliable nerve coagulation. Moreover, it minimizes the risk of penetrating the great vessels. The efficacy of this approach should be tested in clinical trials.
Abstract:
The goal of this research is to provide a framework for the vibro-acoustic analysis and design of a multiple-layer constrained damping structure. The existing research on damping and the viscoelastic damping mechanism is limited to four mainstream approaches: modeling techniques for damping treatments/materials; control through the electro-mechanical effect using a piezoelectric layer; optimization by adjusting the parameters of the structure to meet design requirements; and identification of the damping material's properties through the response of the structure. This research proposes a systematic design methodology for the multiple-layer constrained damping beam giving consideration to vibro-acoustics. A modeling technique to study the vibro-acoustics of multiple-layered viscoelastic laminated beams using the Biot damping model is presented, based on a hybrid numerical model. The boundary element method (BEM) is used to model the acoustical cavity, whereas the finite element method (FEM) is the basis for the vibration analysis of the multiple-layered beam structure. Through the proposed procedure, the analysis can easily be extended to other complex geometries with arbitrary boundary conditions. The nonlinear behavior of viscoelastic damping materials is represented by the Biot damping model, taking into account the effects of frequency, temperature and different damping materials for individual layers. A curve-fitting procedure used to obtain the Biot constants for different damping materials at each temperature is explained. The results from the structural vibration analysis for selected beams agree with published closed-form results, and the radiated-noise results for a sample beam structure obtained using commercial BEM software are compared with the acoustical results for the same beam using the Biot damping model.
The extension of the Biot damping model is demonstrated for the MDOF (multiple degrees of freedom) dynamics equations of a discrete system, in order to introduce different types of viscoelastic damping materials. The mechanical properties of viscoelastic damping materials, such as shear modulus and loss factor, change with ambient temperature and frequency. The application of a multiple-layer treatment increases the damping of the structure significantly and thus helps to attenuate vibration and noise over a broad range of frequency and temperature. The main contributions of this dissertation include the following three major tasks: 1) study of the viscoelastic damping mechanism and the dynamics equation of a multilayer damped system incorporating the Biot damping model; 2) building the finite element method (FEM) model of the multiple-layer constrained viscoelastic damping beam and conducting the vibration analysis; 3) extending the vibration problem to the boundary element method (BEM) based acoustical problem and comparing the results with commercial simulation software.
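The MDOF frequency-response computation underlying such an analysis can be sketched in the generic viscoelastic setting, where the stiffness matrix is complex and frequency dependent; this is not the specific Biot parameterization, just the standard x(ω) = (K(ω) - ω²M)⁻¹ f receptance step:

```python
import numpy as np

def receptance(M, K_fn, omegas, force):
    """Steady-state displacement response of an MDOF system with a
    frequency-dependent complex stiffness (e.g. a viscoelastic core):
    solves (K(w) - w^2 M) x = f at each circular frequency w.

    M: (n, n) mass matrix; K_fn: callable returning the complex (n, n)
    stiffness at w; force: (n,) load vector. Returns one solution per w.
    """
    return np.array([np.linalg.solve(K_fn(w) - w**2 * M, force)
                     for w in omegas])
```

In a Biot-type model, K_fn would assemble the frequency-dependent contribution of each damping layer from its fitted constants; here it is left as an arbitrary user-supplied function.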