956 results for Interior point methods


Relevance: 30.00%

Publisher:

Abstract:

High dimensional biomimetic informatics (HDBI) is a novel theory of informatics developed in recent years. Its primary objects of study are points in high dimensional Euclidean space, and its exploratory and solution procedures are based on simple geometric computations. However, the mathematical description and computation of geometric objects are inconvenient because of the characteristics of geometry; as the dimension grows and geometric objects become more varied, these descriptions become increasingly complicated and verbose, especially in high dimensional space. In this paper, we give some definitions and mathematical symbols and systematically discuss symbolic computing methods in high dimensional space from the viewpoint of HDBI. With these methods, some multi-variable problems in high dimensional space can be solved easily. Three detailed algorithms are presented as examples of the efficiency of our symbolic computing methods: an algorithm for determining the center of a circle from three points on the circle, an algorithm for judging whether two points lie on the same side of a hyperplane, and an algorithm for judging whether a point lies in a simplex constructed from points in high dimensional space. Two experiments, in blurred image restoration and uneven lighting image correction, are presented for these algorithms to demonstrate their good behavior.
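Two of the decisions above have compact coordinate-form sketches. The paper's symbolic computing methods are not reproduced here; this is a minimal numeric illustration of the same-side-of-a-hyperplane and point-in-simplex tests, with hypothetical inputs:

```python
import numpy as np

def same_side(n, b, p, q):
    """Test whether points p and q lie on the same side of the
    hyperplane {x : n.x + b = 0} in R^d (sign of the signed distance)."""
    return (np.dot(n, p) + b) * (np.dot(n, q) + b) > 0

def in_simplex(vertices, p, tol=1e-12):
    """Test whether p lies in the simplex spanned by d+1 vertices in R^d,
    via barycentric coordinates (solve a d x d linear system)."""
    V = np.asarray(vertices, dtype=float)
    A = (V[1:] - V[0]).T                  # columns: edge vectors from vertex 0
    lam = np.linalg.solve(A, np.asarray(p, dtype=float) - V[0])
    return bool(np.all(lam >= -tol) and lam.sum() <= 1 + tol)

# 4-dimensional examples (hypothetical data)
n, b = np.array([1.0, -1.0, 2.0, 0.5]), -1.0
print(same_side(n, b, [3, 0, 0, 0], [2, 0, 0, 0]))   # True: both on positive side

simplex = np.vstack([np.zeros(4), np.eye(4)])        # standard 4-simplex
print(in_simplex(simplex, [0.1, 0.2, 0.3, 0.1]))     # True
print(in_simplex(simplex, [0.5, 0.5, 0.5, 0.5]))     # False (coords sum to 2)
```

The barycentric formulation works unchanged in any dimension, which is the point the paper makes about coordinate-free symbolic treatment of high dimensional geometry.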

Abstract:

A series of new single-step methods, and corresponding algorithms with automatic step size adjustment, for the model equations of fiber Raman amplifiers are proposed and compared in this paper. On the basis of the Newton-Raphson method, multiple shooting algorithms are constructed for the two-point boundary value problems involved in solving Raman amplifier propagation equations. A validation example shows that, compared with traditional Runge-Kutta methods, the proposed methods can increase the accuracy by more than two orders of magnitude under the same conditions. Simulations of Raman amplifier propagation equations demonstrate that our methods can increase the computing speed more than 5-fold, extend the step size significantly, and improve stability in comparison with the Dormand-Prince method. The numerical results show that the combination of the multiple shooting algorithms and the proposed methods can rapidly and effectively solve the model equations of multipump Raman amplifiers under various conditions, such as co-, counter- and bi-directionally pumped schemes, as well as dual-order pumped schemes.
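The shooting idea behind these algorithms can be sketched on a toy two-point boundary value problem; this uses single-interval shooting with a secant update standing in for Newton-Raphson, and does not reproduce the amplifier equations themselves:

```python
import numpy as np

def rk4(f, y0, t0, t1, n=200):
    """Integrate y' = f(t, y) from t0 to t1 with classical Runge-Kutta."""
    h, t, y = (t1 - t0) / n, t0, np.asarray(y0, dtype=float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Toy BVP: y'' = -y, y(0) = 0, y(pi/2) = 1  (exact solution y = sin t)
f = lambda t, y: np.array([y[1], -y[0]])
target = 1.0

def miss(slope):
    """Boundary mismatch at t = pi/2 when shooting with y'(0) = slope."""
    return rk4(f, [0.0, slope], 0.0, np.pi / 2)[0] - target

# Secant iteration on the mismatch (a derivative-free Newton stand-in)
s0, s1 = 0.5, 2.0
for _ in range(20):
    m0, m1 = miss(s0), miss(s1)
    if abs(m1) < 1e-12:
        break
    s0, s1 = s1, s1 - m1 * (s1 - s0) / (m1 - m0)

print(round(s1, 6))   # ~1.0, the exact initial slope y'(0) = 1
```

Multiple shooting applies the same mismatch-and-update loop on several subintervals at once, which is what makes it robust for the stiff, strongly coupled pump-signal equations described in the abstract.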

Abstract:

The fully relaxed single-bond torsional potentials and orientation-related rotational potentials of 2,2'-bithiophene (BT) under an external electric field (EF) constructed from point charges have been evaluated with semi-empirical AM1 and PM3 calculations. The torsional potentials are sensitive to both EF strength and direction. When the EF is parallel to the molecular long axis, the torsional barrier around the inter-ring C-C' bond rises markedly with increasing EF strength, whereas the relative energies of the syn and anti minima change only slightly. The interaction between the EF and the induced dipole moment is proposed to explain this observation. On the other hand, when the EF is perpendicular to the molecular long axis, the relative energy difference between the syn and anti minima changes markedly. This feature is ascribed to the interaction between the EF and the permanent dipole moment of BT. Furthermore, two-dimensional conformational and orientational analyses have been carried out by varying the torsional and rotational angles in different EFs. The conformation and orientation of gas-phase BT in an EF are governed by both of the above factors.
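The proposed mechanism reduces to the classical field-dipole energies U = -μ·E (permanent term) and -½ E·α·E (induced term). A minimal numeric sketch follows; the dipole and polarizability values are purely hypothetical, not computed BT properties:

```python
import numpy as np

# Interaction energy of a molecule with a uniform field E:
#   U = -mu . E  (permanent dipole)  -  0.5 * E . alpha . E  (induced dipole)
# The numbers below are illustrative placeholders, not BT data.
mu = np.array([0.0, 0.8, 0.0])        # hypothetical permanent dipole (a.u.),
                                      # along the short molecular axis
alpha = np.diag([90.0, 60.0, 30.0])   # hypothetical polarizability tensor (a.u.)

def interaction_energy(E):
    return -np.dot(mu, E) - 0.5 * E @ alpha @ E

E_parallel = np.array([0.005, 0.0, 0.0])   # field along the molecular long axis
E_perp     = np.array([0.0, 0.005, 0.0])   # field perpendicular to it

# Parallel field: mu is orthogonal to E, so only the induced term contributes.
print(interaction_energy(E_parallel))
# Perpendicular field: the permanent-dipole term -mu.E now dominates.
print(interaction_energy(E_perp))
```

This reproduces the qualitative split the abstract describes: a parallel field couples mainly through the induced dipole, a perpendicular field mainly through the permanent dipole.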

Abstract:

Conformational analysis of 2,2'-bithiophene (BT) under the influence of an electric field (EF) constructed from point charges has been performed using semi-empirical Austin Model 1 (AM1) and Parametric Method 3 (PM3) calculations. When an EF perpendicular to the molecular conjugation chain is applied, both AM1 and PM3 calculations show an energy increase for the anti-conformation. AM1 predicts that the global minimum shifts to the syn-conformation when the EF strength exceeds a critical value, and PM3 predicts that the local minimum at the anti-conformation vanishes. This EF effect has been ascribed to the interaction between the EF and the dipole moment.

Abstract:

Geophysical inversion is the theory of transforming observational data into corresponding geophysical models. The goal of seismic inversion is not only wave velocity models but also the fine structure and dynamic processes of the Earth's interior, extending to further parameters such as density, anisotropy, and viscosity. Inversion theory is divided into linear and non-linear inversion. Over the past 40 years, linear inversion theory has developed into a complete and systematic theory and has found extensive application in practice, while many urgent problems remain to be solved in non-linear inversion theory and practice. Based on the wave equation, this dissertation is mainly concerned with theoretical research on several non-linear inversion methods: waveform inversion, traveltime inversion, and the joint inversion combining the two. The objective of gradient waveform inversion is to find a geologic model such that the synthetic seismograms it generates best fit the observed seismograms. Compared with other inverse methods, waveform inversion uses all characteristics of the waveform and offers high resolution. However, waveform inversion is an interface-by-interface method, and an artificial parameter limit must be provided in each inversion iteration. In addition, waveform inversion tends to get stuck in local minima if the starting model is too far from the actual model. Building on velocity scanning from traditional seismic data processing, a layer-by-layer waveform inversion method is developed in this dissertation to address these weaknesses. In wave-equation traveltime inversion (WT), the wave equation is used to calculate the traveltime and its derivative (the perturbation of traveltime with respect to velocity). Unlike traditional ray-based traveltime inversion, WT has many advantages: no ray tracing, traveltime picking, or high-frequency assumption is required, and good results can be obtained even when the starting model is far from the real model. Compared with waveform inversion, however, WT has low resolution. Waveform inversion and WT have complementary advantages and similar algorithms, which suggests that their joint inversion is a better inversion method. Another key point this dissertation emphasizes is how to exploit these complementary advantages fully without increasing storage space or the amount of computation. Numerical tests are implemented to demonstrate the feasibility of the inversion methods mentioned above. For gradient waveform inversion in particular, field data are inverted; these data were acquired by our group in Wali park and Shunyi district. Processing of the real data shows that waveform inversion faces many difficulties with field data, the two primary ones being matching synthetic seismograms to observed seismograms and noise cancellation. In conclusion, building on previous experience, this dissertation has implemented waveform inversion based on the acoustic and elastic wave equations, traveltime inversion based on the acoustic wave equation, and traditional combined waveform-traveltime inversion. Beyond the traditional analysis of inversion theory, there are two innovations: layer-by-layer inversion of seismic reflection data and a rapid method for acoustic wave-equation joint inversion.
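The gradient waveform inversion loop can be sketched on a toy one-parameter problem; the geometry, pulse shape, and step-size rule here are hypothetical, and the small starting error mirrors the local-minimum (cycle-skipping) caveat discussed above:

```python
import numpy as np

def pulse(t, t0, sigma=0.05):
    return np.exp(-((t - t0) / sigma) ** 2)

# Toy 1-parameter waveform inversion: recover a velocity from a single
# arrival.  All values are made up for illustration.
d = 1000.0                       # source-receiver distance (m)
v_true = 2000.0                  # "unknown" true velocity (m/s)
t = np.arange(0.0, 1.0, 0.002)
obs = pulse(t, d / v_true)       # observed seismogram (arrival at d/v_true)

def misfit(v):
    """L2 waveform misfit between synthetic and observed traces."""
    return float(np.sum((pulse(t, d / v) - obs) ** 2))

v = 1900.0                       # starting model must be close to the truth,
                                 # otherwise waveform inversion cycle-skips
for _ in range(100):
    g = (misfit(v + 1.0) - misfit(v - 1.0)) / 2.0   # numeric gradient
    step, J0 = 1.0e4, misfit(v)
    while misfit(v - step * g) >= J0 and step > 1e-8:
        step *= 0.5              # backtracking: only accept downhill moves
    v -= step * g

print(round(v))                  # converges to ~2000 m/s
```

A real implementation computes the gradient with the adjoint method over millions of grid parameters rather than by finite differences, but the descend-on-waveform-misfit structure is the same.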

Abstract:

Based on fractal theory, the contraction mapping principle, and fixed point theory, and by means of affine transforms, this dissertation develops a novel Explicit Fractal Interpolation Function (EFIF) that can be used to reconstruct seismic data with high fidelity and precision. Spatial trace interpolation is one of the important issues in seismic data processing. Under ideal circumstances, seismic data should be sampled with uniform spatial coverage; in practice, constraints such as complex surface conditions mean that the sampling density may be sparse, or traces may be lost for other reasons. Wide spacing between receivers can result in sparse sampling along traverse lines and thus in spatial aliasing of short-wavelength features. Hence the interpolation method is of great importance: it must preserve not only the amplitude information but also the phase information, especially at points where the phase changes sharply. Several interpolation methods have been proposed; this dissertation focuses on a special class of fractal interpolation functions, referred to as explicit fractal interpolation functions, to improve the accuracy of the interpolation reconstruction and to bring out local information. The traditional fractal interpolation method is mainly based on the random fractional Brownian motion (fBm) model; furthermore, the vertical scaling factor, which plays a critical role in the implementation of fractal interpolation, is assigned the same value throughout the interpolation process, so local information cannot be brought out. In addition, the main defect of the traditional fractal interpolation method is that it cannot produce the function values at the interpolating nodes, so the node error cannot be analyzed quantitatively and the feasibility of the method cannot be evaluated. 
Detailed discussion of the applications of fractal interpolation in seismology has not been given previously, let alone of the interpolation of single-trace seismograms. Building on previous work and fractal theory, this dissertation discusses fractal interpolation thoroughly, analyzes the stability of this special kind of interpolating function, and proposes an explicit expression for the vertical scaling factor, which controls the precision of the interpolation. This novel method extends the traditional fractal interpolation method, converting fractal interpolation with random algorithms into interpolation with deterministic algorithms. A binary tree data structure is applied during the interpolation, which avoids the iteration that is inevitable in traditional fractal interpolation and improves computational efficiency. To illustrate the validity of the novel method, several theoretical models are developed; common shot gathers and seismograms are synthesized, and traces erased from the initial section are reconstructed using the explicit fractal interpolation method. To compare quantitatively the waveform and amplitude differences between the theoretical traces erased from the initial section and the traces reconstructed afterwards, each missing trace is reconstructed and the residuals are analyzed. The numerical experiments demonstrate that the novel fractal interpolation method is applicable not only to seismograms with small offsets but also to those with large offsets. The seismograms reconstructed by the explicit fractal interpolation method closely resemble the original ones: the waveforms of the missing traces are estimated very well, and the amplitudes of the interpolated traces are a good approximation of the originals. 
The high precision and computational efficiency of explicit fractal interpolation make it a useful tool for reconstructing seismic data: it not only brings out local information but also preserves the overall characteristics of the object investigated. To illustrate the influence of the explicit fractal interpolation method on the accuracy of imaging structure in the Earth's interior, this dissertation applies the method to reverse-time migration. Imaging sections obtained using fractally interpolated reflection data closely resemble the original ones. The numerical experiments demonstrate that, even with sparse sampling, highly accurate imaging of the structure of the Earth's interior can still be obtained by means of the explicit fractal interpolation method, so imaging results of fine quality can be obtained with a relatively small number of seismic stations. With the fractal interpolation method, the efficiency and accuracy of reverse-time migration can be improved under economic constraints. To verify the method's effectiveness on real data, we tested it using data provided by the Broadband Seismic Array Laboratory, IGGCAS. The results demonstrate that the accuracy of explicit fractal interpolation remains very high even for real data with large epicentral distances and offsets: the amplitudes and phases of the reconstructed station data closely resemble the originals that were erased from the initial section. Altogether, the novel fractal interpolation function provides a new and useful tool for reconstructing seismic data with high precision and efficiency, and presents an alternative for accurately imaging the deep structure of the Earth.
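A deterministic fractal interpolation of the kind discussed above can be sketched with Barnsley's affine iterated function system; the node data and vertical scaling factors below are hypothetical, and this is a generic construction, not the dissertation's EFIF:

```python
import numpy as np

# Fractal interpolation function as an affine IFS: each map w_i sends the
# whole graph onto the piece over [x_{i-1}, x_i].  The vertical scaling
# factors d_i (|d_i| < 1) control the local roughness of the curve.
x = np.array([0.0, 1.0, 2.0, 3.0])     # interpolation nodes (hypothetical data)
y = np.array([0.0, 1.5, 0.5, 2.0])
d = np.array([0.3, -0.4, 0.3])         # one vertical scaling factor per map

N = len(x) - 1
dx = x[N] - x[0]
a = (x[1:] - x[:-1]) / dx
e = (x[N] * x[:-1] - x[0] * x[1:]) / dx
c = (y[1:] - y[:-1] - d * (y[N] - y[0])) / dx
f = (x[N] * y[:-1] - x[0] * y[1:] - d * (x[N] * y[0] - x[0] * y[N])) / dx

# Iterate the maps on the node set; the point cloud converges to the graph
# of the interpolation function (deterministic, no random chaos game).
pts = np.column_stack([x, y])
for _ in range(7):
    pts = np.vstack([np.column_stack([a[i] * pts[:, 0] + e[i],
                                      c[i] * pts[:, 0] + d[i] * pts[:, 1] + f[i]])
                     for i in range(N)])

# The attractor passes through every interpolation node exactly.
for xi, yi in zip(x, y):
    j = np.argmin(np.abs(pts[:, 0] - xi))
    assert abs(pts[j, 1] - yi) < 1e-9
print(len(pts))    # 4 * 3**7 = 8748 points on the graph
```

The coefficients are fixed by requiring w_i(x0, y0) = (x_{i-1}, y_{i-1}) and w_i(xN, yN) = (x_i, y_i); making d_i explicit and node-dependent, rather than a single global constant, is the lever the dissertation's explicit formulation exploits.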

Abstract:

Large earthquakes, such as the 1960 Chile earthquake and the December 26, 2004 Sumatra-Andaman earthquake in Indonesia, have excited the Earth's free oscillations. The eigenfrequencies of the Earth's free oscillations are closely related to the Earth's internal structure. Conventional approaches, which mainly calculate the eigenfrequencies analytically or analyze observations, cannot easily follow the whole process from earthquake occurrence to the excitation of the Earth's free oscillations. We therefore use numerical methods combined with large-scale parallel computing to study the Earth's free oscillations excited by giant earthquakes. We first review research on, and the development of, the Earth's free oscillations and the basic theory in a spherical coordinate system. We then review the numerical simulation of seismic wave propagation and the basic theory of the spectral element method for simulating global seismic wave propagation. As a first step, we use a finite element method to simulate the propagation of elastic waves and the generation of oscillations in the chime bell of Marquis Yi of Zeng, which has an oval cross-section, by striking different parts of the bell. The bronze chime bells of Marquis Yi of Zeng are precious cultural relics of China. The bells have a two-tone acoustic characteristic: striking different parts of a bell generates different tones. Through analysis of the vibration in the bell and of its spectrum, we further the understanding of the mechanism of the two-tone acoustic characteristic of the chime bell of Marquis Yi of Zeng. 
The preliminary calculations clearly show that two different modes of oscillation can be generated by striking different parts of the bell, and indicate that finite element simulation of the wave propagation and two-tone generation of the chime bell of Marquis Yi of Zeng is feasible. These analyses provide a new quantitative and visual way to explain the mystery of the two-tone acoustic characteristic. The method suggested by this study can be applied to simulate free oscillations excited by great earthquakes in a complex Earth structure. Given the large scale of the Earth's structure, small-scale, low-precision numerical simulation cannot meet the requirements. Exploiting the increasing capacity of high-performance parallel computing and progress on fully numerical solutions for seismic wave fields in realistic three-dimensional spherical models, we combined the spectral element method with high-performance parallel computing to simulate seismic wave propagation in the Earth's interior, without the effects of the Earth's gravitational potential. The simulations show that the toroidal modes of our calculation agree well with the theoretical values, although the accuracy of our results is limited and the calculated peaks are slightly distorted by three-dimensional effects. Large differences exist between our calculated spheroidal-mode values and the theoretical ones because the Earth's gravitation is not included in the numerical model, which makes our values smaller than the theoretical ones. For small angular orders, the Earth's gravitation shortens the periods of the spheroidal modes. 
Although we cannot yet include the effects of the Earth's gravitational potential in the numerical model when simulating the spheroidal oscillations, the results still demonstrate that numerical simulation of the Earth's free oscillations is feasible. We simulate the Earth's free oscillations in a spherically symmetric Earth model with different special source mechanisms. The results show quantitatively that the Earth's free oscillations excited by different earthquakes differ, and that, for the free oscillation excited by the same earthquake, the oscillations at different locations differ. We also explore how attenuation in the Earth's medium affects the free oscillations and compare the results with observations. Medium attenuation influences the Earth's free oscillations, though its effect on the lower-frequency fundamental oscillations is weak. Finally, taking the 2008 Wenchuan earthquake as an example, we employ the spectral element method together with large-scale parallel computing to investigate the characteristics of the seismic wave propagation it excited. We calculate synthetic seismograms with a one-point source model and a three-point source model. Full 3-D visualization of the numerical results displays the seismic wave propagation as a function of time. The three-point source, proposed by recent investigations based on field observation and inverse estimation, demonstrates the spatial and temporal characteristics of the source rupture process better than the one-point source. Preliminary results show that the synthetic signals calculated from the three-point source agree well with the observations. This further indicates that the source rupture of the Wenchuan earthquake was a multi-rupture process composed of at least three stages. 
In conclusion, numerical simulation can not only handle problems involving the Earth's ellipticity and anisotropy, which can also be treated by conventional methods, but can ultimately also solve problems involving topography and lateral heterogeneity. We will seek to fully implement self-gravitation in the spectral element method in the future, and will continue to study the Earth's free oscillations with numerical simulations to see how the Earth's lateral heterogeneity affects them. This will make it possible to bring modal spectral data increasingly to bear on furthering our understanding of the Earth's three-dimensional structure.
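As a drastically simplified stand-in for the simulations above (1-D finite differences rather than 3-D spectral elements), the following sketch propagates the fundamental mode of a fixed-end string and recovers its oscillation period; at Courant number 1 the leapfrog scheme reproduces the continuous solution on the grid:

```python
import numpy as np

# 1-D wave equation u_tt = c^2 u_xx on a string with fixed ends: a toy
# illustration of simulating oscillation modes numerically.
L, c, nx = 1.0, 1.0, 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = dx / c                      # Courant number 1: leapfrog is exact here

# Fundamental standing mode, u(x, t) = sin(pi x / L) cos(pi c t / L)
u_prev = np.sin(np.pi * x / L) * np.cos(-np.pi * c * dt / L)  # u at t = -dt
u = np.sin(np.pi * x / L)                                     # u at t = 0

period_steps = round(2 * L / c / dt)   # one full period T = 2L/c
for _ in range(period_steps):
    u_next = np.empty_like(u)
    u_next[1:-1] = u[2:] + u[:-2] - u_prev[1:-1]   # leapfrog update at C = 1
    u_next[0] = u_next[-1] = 0.0                   # fixed ends
    u_prev, u = u, u_next

# After one period the mode returns to its initial shape (round-off only).
print(float(np.max(np.abs(u - np.sin(np.pi * x / L)))))
```

Global normal-mode simulation replaces this string with a 3-D spherical mesh and the spectral element discretization, but the principle of recovering mode periods from time stepping is the same.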

Abstract:

This paper reviews the fingerprint classification literature from a dual perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. We then focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction and learning methods are presented. A critical view of the existing literature has led us to a discussion of the existing methods and their drawbacks, such as the difficulty of reimplementing them, lack of detail, or major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
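One common model for singular point detection from an orientation map is the Poincaré index; here is a minimal sketch on a synthetic orientation field (not one of the paper's evaluated implementations):

```python
import numpy as np

def poincare_index(theta, i, j):
    """Poincare index of an orientation field (angles defined mod pi)
    around the 8-neighborhood of pixel (i, j): +1/2 flags a core."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [theta[i + di, j + dj] for di, dj in ring]
    total = 0.0
    for k in range(8):
        d = angles[(k + 1) % 8] - angles[k]
        while d > np.pi / 2:                 # wrap differences into
            d -= np.pi                       # (-pi/2, pi/2], since ridge
        while d < -np.pi / 2:                # orientations are mod pi
            d += np.pi
        total += d
    return total / (2 * np.pi)

# Synthetic orientation map with a core-type singularity at the center.
n = 21
yy, xx = np.mgrid[0:n, 0:n] - n // 2
theta = 0.5 * np.arctan2(yy, xx)   # ridge orientations, defined mod pi

print(round(poincare_index(theta, n // 2, n // 2), 2))  # 0.5 -> core
print(round(poincare_index(theta, 3, 3), 2))            # ~0  -> regular point
```

Classifiers in the reviewed taxonomy then use the number and arrangement of detected cores and deltas (index -1/2) to assign the fingerprint class.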

Abstract:

The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of traditional approaches to fisheries management. Quantitative models are used to obtain estimates of population abundance, and management advice is based on annual harvest levels (total allowable catch, TAC), whereby only a certain amount of catch is allowed from specific fish stocks. However, these models are data intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and can be affected by disturbances with both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful for classifying these effects, and they can therefore be used to trigger a management response only when a significant impact on the stock biomass occurs. This thesis explores how empirical indicators, along with CUSUM, can be used for the monitoring, assessment and management of fish stocks. I begin by exploring various age-based catch indicators to identify those that are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators to changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected by the fishing gear, or Large Fish Indicators (LFIs), are the most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control charts used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (the 'control mean') for detecting out-of-control (significant impact) situations. 
The sensitivity and specificity of the SS-CUSUM showed that its performance is robust when LFIs are used. Once an out-of-control situation is detected, the next step is to determine how large a shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporation into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which can measure the shift size in stock biomass most accurately. Results showed that methods based on Grubbs' harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme would be useful for sustaining the initial in-control state of a fish stock until more observations become available for quantitative assessment.
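A tabular decision-interval CUSUM of the kind used in the thesis can be sketched in a few lines; the indicator series and chart parameters below are made up for illustration:

```python
# Tabular decision-interval CUSUM: accumulate deviations from a target
# (beyond an allowance k) and alarm when the sum exceeds a decision
# interval h.  Values are illustrative, not fisheries data.
def di_cusum(series, target, k=0.5, h=4.0):
    """Return the index of the first out-of-control signal, else None."""
    c_pos, c_neg = 0.0, 0.0
    for i, x in enumerate(series):
        c_pos = max(0.0, c_pos + (x - target) - k)   # upward drift
        c_neg = max(0.0, c_neg + (target - x) - k)   # downward drift
        if c_pos > h or c_neg > h:
            return i
    return None

# In-control at the target for 20 samples, then a persistent downward
# shift (e.g. a drop in a large-fish indicator) of one unit.
indicator = [10.0] * 20 + [9.0] * 20
print(di_cusum(indicator, target=10.0))   # 28: alarm ~9 samples after the shift
```

The allowance k sets how large a sustained deviation is tolerated, and h trades detection speed against false alarms; the self-starting variant replaces the fixed target with a running estimate built from the series itself.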

Abstract:

BACKGROUND: Sensor-augmented pump therapy (SAPT) integrates real-time continuous glucose monitoring (RT-CGM) with continuous subcutaneous insulin infusion (CSII) and offers an alternative to multiple daily injections (MDI). Previous studies provide evidence that SAPT may improve clinical outcomes among people with type 1 diabetes. Sensor-Augmented Pump Therapy for A1c Reduction (STAR) 3 is a multicenter randomized controlled trial comparing the efficacy of SAPT with that of MDI in subjects with type 1 diabetes. METHODS: Subjects were randomized either to continue with MDI or to transition to SAPT for 1 year. Subjects in the MDI cohort were allowed to cross over to SAPT for 6 months after completion of the study, and SAPT subjects who completed the study were also allowed to continue for 6 months. The primary end point was the between-group difference in the change in hemoglobin A1c (HbA1c) percentage from baseline to 1 year of treatment. Secondary end points included the percentage of subjects with HbA1c ≤7% and without severe hypoglycemia, as well as the area under the curve of time spent in normal glycemic ranges. Tertiary end points included the percentage of subjects with HbA1c ≤7%, key safety end points, user satisfaction, and responses on standardized assessments. RESULTS: A total of 495 subjects were enrolled, and baseline characteristics were similar between the SAPT and MDI groups. Study completion is anticipated in June 2010. CONCLUSIONS: Results of this randomized controlled trial should help establish whether an integrated RT-CGM and CSII system benefits patients with type 1 diabetes more than MDI.

Abstract:

Background: Acute febrile respiratory illnesses, including influenza, account for a large proportion of ambulatory care visits worldwide. In the developed world, these encounters commonly result in unwarranted antibiotic prescriptions; data from more resource-limited settings are lacking. The purpose of this study was to describe the epidemiology of influenza among outpatients in southern Sri Lanka and to determine if access to rapid influenza test results was associated with decreased antibiotic prescriptions.

Methods: In this pretest-posttest study, consecutive patients presenting from March 2013 to April 2014 to the Outpatient Department of the largest tertiary care hospital in southern Sri Lanka were surveyed for influenza-like illness (ILI). Patients meeting the World Health Organization criteria for ILI (acute onset of fever ≥38.0°C and cough in the prior 7 days) were enrolled. Consenting patients were administered a structured questionnaire, physical examination, and nasal/nasopharyngeal sampling. Rapid influenza A/B testing (Veritor System, Becton Dickinson) was performed on all patients, but test results were only released to patients and clinicians during the second phase of the study (December 2013 to April 2014).

Results: We enrolled 397 patients with ILI, with 217 (54.7%) adults ≥12 years and 188 (47.4%) females. A total of 179 (45.8%) tested positive for influenza by rapid testing, with April to July 2013 and September to November 2013 being the periods with the highest proportion of ILI due to influenza. A total of 310 (78.1%) patients with ILI received a prescription for an antibiotic from their outpatient provider. The proportion of patients prescribed antibiotics decreased from 81.4% in the first phase to 66.3% in the second phase (p=.005); among rapid influenza-positive patients, antibiotic prescriptions decreased from 83.7% in the first phase to 56.3% in the second phase (p=.001). On multivariable analysis, availability of a positive rapid influenza test result to clinicians was associated with decreased antibiotic use (OR 0.20, 95% CI 0.05-0.82).

Conclusions: Influenza virus accounted for almost 50% of acute febrile respiratory illness in this study, but most patients were prescribed antibiotics. Providing rapid influenza test results to clinicians was associated with fewer antibiotic prescriptions, but overall prescription of antibiotics remained high. In this developing country setting, a multi-faceted approach that includes improved access to rapid diagnostic tests may help decrease antibiotic use and combat antimicrobial resistance.

Abstract:

Histopathology is the clinical standard for tissue diagnosis. However, histopathology has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to render a diagnosis. Additionally, the diagnosis is qualitative, and the lack of quantitation can lead to observer-specific diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid, low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high quality microscopic images of the tissue must be obtained in rapid timeframes, in order for a pathologic assessment to be useful for guiding the intervention. Optical microscopy is a powerful technique to obtain high-resolution images of tissue morphology in real-time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e. unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high resolution imaging of tissue morphology through employing fluorescence microscopy and vital fluorescent stains and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, which will enable automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated by imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA; it also shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine-positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density and background heterogeneity was demonstrated through simulations; specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased APF density in tumor and tumor + muscle images compared with images containing muscle. Next, variables quantified from images of resected primary sarcomas were used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively.
The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after sarcoma resection, with local recurrence as the benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for detecting local recurrence were 78% and 82%, respectively. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.

Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin this way. Thus, improvements were made to the microscopic imaging system to (1) improve image contrast by rejecting out-of-focus background fluorescence and (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed, in which the entire FOV is illuminated with a defined spatial pattern rather than by scanning a focal spot, as in confocal microscopy.
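One common way a grid pattern rejects out-of-focus background is three-phase demodulation: acquire three images with the illumination grid shifted by 120° and recombine them so that the unmodulated (out-of-focus) component cancels. The abstract does not state which reconstruction this SIM system uses, so the following is an illustrative sketch of that standard scheme only:

```python
import math

# Three-phase structured-illumination demodulation: the in-focus signal is
# modulated by the grid, while out-of-focus background is (to first order)
# identical in all three phase images and cancels in the differences.
def sim_section(i0, i120, i240):
    """Recover the modulation amplitude at one pixel from three phase images."""
    return math.sqrt((i0 - i120) ** 2 + (i120 - i240) ** 2
                     + (i240 - i0) ** 2) * math.sqrt(2) / 3

# A pixel seeing only unmodulated background gives zero:
print(sim_section(10.0, 10.0, 10.0))  # 0.0
# A pixel with background 10 and in-focus modulation amplitude 4:
a, bg = 4.0, 10.0
phases = [bg + a * math.cos(p) for p in (0, 2 * math.pi / 3, 4 * math.pi / 3)]
print(sim_section(*phases))  # ≈ 4.0, the modulation amplitude
```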

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared with uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results of SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13X larger than the FOV of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption underlying SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, the frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error associated with MSER segmentation.

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75x75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region, yielding an output in terms of the probability (0-100%) that tumor was located within each 75x75 µm region. The model performance was tested using a receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and the tissue-type classification model was applied to each. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded the threshold in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
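The margin-level decision described above reduces to counting localized regions whose tumor probability exceeds the 50% threshold. A sketch with invented region probabilities (not the study's data):

```python
# Fraction of localized regions flagged as tumor at a given probability
# threshold. Probabilities below are made up for illustration.
def fraction_flagged(region_probs, threshold=0.5):
    flagged = sum(1 for p in region_probs if p >= threshold)
    return flagged / len(region_probs)

negative_margin = [0.1, 0.2, 0.05, 0.6, 0.3, 0.15, 0.25, 0.1, 0.2, 0.3]  # one region above threshold
positive_margin = [0.9, 0.7, 0.2, 0.55, 0.3, 0.8, 0.1, 0.4, 0.65, 0.35]

print(fraction_flagged(negative_margin))  # 0.1
print(fraction_flagged(positive_margin))  # 0.5
```

A margin is then called positive when its flagged fraction exceeds the rate of false-positive regions expected in negative margins.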

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false positive regions that were negative for RFP. One approach to improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Specifically, tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. The results indicate that tetracycline staining holds promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 µm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 µm in diameter. Spectra collected with a microscope from eight points on a 200 µm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, µr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 µm spot diameter), combined eight-grid-point data gave µr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 µm spot was too large to be due entirely to the increased spot diameter but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level.
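The core of the sampling-error argument is that averaging spectra from more sampling points shrinks the spread of the measured band ratio, roughly as 1/√N for independent spots. A toy Monte Carlo sketch of that behaviour, using illustrative values for the single-spot mean and standard deviation (not the paper's fitted model):

```python
import random
import statistics

# Each sampled spot sees a band intensity ratio drawn from a wide
# single-spot distribution; averaging N grid points narrows the spread.
random.seed(1)
mu_single, sigma_single = 1.4, 1.2  # illustrative values only

def tablet_ratio(n_points):
    """Simulate the averaged ratio from one tablet sampled at n_points."""
    return statistics.fmean(random.gauss(mu_single, sigma_single)
                            for _ in range(n_points))

for n in (1, 8, 64):
    ratios = [tablet_ratio(n) for _ in range(2000)]
    print(n, round(statistics.stdev(ratios), 3))  # spread shrinks ~1/sqrt(n)
```

The paper's model goes further: the macro system's larger spot and depth of focus increase the sampled volume per point, which is why its σr dropped more than spot area alone would predict.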
The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model based on averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
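A univariate calibration of this kind reduces to an ordinary least-squares line, with R² assessed on the fit and the r.m.s. standard error of prediction expressed in concentration units. A sketch with invented ratio data (the abstract does not give the calibration values themselves):

```python
# Univariate calibration: band ratio vs. % MDEA by mass. Data invented.
xs = [0, 5, 10, 15, 20, 25, 30]                   # % MDEA (model tablets)
ys = [0.02, 0.24, 0.51, 0.73, 1.02, 1.24, 1.49]   # averaged 64-point ratios

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# Goodness of fit in ratio units, prediction error back in % MDEA units.
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - ybar) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
pred_x = [(y - intercept) / slope for y in ys]     # inverse prediction
rmsep = (sum((px - x) ** 2 for px, x in zip(pred_x, xs)) / n) ** 0.5

print(f"R^2 = {r2:.3f}, RMSEP = {rmsep:.2f} % MDEA")
```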

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Aims. A magneto-hydrostatic model is constructed with spectropolarimetric properties close to those of solar photospheric magnetic bright points.
Methods. Results of solar radiative magneto-convection simulations are used to produce the spatial structure of the vertical component of the magnetic field. The horizontal component of the magnetic field is reconstructed using the self-similarity condition, while the magneto-hydrostatic equilibrium condition is applied to the standard photospheric model with the magnetic field embedded. Partial ionisation processes are found to be necessary for reconstructing the correct temperature structure of the model.
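In its simplest lateral form, the magneto-hydrostatic equilibrium condition invoked above requires the external gas pressure to balance the internal gas plus magnetic pressure, p_ext = p_int + B²/(2µ0). A sketch with order-of-magnitude photospheric values (assumed for illustration, not taken from the model):

```python
import math

# Lateral pressure balance for a magnetic flux concentration embedded in
# field-free surroundings: p_ext = p_int + B^2 / (2*mu0), in SI units.
MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def internal_pressure(p_ext, B):
    """Gas pressure inside the tube given external pressure (Pa) and field (T)."""
    return p_ext - B ** 2 / (2 * MU0)

p_ext = 1.2e4  # Pa, rough quiet-photosphere gas pressure
B = 0.15       # T (1.5 kG, a typical bright-point field strength)
print(internal_pressure(p_ext, B))  # ~3e3 Pa: evacuated relative to surroundings
```

The reduced internal pressure (and hence density) is what depresses the continuum formation level inside the bright point relative to the quiet photosphere.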
Results. The structures obtained are in good agreement with observational data. By combining the realistic structure of the magnetic field with the temperature structure of the quiet solar photosphere, the continuum formation level above the equipartition layer can be found. Preliminary results are shown of wave propagation through this magnetic structure. The observational consequences of the oscillations are examined in continuum intensity and in the magnetically sensitive Fe I 6302 Å line.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BACKGROUND: In 2005, the European Commission recommended that all member states should establish or strengthen surveillance systems for monitoring the use of antimicrobial agents. There is no evidence in the literature of any surveillance studies having been specifically conducted in nursing homes (NHs) in Northern Ireland (NI).

OBJECTIVE: The aim of this study was to determine the prevalence of antimicrobial prescribing and its relationship with certain factors (e.g. indwelling urinary catheterization, urinary incontinence, disorientation, etc.) in NH residents in NI.

METHODS: This project was carried out in NI as part of a wider European study under the protocols of the European Surveillance of Antimicrobial Consumption group. Two point-prevalence surveys (PPSs) were conducted in 30 NHs in April and November 2009. Data were obtained from nursing notes, medication administration records and staff in relation to antimicrobial prescribing, facility and resident characteristics and were analysed descriptively.

RESULTS: The point prevalence of antimicrobial prescribing was 13.2% in April 2009 and 10.7% in November 2009, with a 10-fold difference between the NHs with the highest and the lowest antimicrobial prescribing prevalence in both PPSs. The same NH had the highest rate of antimicrobial prescribing in both April (30.6%) and November (26.0%). The group of antimicrobials most commonly prescribed was the penicillins (April 28.6%, November 27.5%), whilst the most prevalent individual antimicrobial prescribed was trimethoprim (April 21.3%, November 24.3%). The majority of antimicrobials were prescribed for the prevention of urinary tract infections (UTIs) in both April (37.8%) and November (46.7%), with 5% of all participating residents being prescribed an antimicrobial for this reason. Some (20%) antimicrobials were prescribed at inappropriate doses, particularly those used for the prevention of UTIs. Indwelling urinary catheterization and wounds were significant risk factors for antimicrobial use in April [odds ratio (OR) (95% CI) 2.0 (1.1, 3.5) and 1.8 (1.1, 3.0), respectively] but not in November 2009 [OR (95% CI) 1.6 (0.8, 3.2) and 1.2 (0.7, 2.2), respectively]. Other resident factors, e.g. disorientation, immobility and incontinence, were not associated with antimicrobial use. Furthermore, none of the NH characteristics investigated (e.g. number of beds, hospitalization episodes, number of general practitioners, etc.) were found to be associated with antimicrobial use in either April or November 2009.
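Odds ratios of this form are typically computed from a 2x2 exposure/outcome table with a Woolf (log-scale) 95% confidence interval. A sketch with invented counts chosen to give an OR of 2.0 (the abstract reports only the resulting ORs, not the underlying tables):

```python
import math

# OR = (a*d)/(b*c) with a Woolf log-scale confidence interval.
# Counts are invented for illustration.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed with/without antimicrobial; c, d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=12, b=28, c=45, d=210)
print(f"OR = {or_:.1f} (95% CI {lo:.2f}, {hi:.2f})")
```

A CI whose lower bound stays above 1 (as for catheterization in April) marks the association as statistically significant; a CI spanning 1 (as in November) does not.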

CONCLUSIONS: This study has identified a high overall rate of antimicrobial use in NHs in NI, with variability evident both within and between homes. More research is needed to understand which factors influence antimicrobial use and to determine the appropriateness of antimicrobial prescribing in this population in general and more specifically in the management of recurrent UTIs.