886 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
Abstract:
2000 Mathematics Subject Classification: 35L15, Secondary 35L30.
Abstract:
A numerical method based on integral equations is proposed and investigated for the Cauchy problem for the Laplace equation in 3-dimensional smooth bounded doubly connected domains. To numerically reconstruct a harmonic function from knowledge of the function and its normal derivative on the outer of two closed boundary surfaces, the harmonic function is represented as a single-layer potential. Matching this representation against the given data, a system of boundary integral equations is obtained to be solved for two unknown densities. This system is rewritten over the unit sphere under the assumption that each of the two boundary surfaces can be mapped smoothly and one-to-one to the unit sphere. For the discretization of this system, Weinert’s method (PhD, Göttingen, 1990) is employed, which generates a Galerkin type procedure for the numerical solution, and the densities in the system of integral equations are expressed in terms of spherical harmonics. Tikhonov regularization is incorporated, and numerical results are included showing the efficiency of the proposed procedure.
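As a rough illustration of the Tikhonov step only (not Weinert's Galerkin discretization or the spherical-harmonic expansion themselves), the sketch below regularizes a generic discretized ill-posed linear system A x = b. The matrix, data, noise level, and regularization parameter are hypothetical placeholders standing in for the discretized boundary integral operator and the Cauchy data.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned system with rapidly decaying singular values,
# mimicking the severe ill-posedness of the Cauchy problem.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((200, 50)))
V, _ = np.linalg.qr(rng.standard_normal((50, 50)))
s = np.logspace(0, -8, 50)
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(50)
b = A @ x_true + 1e-6 * rng.standard_normal(200)   # noisy Cauchy-type data

x_reg = tikhonov_solve(A, b, alpha=1e-6)
print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))  # relative error
```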
Abstract:
This study investigated negative effects on a university student's ability to successfully complete a course in religious studies arising from conflict between the methodologies and objectives of religious studies and the student's system of beliefs. Using Festinger's theory of cognitive dissonance as a theoretical framework, it was hypothesized that completing a course with a high level of success would be negatively affected by (1) failure to accept the methodologies and objectives of religious studies (methodology), (2) holding beliefs about religion that had potential conflicts with those methodologies and objectives (beliefs), (3) extrinsic religiousness, and (4) dogmatism. The causal-comparative method was used. The independent variables were measured with four scales employing Likert-type items. An 8-item scale measuring acceptance of the methodologies and objectives of religious studies and a 16-item scale measuring beliefs about religion with the potential to conflict with those methodologies were developed for this study. These scales, together with a 20-item form of Rokeach's Dogmatism Scale and Feagin's 12-item Religious Orientation Scale to measure extrinsic religiousness, were administered to 144 undergraduate students enrolled in randomly selected religious studies courses at Florida International University. Level of success was determined by course grade, with the 27% of students receiving the highest grades classified as highly successful and the 27% receiving the lowest grades classified as not highly successful. A stepwise discriminant analysis produced a single significant function with methodology and dogmatism as the discriminants. Methodology was the principal discriminating variable. Beliefs and extrinsic religiousness failed to discriminate significantly. It was concluded that failing to accept the methodologies and objectives of religious studies and being highly dogmatic have significant negative effects on a student's success in a religious studies course. Recommendations were made for teaching practices to diminish these negative effects.
Abstract:
Today, many organizations are turning to new approaches to building and maintaining information systems (I/S) to cope with a highly competitive business environment. Current anecdotal evidence indicates that the approaches being used improve the effectiveness of software development by encouraging active user participation throughout the development process. Unfortunately, very little is known about how the use of such approaches enhances the ability of team members to develop I/S that are responsive to changing business conditions. Drawing from predominant theories of organizational conflict, this study develops and tests a model of conflict among members of a development team. The model proposes that development approaches provide the relevant context conditioning the management and resolution of conflict in software development which, in turn, are crucial for the success of the development process. Empirical testing of the model was conducted using data collected through a combination of interviews with I/S executives and surveys of team members and business users at nine organizations. Results of path analysis provide support for the model's main prediction that integrative conflict management and distributive conflict management can contribute to I/S success by influencing differently the manifestation and resolution of conflict in software development. Further, analyses of variance indicate that object-oriented development, when compared to rapid and structured development, appears to produce the lowest levels of conflict management, conflict resolution, and I/S success. The proposed model and findings suggest academic implications for understanding the effects of different conflict management behaviors on software development outcomes, and practical implications for better managing the software development process, especially in user-oriented development environments.
Abstract:
This dissertation develops a process improvement method for service operations based on the Theory of Constraints (TOC), a management philosophy that has been shown to be effective in manufacturing for decreasing WIP and improving throughput. While TOC has enjoyed much attention and success in the manufacturing arena, its application to services in general has been limited. The contribution to industry and knowledge is a method for improving global performance measures based on TOC principles. The method proposed in this dissertation is tested using discrete event simulation based on the scenario of the service factory of airline turnaround operations. To evaluate the method, a simulation model of aircraft turn operations of a U.S.-based carrier was built and validated using actual data from airline operations. The model was then adjusted to reflect an application of the Theory of Constraints for determining how to deploy the scarce resource of ramp workers. The results indicate that, given slight modifications to TOC terminology and the development of a method for constraint identification, the Theory of Constraints can be applied with success to services. Bottlenecks in services must be defined as those processes whose process rate, relative to the amount of work remaining, is too low for the process to be completed without an increase in that rate. The bottleneck ratio is used to determine to what degree a process is a constraint. Simulation results also suggest that redefining performance measures to reflect a global business perspective of reducing costs related to specific flights, versus the operational local-optimum approach of turning all aircraft quickly, results in significant savings to the company. Simulated savings to the airline's annual operating costs equaled 30% of potential current expenses for misconnecting passengers, with a modest increase in worker utilization achieved through a more efficient heuristic of deploying workers to the highest-priority tasks. This dissertation contributes to the literature on service operations by describing a dynamic, adaptive dispatch approach to managing service factory operations similar to airline turnaround operations using the management philosophy of the Theory of Constraints.
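The abstract does not state the exact constraint-identification formula; one plausible reading of the definition above is a bottleneck ratio of required rate (work remaining divided by time remaining) over available process rate, with values above 1 flagging a constraint. The sketch below uses that reading; the process names, field names, and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TurnProcess:
    name: str
    work_remaining: float   # e.g., remaining bags to load
    process_rate: float     # units of work per minute at current staffing
    time_remaining: float   # minutes until scheduled departure

def bottleneck_ratio(p: TurnProcess) -> float:
    """Ratio of the rate needed to finish on time to the rate available.
    Values > 1 mean the process cannot finish without an increase in rate."""
    required_rate = p.work_remaining / max(p.time_remaining, 1e-9)
    return required_rate / p.process_rate

processes = [
    TurnProcess("baggage loading", work_remaining=120, process_rate=4.0, time_remaining=25),
    TurnProcess("cabin cleaning", work_remaining=30, process_rate=2.0, time_remaining=25),
]
# Deploy scarce ramp workers to the process with the highest ratio first.
for p in sorted(processes, key=bottleneck_ratio, reverse=True):
    print(p.name, round(bottleneck_ratio(p), 2))
```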
Abstract:
Hospitals and healthcare facilities in the United States are facing serious shortages of medical laboratory personnel, which, if not addressed, stand to negatively impact patient care. The problem is compounded by a reduction in the number of academic programs and a resulting decrease in the number of graduates to keep up with the increase in industry demands. Given these challenges, the purpose of this study was to identify predictors of success for students in a selected 2-year Medical Laboratory Technology Associate in Science Degree Program. This study examined five academic factors (College Placement Test Math and Reading scores, Cumulative GPA, Science GPA, and Professional [first semester laboratory courses] GPA) and demographic data to see if any of these factors could predict program completion. The researcher examined academic records for a 10-year period (N = 158). Using a retrospective model, the correlational analysis between the variables and completion revealed a significant relationship (p < .05) for CGPA, SGPA, CPT Math, and PGPA, indicating that students with higher CGPA, SGPA, CPT Math, and PGPA were more likely to complete their degree in 2 years. Binary logistic regression analysis with the same academic variables revealed that PGPA was the best predictor of program completion (p < .001). Additionally, the findings in this study are consistent with the academic part of the Bean and Metzner Conceptual Model of Nontraditional Student Attrition, which points to academic outcome variables such as GPA as affecting attrition. Thus, the findings in this study are important to students and educators in the field of Medical Laboratory Technology, since PGPA is a predictor that can be used to provide early in-program intervention to at-risk students, thus increasing the chances of successful, timely completion.
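A minimal sketch of the kind of binary logistic regression described here. The data frame, column names, and coefficients below are synthetic and illustrative only (the dependence is deliberately weighted toward PGPA to mirror the reported finding); they are not the study's records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic records standing in for the study's data; column names are illustrative.
rng = np.random.default_rng(0)
n = 158
df = pd.DataFrame({
    "CPT_MATH": rng.normal(80, 10, n),
    "CGPA": rng.normal(3.0, 0.4, n),
    "SGPA": rng.normal(2.9, 0.5, n),
    "PGPA": rng.normal(3.1, 0.5, n),
})
# Completion made to depend most strongly on PGPA, mirroring the reported result.
logit = -9 + 3.0 * df["PGPA"] + 0.2 * df["CGPA"]
df["completed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["CPT_MATH", "CGPA", "SGPA", "PGPA"]])
model = sm.Logit(df["completed"], X).fit(disp=0)
print(model.summary())   # inspect which predictors are significant
```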
Abstract:
A high-resolution multiparameter stratigraphy allows the identification of late Quaternary glacial and interglacial cycles in a central Arctic Ocean sediment core. Distinct sandy layers in the upper part of the otherwise fine-grained sediment core from the Lomonosov Ridge (lat 87.5°N) correlate to four major glacials since ca. 0.7 Ma. The composition of these ice-rafted terrigenous sediments points to a glaciated northern Siberia as the main source. In contrast, lithic carbonates derived from North America are also present in older sediments and indicate a northern North American glaciation since at least 2.8 Ma. We conclude that large-scale northern Siberian glaciation began much later than other Northern Hemisphere ice sheets.
Abstract:
The sensitivity of the tropics to climate change, particularly the amplitude of glacial-to-interglacial changes in sea surface temperature (SST), is one of the great controversies in paleoclimatology. Here we reassess faunal estimates of ice age SSTs, focusing on the problem of no-analog planktonic foraminiferal assemblages in the equatorial oceans that confounds both classical transfer function and modern analog methods. A new calibration strategy developed here, which uses past variability of species to define robust faunal assemblages, solves the no-analog problem and reveals ice age cooling of 5° to 6°C in the equatorial current systems of the Atlantic and eastern Pacific Oceans. Classical transfer functions underestimated temperature changes in some areas of the tropical oceans because core-top assemblages misrepresented the ice age faunal assemblages. Our finding is consistent with some geochemical estimates and model predictions of greater ice age cooling in the tropics than was inferred by Climate: Long-Range Investigation, Mapping, and Prediction (CLIMAP) [1981] and thus may help to resolve a long-standing controversy. Our new foraminiferal transfer function suggests that such cooling was limited to the equatorial current systems, however, and supports CLIMAP's inference of stability of the subtropical gyre centers.
Abstract:
In this paper we propose a lesson for introducing the teaching of probability using the game of discs, which is based on the concept of geometric probability and in which the aim is to determine the probability that a randomly thrown disc does not intersect the lines of a gridded surface. The problem was posed to a group of 3rd-year students at the Federal Institute of Education, Science and Technology of Rio Grande do Norte - João Câmara. The students were asked to build a grid board for which the players' success percentage had been defined for them in advance. Once the grid board was built, the students checked whether that theoretically predetermined percentage corresponded to the results obtained through experimentation. The students' results and their attitude in subsequent classes suggested greater engagement with the discipline, making the environment conducive to learning.
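For a square grid, the standard geometric-probability result behind the game is that a disc of radius r misses every line exactly when its centre falls in a smaller square of side d − 2r inside a cell of side d, giving P = ((d − 2r)/d)². The grid spacing and disc radius below are illustrative values, and the Monte Carlo check is a sketch of the experimentation step, not the activity's actual materials.

```python
import random

def p_no_hit_theory(d, r):
    """Square grid with cell side d, disc of radius r (2r < d):
    P(no line is intersected) = ((d - 2r) / d) ** 2."""
    return ((d - 2 * r) / d) ** 2

def p_no_hit_simulated(d, r, n=200_000):
    hits = 0
    for _ in range(n):
        x, y = random.uniform(0, d), random.uniform(0, d)  # disc centre within one cell
        if r < x < d - r and r < y < d - r:                # clears all four cell edges
            hits += 1
    return hits / n

d, r = 10.0, 2.0   # illustrative grid spacing and disc radius
print(p_no_hit_theory(d, r), p_no_hit_simulated(d, r))   # both close to 0.36
```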
Abstract:
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: i.e., we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take into account the geometry of the data to: regularize the solution, cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learn dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we proposed three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure tensor based regularization term) in order to improve the sharpness of edges. SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic distance based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size. The aSOB strategy outperforms both PCA and PGA methods. Finally, we combine all our methods in a unique algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results when compared to the results of state-of-the-art methods.
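As a hedged illustration of the kind of local geometry that a structure tensor based regularizer such as SE-ASDS exploits (this is not the thesis's actual regularization term), the snippet below computes the 2-D structure tensor of an image and its eigenvalues, from which edge strength and orientation can be read off. The toy image and smoothing scale are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(img, sigma=1.5):
    """Return the smoothed structure-tensor components (Jxx, Jxy, Jyy)."""
    ix = sobel(img, axis=1, mode="reflect")
    iy = sobel(img, axis=0, mode="reflect")
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    return jxx, jxy, jyy

img = np.zeros((64, 64)); img[:, 32:] = 1.0     # toy image with one vertical edge
jxx, jxy, jyy = structure_tensor(img)
# Eigenvalues of the 2x2 tensor: a large lambda1 with small lambda2 marks an edge.
tr, det = jxx + jyy, jxx * jyy - jxy ** 2
disc = np.sqrt(np.maximum(tr ** 2 - 4 * det, 0))
lam1, lam2 = 0.5 * (tr + disc), 0.5 * (tr - disc)
print(float(lam1.max()), float(lam2.max()))
```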
Abstract:
This paper presents a summary of the key objectives, instrumentation and logistic details, goals, and initial scientific findings of the European Marie Curie Action SAPUSS project carried out in the western Mediterranean Basin (WMB) during September-October 2010. The key SAPUSS objective is to deduce aerosol source characteristics and to understand the atmospheric processes responsible for their generation and transformation - both horizontally and vertically - in the Mediterranean urban environment. In order to achieve this, the unique approach of SAPUSS is the concurrent measurement of aerosols with multiple techniques operating simultaneously at six monitoring sites around the city of Barcelona (NE Spain): a main road traffic site, two urban background sites, a regional background site and two urban tower sites (150 m and 545 m above sea level, 150 m and 80 m above ground, respectively). SAPUSS allows us to considerably advance our knowledge of the atmospheric chemistry and physics of the urban Mediterranean environment. This is achieved thanks to both the three-dimensional spatial scale and the high sampling time resolution used. During SAPUSS different meteorological regimes were encountered, including warm Saharan, cold Atlantic, wet European and stagnant regional ones. The different meteorology of these regimes is described herein. Additionally, we report the trends of the parameters regulated for air quality purposes (both gaseous and aerosol mass concentrations), and we also compare the six monitoring sites. High levels of traffic-related gaseous pollutants were measured at the urban ground-level monitoring sites, whereas layers of tropospheric ozone were recorded at tower levels. In particular, tower-level night-time average ozone concentrations (80 ± 25 µg m⁻³) were up to double those at ground level. The examination of the vertical profiles clearly shows the predominant influence of NOx on ozone concentrations, and a source of ozone aloft. Analysis of the particulate matter (PM) mass concentrations shows an enhancement of coarse particles (PM2.5-10) at the urban ground level (+64%, average 11.7 µg m⁻³) but of fine ones (PM1) at the urban tower level (+28%, average 14.4 µg m⁻³). These results show complex dynamics of the size-resolved PM mass at both horizontal and vertical scales of the study area. Preliminary modelling findings reveal an underestimation of the fine accumulation aerosols. In summary, this paper lays the foundation of SAPUSS, an integrated study of relevance to many other similar urban Mediterranean coastal environment sites.
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for particular radiotherapy assessment. Thus, the study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and some improvements regarding DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm is built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully-sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of PK maps generated from the undersampled data in reference to the PK maps generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images that were reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully-sampled data sets. These results suggest that DCE-MRI acceleration using the investigated image reconstruction method is feasible and promising.
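A rough sketch of the two kinds of undersampling grids described above, for a single 2-D slice. The matrix size, number of rays, and sampling fraction are hypothetical, and the actual study used a special angular distribution and spatiotemporal constraints between frames that are not reproduced here.

```python
import numpy as np

def radial_mask(n, n_rays):
    """Binary mask sampling n_rays straight rays through the centre of an n x n k-space."""
    mask = np.zeros((n, n), dtype=bool)
    c = (n - 1) / 2.0
    radii = np.arange(-c, c + 1)
    for theta in np.linspace(0, np.pi, n_rays, endpoint=False):
        rows = np.clip(np.round(c + radii * np.sin(theta)).astype(int), 0, n - 1)
        cols = np.clip(np.round(c + radii * np.cos(theta)).astype(int), 0, n - 1)
        mask[rows, cols] = True
    return mask

def cartesian_random_mask(n, fraction, rng):
    """Randomly sampled phase-encode lines (rows), always keeping the k-space centre."""
    mask = np.zeros((n, n), dtype=bool)
    keep = rng.choice(n, size=int(fraction * n), replace=False)
    mask[keep, :] = True
    mask[n // 2 - 2 : n // 2 + 2, :] = True
    return mask

rng = np.random.default_rng(0)
# Effective sampling fractions (roughly the inverse of the acceleration factor).
print(radial_mask(128, 32).mean(), cartesian_random_mask(128, 0.25, rng).mean())
```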
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is conventionally presented as an integral expression. The method also incorporates an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), this new method was able to calculate PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, the calculation efficiency of this new method was superior to current methods by roughly two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
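The abstract does not give the thesis's exact matrix formulation; the sketch below uses the well-known linear least-squares form of the Tofts model, Ct(t) = Ktrans ∫Cp dτ − kep ∫Ct dτ, which is one standard way to solve the PK parameters as a linear problem (the KZ filtering step is omitted). The arterial input function and parameter values are synthetic.

```python
import numpy as np

def fit_tofts_linear(t, cp, ct):
    """Linear least-squares fit of the Tofts model:
    Ct(t) = Ktrans * int_0^t Cp - kep * int_0^t Ct."""
    int_cp = np.concatenate([[0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    int_ct = np.concatenate([[0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))])
    A = np.column_stack([int_cp, -int_ct])
    ktrans, kep = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, kep

# Synthetic high-temporal-resolution example (1 s sampling, illustrative values).
t = np.arange(0, 300, 1.0)                       # seconds
cp = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)     # toy arterial input function
ktrans_true, kep_true = 0.25 / 60, 0.60 / 60     # per second
ct = np.zeros_like(t)
for i in range(1, len(t)):                       # forward-Euler solution of dCt/dt
    ct[i] = ct[i - 1] + (ktrans_true * cp[i - 1] - kep_true * ct[i - 1]) * 1.0

print(fit_tofts_linear(t, cp, ct))               # recovers roughly (0.0042, 0.0100)
```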
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology development along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity. This approach is motivated by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment and control groups received multiple-fraction treatments with one pre-treatment and multiple post-treatment high spatiotemporal resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than the accuracy obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned before, the selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When using dynamic FSD parameters, the treatment/control group classification after the first treatment fraction was better than when using conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
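For orientation only, the sketch below estimates the classic box-counting dimension of a binary map, a simpler relative of the Rényi dimensions and fractal-signature measures used above (it is not the thesis's GLLPM or FSD method). The thresholded random field standing in for a PK rate constant map, the box sizes, and the threshold are assumptions.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 2-D binary mask."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        m = n - n % s
        blocks = mask[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing any "on" pixel
    # Slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy "parameter map": a thresholded random field standing in for a Ktrans map.
rng = np.random.default_rng(1)
mask = rng.random((128, 128)) > 0.5
print(box_counting_dimension(mask))    # close to 2 for a space-filling random mask
```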
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version has been widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using comparison of PK parameter regional mean values. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model was shown to be superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from these two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Testing for two-sample differences is challenging when the differences are local and only involve a small portion of the data. To solve this problem, we apply a multi-resolution scanning framework that performs dependent local tests on subsets of the sample space. We use a nested dyadic partition of the sample space to obtain a collection of windows and test for sample differences within each window. We put a joint prior on the states of the local hypotheses that allows both vertical and horizontal message passing along the partition tree to reflect the spatial dependency among windows. This information-passing framework is critical for detecting local sample differences. We use both the loopy belief propagation algorithm and MCMC to obtain the posterior null probability for each window. These probabilities are then used to report sample differences based on decision procedures. Simulation studies are conducted to illustrate the performance. Multiple testing adjustment and convergence of the algorithms are also discussed.
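A much-simplified frequentist analogue of the scanning idea (no joint prior, message passing, or posterior null probabilities): build the nested dyadic windows and run an independent local two-sample test of counts in each. The depth, test choice, and simulated data are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import fisher_exact

def dyadic_windows(lo, hi, depth):
    """All windows of a nested dyadic partition of [lo, hi) up to the given depth."""
    windows = []
    for d in range(depth + 1):
        step = (hi - lo) / 2 ** d
        windows += [(lo + i * step, lo + (i + 1) * step) for i in range(2 ** d)]
    return windows

def scan(x, y, depth=4, top=5):
    """Local two-sample comparison of counts in each dyadic window (illustrative only)."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max()) + 1e-9
    results = []
    for a, b in dyadic_windows(lo, hi, depth):
        nx, ny = int(np.sum((x >= a) & (x < b))), int(np.sum((y >= a) & (y < b)))
        _, p = fisher_exact([[nx, len(x) - nx], [ny, len(y) - ny]])
        results.append(((a, b), p))
    return sorted(results, key=lambda r: r[1])[:top]   # windows with smallest p-values

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2000)
y = np.concatenate([rng.normal(0, 1, 1900), rng.normal(2.5, 0.1, 100)])  # small local bump
for win, p in scan(x, y):
    print(win, p)
```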
Abstract:
This thesis details the top-down fabrication of nanostructures on Si and Ge substrates by electron beam lithography (EBL). Various polymeric resist materials were used to create nanopatterns by EBL, and Chapter 1 discusses the development characteristics of these resists. Chapter 3 describes the processing parameters, resolution, and topographical and structural changes of a new EBL resist known as ‘SML’. A comparison between SML and the standard resists PMMA and ZEP520A was undertaken to determine the suitability of SML as an EBL resist. It was established that SML is capable of high-resolution patterning and showed good pattern transfer capabilities. Germanium is a desirable material for use in microelectronic applications due to a number of qualities superior to those of silicon. EBL patterning of Ge with high-resolution hydrogen silsesquioxane (HSQ) resist is, however, difficult due to the presence of native surface oxides. Thus, to combat this problem, a new technique for passivating Ge surfaces prior to EBL processes is detailed in Chapter 4. The surface passivation was carried out using simple acids such as citric acid and acetic acid. The acids were gentle on the surface and enabled the formation of high-resolution arrays of Ge nanowires using HSQ resist. Chapter 5 details the directed self-assembly (DSA) of block copolymers (BCPs) on EBL-patterned Si and, for the very first time, Ge surfaces. DSA of BCPs on templated substrates is a promising technology for high-volume and cost-effective nanofabrication. The BCP employed for this study was poly(styrene-b-ethylene oxide), and the substrates were pre-defined by HSQ templates produced by EBL. The DSA technique resulted in pattern rectification (ordering of the BCP) and in pattern multiplication within smaller areas.