921 results for non-uniform


Relevance: 60.00%

Abstract:

In this thesis we have developed solutions to common issues affecting widefield microscopes, addressing the intensity inhomogeneity of single images and two strong limitations of the instrument: the impossibility of acquiring highly detailed images representative of whole samples, and the impossibility of imaging deep 3D objects. First, we coped with the non-uniform distribution of the light signal within a single image, known as vignetting. In particular, we proposed, for both light and fluorescence microscopy, non-parametric multi-image methods in which the vignetting function is estimated directly from the sample, without requiring any prior information.

After obtaining flat-field-corrected images, we studied how to overcome the limited field of view of the camera, so as to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working online. Starting from a set of manually acquired overlapping images, we validated a fast registration approach that accurately stitches the images together.

Finally, we worked on virtually extending the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image of objects that have significant depth or lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. Different standard metrics are commonly used in the literature to compare the outcomes of existing methods; however, no metric is available for comparing methods on real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any ground-truth reference. Second, we proved that the approach we developed performs better in both synthetic and real cases.
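
As a rough illustration of the multi-image idea (a minimal sketch under assumed conditions, not the thesis's exact algorithm; all names are illustrative), the vignetting profile can be estimated as a smoothed per-pixel median over many images of different fields of view and then divided out:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_flat_field(stack, sigma=25):
    """Estimate a vignetting (flat-field) profile from a stack of images.

    stack : (N, H, W) array of N images of *different* fields of view.
    The per-pixel median suppresses sample structure, leaving the
    smooth illumination profile shared by all images.
    """
    profile = np.median(stack, axis=0)
    profile = gaussian_filter(profile, sigma)   # enforce smoothness
    return profile / profile.mean()             # normalize to mean 1

def correct(image, flat):
    """Divide out the estimated vignetting profile."""
    return image / flat

# usage: flat = estimate_flat_field(np.stack(images))
#        corrected = correct(images[0], flat)
```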

Relevance: 60.00%

Abstract:

Subdivision surfaces are an excellent and important tool used mainly in 3D animation, since they allow surfaces of arbitrary shape to be defined. This technology extends the concept of B-splines and allows extreme freedom from topological constraints. Non-Uniform Rational B-Splines (NURBS) can also define surfaces of arbitrary shape, but they do not leave enough freedom for the construction of free-form shapes: unlike subdivision surfaces, they require several surface patches to be joined together (trimming). NURBS technology is therefore used mainly in CAD environments, whereas in computer graphics subdivision surfaces have been in widespread use for more than 30 years. The aim of this thesis is to summarize the concepts behind this technology, to analyze some of the most widely used subdivision schemes, and to discuss briefly how these schemes and algorithms are used in practice for 3D animation.
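
Although the thesis concerns surfaces, the refine-and-smooth idea behind subdivision is easiest to see in its curve analogue. The sketch below shows Chaikin's corner-cutting scheme, which converges to the uniform quadratic B-spline of its control polygon and so illustrates how subdivision generalizes B-splines (illustrative code, not from the thesis):

```python
import numpy as np

def chaikin(points, iterations=3):
    """Chaikin's corner-cutting subdivision for an open polyline.

    Each edge (P_i, P_{i+1}) is replaced by two new points at the
    1/4 and 3/4 positions; repeating this converges to the uniform
    quadratic B-spline defined by the original control polygon.
    """
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # 1/4 points
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # 3/4 points
        pts = np.empty((2 * len(q), pts.shape[1]))
        pts[0::2], pts[1::2] = q, r
    return pts

# usage: chaikin([[0, 0], [1, 2], [3, 2], [4, 0]], iterations=4)
```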

Relevance: 60.00%

Abstract:

OBJECTIVES: The aim of this study is to assess the structural and cross-cultural validity of the KIDSCREEN-27 questionnaire. METHODS: The 27-item version of the KIDSCREEN instrument was derived from a longer 52-item version and was administered to young people aged 8-18 years in 13 European countries in a cross-sectional survey. Structural and cross-cultural validity were tested using multitrait multi-item analysis, exploratory and confirmatory factor analysis, and Rasch analyses. Zumbo's logistic regression method was applied to assess differential item functioning (DIF) across countries. Reliability was assessed using Cronbach's alpha. RESULTS: Responses were obtained from n = 22,827 respondents (response rate 68.9%). For the combined sample from all countries, exploratory factor analysis with procrustean rotations revealed a five-factor structure which explained 56.9% of the variance. Confirmatory factor analysis indicated an acceptable model fit (RMSEA = 0.068, CFI = 0.960). The unidimensionality of all dimensions was confirmed (INFIT: 0.81-1.15). Differential item functioning (DIF) results across the 13 countries showed that 5 items presented uniform DIF whereas 10 displayed non-uniform DIF. Reliability was acceptable (Cronbach's alpha = 0.78-0.84 for individual dimensions). CONCLUSIONS: There was substantial evidence for the cross-cultural equivalence of the KIDSCREEN-27 across the countries studied and the factor structure was highly replicable in individual countries. Further research is needed to correct scores based on DIF results. The KIDSCREEN-27 is a new short and promising tool for use in clinical and epidemiological studies.
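
For reference, Cronbach's alpha, the reliability coefficient reported above, can be computed directly from an item-score matrix. The snippet below is a generic illustration (not the study's analysis code), assuming numerically scored items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# usage: cronbach_alpha(np.random.randint(1, 6, size=(200, 5)))
```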

Relevance: 60.00%

Abstract:

Light-frame wood buildings are widely built in the United States (U.S.), and natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that their influence on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform.

In certain areas of the U.S., snow accumulation is significant, causes huge economic losses, and threatens life safety, yet limited work has investigated the snow hazard in combination with the seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center, and is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation.

An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For homeowners and stakeholders, risk expressed as economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. The first assesses the loss of a building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The second applies the framework to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
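
The abstract does not spell out the FPP formulation, but a filtered Poisson process for snow load is commonly written as a sum of event pulses shaped by a decay filter. The toy sketch below simulates one season under that reading; every rate, depth, and time constant is an assumed value for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def filtered_poisson_snow(T=180.0, rate=0.1, mean_depth=50.0, tau=20.0, dt=0.5):
    """Toy filtered Poisson process for ground snow load.

    Snowfall events arrive as a Poisson process (rate per day); each
    deposits an exponentially distributed depth (mm water equivalent)
    that then decays with time constant tau (melting/compaction):
        S(t) = sum_i X_i * exp(-(t - t_i) / tau),  t >= t_i
    """
    n = rng.poisson(rate * T)                    # number of snowfall events
    t_i = np.sort(rng.uniform(0, T, n))          # event times
    x_i = rng.exponential(mean_depth, n)         # event magnitudes
    t = np.arange(0, T, dt)
    s = np.zeros_like(t)
    for ti, xi in zip(t_i, x_i):
        s += np.where(t >= ti, xi * np.exp(-(t - ti) / tau), 0.0)
    return t, s

# t, s = filtered_poisson_snow(); s.max() is the seasonal peak load
```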

Relevance: 60.00%

Abstract:

Four papers, written in collaboration with the author’s graduate school advisor, are presented. In the first paper, uniform and non-uniform Berry-Esseen (BE) bounds on the convergence to normality of a general class of nonlinear statistics are provided; novel applications to specific statistics, including the non-central Student’s, Pearson’s, and the non-central Hotelling’s, are also stated. In the second paper, a BE bound on the rate of convergence of the F-statistic used in testing hypotheses from a general linear model is given. The third paper considers the asymptotic relative efficiency (ARE) between the Pearson, Spearman, and Kendall correlation statistics; conditions sufficient to ensure that the Spearman and Kendall statistics are equally (asymptotically) efficient are provided, and several models are considered which illustrate the use of such conditions. Lastly, the fourth paper proves that, in the bivariate normal model, the ARE between any of these correlation statistics possesses certain monotonicity properties; quadratic lower and upper bounds on the ARE are stated as direct applications of such monotonicity patterns.
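
The papers' results are analytic, but the flavor of an efficiency comparison can be previewed numerically. The sketch below is a Monte Carlo illustration under an assumed bivariate normal model, not the papers' method; note that the three statistics estimate different population quantities, so the raw variances are only suggestive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def correlation_spread(rho=0.5, n=200, reps=2000):
    """Monte Carlo spread of the Pearson, Spearman, and Kendall
    statistics under a bivariate normal model with correlation rho.

    Smaller sampling variance at the same n hints at higher relative
    efficiency (an illustration, not a proof).
    """
    cov = [[1.0, rho], [rho, 1.0]]
    out = {"pearson": [], "spearman": [], "kendall": []}
    for _ in range(reps):
        x, y = rng.multivariate_normal([0, 0], cov, size=n).T
        out["pearson"].append(stats.pearsonr(x, y)[0])
        out["spearman"].append(stats.spearmanr(x, y)[0])
        out["kendall"].append(stats.kendalltau(x, y)[0])
    return {k: np.var(v) for k, v in out.items()}

# print(correlation_spread())
```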

Relevance: 60.00%

Abstract:

Skeletal muscle force evaluation is difficult to implement in a clinical setting. Muscle force is typically assessed through manual muscle testing, isokinetic/isometric dynamometry, or electromyography (EMG). Manual muscle testing is a subjective evaluation of a patient's ability to move voluntarily against gravity and to resist force applied by an examiner. Muscle testing using dynamometers adds accuracy by quantifying the functional mechanical output of a limb; however, like manual muscle testing, dynamometry only provides estimates of the joint moment. EMG quantifies the neuromuscular activation signals of individual muscles and is used to infer muscle function. Despite the abundance of work performed to determine the degree to which EMG signals and muscle forces are related, the basic problem remains that EMG cannot provide a quantitative measurement of muscle force.

Intramuscular pressure (IMP), the pressure applied by muscle fibers on the interstitial fluid, has been considered as a correlate for muscle force. Numerous studies have shown an approximately linear relationship between IMP and muscle force, and a microsensor has recently been developed that is accurate, biocompatible, and appropriately sized for clinical use. While muscle force and pressure have been shown to be correlates, IMP has been shown to be non-uniform within the muscle. As it would not be practicable to evaluate experimentally how IMP is distributed, computational modeling may provide the means to fully evaluate IMP generation in muscles of various shapes and operating conditions. The work presented in this dissertation focuses on the development and validation of computational models of passive skeletal muscle and the evaluation of their performance for the prediction of IMP. A transversely isotropic, hyperelastic, and nearly incompressible model is evaluated along with a poroelastic model.
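
For orientation, constitutive models of this class are built from a strain-energy function. The form below is a common template for transversely isotropic, nearly incompressible soft tissue, shown as an illustrative assumption rather than the dissertation's exact model:

```latex
% A common strain-energy template for transversely isotropic, nearly
% incompressible soft tissue (illustrative; not necessarily the
% dissertation's exact constitutive law):
W(\bar{I}_1, \bar{I}_4, J)
  = c_1 (\bar{I}_1 - 3)            % isotropic (neo-Hookean) matrix
  + W_f(\bar{I}_4)                 % along-fiber contribution
  + \tfrac{K}{2} (\ln J)^2         % volumetric penalty enforcing J \approx 1
\qquad \text{with} \quad
\bar{I}_1 = \operatorname{tr} \bar{\mathbf{C}}, \quad
\bar{I}_4 = \mathbf{a}_0 \cdot \bar{\mathbf{C}} \, \mathbf{a}_0, \quad
\bar{\mathbf{C}} = J^{-2/3} \mathbf{F}^{\mathsf{T}} \mathbf{F}
```

Here a0 is the reference fiber direction, J = det F, and I4-bar measures the squared fiber stretch. In a poroelastic variant, the interstitial fluid pressure predicted by the model is the quantity that would be compared against measured IMP.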

Relevance: 60.00%

Abstract:

The Mobile Mesh Network based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for the logistics system. In-transit containers form a multi-hop mesh network that forwards the tracking information to nearby sinks, which in turn deliver the information to the remote control center via satellite. The fundamental challenge for the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate energy-efficient communication in the MMN-ITV system thoroughly.

First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is shown that shortest-path routing is energy-efficient in mobile networks, while load-balanced routing is energy-efficient in static networks. For the MMN-ITV system, which contains both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best trade-off between the two.

Secondly, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes an amount of energy similar to that of data communication. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy spent on neighbor discovery.

Vehicular Ad Hoc Network (VANET)-based inter-vehicle communication is now widely believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. The model shows a significant increase in end-to-end delay as networks move from non-saturated to saturated, and hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate that does not cause saturation) rather than the delay itself. Based on this model, it is shown that adopting a uniform transmission range for every vehicle may hinder delay performance, since it does not allow the coexistence of short path lengths and low interference. Clusters are proposed to configure non-uniform transmission ranges for the vehicles; analysis and simulation confirm that such a configuration enhances the real-time capacity and provides an improved trade-off between end-to-end delay and network capacity. A distributed clustering protocol with minimum message overhead is proposed, which achieves low convergence time.
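
Returning to the energy-threshold routing idea above: the abstract does not define the rule precisely, but one plausible reading is a shortest-path search whose edge costs become residual-energy-aware once relay batteries fall below a threshold. The sketch below implements that reading; the graph encoding, cost rule, and threshold are all illustrative assumptions:

```python
import heapq

def energy_threshold_route(graph, energy, src, dst, threshold=0.3):
    """Dijkstra with an energy-aware cost (illustrative sketch).

    graph: {node: {neighbor: hop_cost}}; energy: {node: residual 0..1}.
    Above the threshold a hop costs its base value (shortest-path
    behavior); below it the cost is inflated, steering traffic away
    from depleted nodes (load-balancing behavior).
    """
    def cost(u, v):
        base = graph[u][v]
        return base if energy[v] >= threshold else base / max(energy[v], 1e-6)

    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]
```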

Relevance: 60.00%

Abstract:

Nanoparticles are fascinating because their physical and optical properties depend on their size. Highly controllable synthesis methods and nanoparticle assembly are essential [6] for highly innovative technological applications. Among nanoparticles, nonhomogeneous core-shell nanoparticles (CSnp) exhibit new properties that arise when the relative dimensions of the core and the shell are varied. The CSnp structure enables various optical resonances and engineered energy barriers, in addition to a high charge-to-surface ratio. Assembly of homogeneous nanoparticles into functional structures has become ubiquitous in biosensors (i.e., optical labeling) [7, 8], nanocoatings [9-13], and electrical circuits [14, 15], whereas the assembly of nonhomogeneous nanoparticles has been explored only to a limited extent.

Many conventional nanoparticle assembly methods exist, but this work explores dielectrophoresis (DEP) as a new method. DEP is the polarization of particles suspended in conductive fluids by non-uniform electric fields; most prior DEP efforts involve microscale particles. Prior work on core-shell nanoparticle assemblies and, separately, on nanoparticle characterization with dielectrophoresis and electrorotation [2-5] did not systematically explore particle size, dielectric properties (permittivity and electrical conductivity), shell thickness, particle concentration, medium conductivity, and frequency. This work is the first, to the best of our knowledge, to systematically examine these dielectrophoretic properties for core-shell nanoparticles. Further, we conduct a parametric fitting to traditional core-shell models. These biocompatible core-shell nanoparticles were studied to fill a knowledge gap in the DEP field.

Experimental results (chapter 5) first examine the medium-conductivity, size, and shell-material dependencies of the dielectrophoretic assembly of spherical CSnp into 2D and 3D particle assemblies. Chitosan (amino sugar) and poly-L-lysine (amino acid, PLL) CSnp shell materials were custom-synthesized around a hollow (gas) core by using a phospholipid micelle around a volatile fluid as a template for the shell material; this approach proves to be novel and distinct from conventional core-shell models, wherein a conductive core is coated with an insulative shell. Experiments were conducted within a 100 nL chamber housing 100 µm wide Ti/Au quadrupole electrodes spaced 25 µm apart. Frequencies from 100 kHz to 80 MHz at a fixed local field of 5 Vpp were tested with 10⁻⁵ and 10⁻³ S/m medium conductivities for 25 seconds. Dielectrophoretic responses of ~220 and ~340 (or ~400) nm chitosan or PLL CSnp were compiled as a function of medium conductivity, size, and shell material.
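
The traditional core-shell models mentioned above typically reduce the shelled sphere to an equivalent homogeneous particle before evaluating the Clausius-Mossotti factor, whose real part sets the sign and strength of the DEP response. Below is a sketch of that standard single-shell calculation; the material values in the example are assumed for illustration:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_permittivity(eps_r, sigma, omega):
    """eps* = eps - j*sigma/omega."""
    return eps_r * EPS0 - 1j * sigma / omega

def clausius_mossotti_core_shell(f, r_core, r_shell,
                                 eps_core, sig_core,
                                 eps_shell, sig_shell,
                                 eps_med, sig_med):
    """Re[f_CM] for a single-shell sphere (standard smeared-out model).

    The core + shell is first replaced by a homogeneous sphere with an
    effective permittivity, which is then inserted into the usual
    Clausius-Mossotti factor. Positive Re[f_CM] means positive DEP
    (attraction to high-field regions); negative means negative DEP.
    """
    w = 2 * np.pi * f
    ec = complex_permittivity(eps_core, sig_core, w)
    es = complex_permittivity(eps_shell, sig_shell, w)
    em = complex_permittivity(eps_med, sig_med, w)
    g = (r_shell / r_core) ** 3
    k = (ec - es) / (ec + 2 * es)
    eff = es * (g + 2 * k) / (g - k)       # effective particle permittivity
    return ((eff - em) / (eff + 2 * em)).real

# example: hollow (gas) core, polymer-like shell, low-conductivity medium
f = np.logspace(5, 7.9, 200)               # 100 kHz .. 80 MHz
fcm = clausius_mossotti_core_shell(f, 150e-9, 170e-9,
                                   1.0, 1e-12,   # gas core (assumed values)
                                   60.0, 1e-3,   # shell (assumed values)
                                   78.0, 1e-5)   # aqueous medium (assumed)
```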

Relevance: 60.00%

Abstract:

Central Switzerland lies tectonically in an intraplate area, and recurrence intervals of strong earthquakes exceed the time span covered by historical chronicles. However, many lakes are present in the area that act as natural seismographs: their continuous, datable, and high-resolution sediment succession allows extension of the earthquake catalogue to pre-historic times. This study reviews and compiles available data sets and results from more than 10 years of lacustrine palaeoseismological research in lakes of northern and Central Switzerland. The concept of using lacustrine mass-movement event stratigraphy to identify palaeo-earthquakes is showcased by presenting new data and results from Lake Zurich. The Late Glacial to Holocene mass-movement units in this lake document a complex history of varying tectonic and environmental impacts. Results include sedimentary evidence of three major and three minor, simultaneously triggered basin-wide lateral slope failure events interpreted as the fingerprints of palaeoseismic activity.

A refined earthquake catalogue, which includes results from previous lake studies, reveals a non-uniform temporal distribution of earthquakes in northern and Central Switzerland. A higher frequency of earthquakes in the Late Glacial and Late Holocene periods documents two different phases of neotectonic activity; these are interpreted to be related to isostatic post-glacial rebound and relatively recent (re-)activation of seismogenic zones, respectively. Magnitude and epicentre reconstructions for the largest identified earthquakes provide evidence for two possible earthquake sources: (i) a source area in the region of the Alpine or Sub-Alpine Front due to release of accumulated north-west/south-east compressional stress related to an active basal thrust beneath the Aar massif; and (ii) a source area beneath the Alpine foreland due to reactivation of deep-seated strike-slip faults. Such activity has been repeatedly observed instrumentally, for example, during the most recent magnitude 4.2 and 3.5 earthquakes of February 2012, near Zug. The combined lacustrine record from northern and Central Switzerland indicates that at least one of these potential sources has been capable of producing magnitude 6.2 to 6.7 events in the past.

Relevance: 60.00%

Abstract:

Hypothesis and Objectives: PEGylated liposomal blood pool contrast agents maintain contrast enhancement over several hours. This study aimed to evaluate (long-term) imaging of pulmonary arteries, comparing conventional iodinated contrast with a liposomal blood pool contrast agent. Secondly, visualization of the (real-time) therapeutic effects of tissue plasminogen activator (t-PA) on pulmonary embolism (PE) was attempted.

Materials and Methods: Six rabbits (approximately 4 kg) had autologous blood clots injected through the superior vena cava. Imaging was performed using conventional contrast (iohexol, 350 mg I/mL; GE HealthCare, Princeton, NJ) at a dose of 1400 mg I per animal; after wash-out, animals were imaged using an iodinated liposomal blood pool agent (88 mg I/mL, dose 900 mg I/animal). Subsequently, five animals were injected with 2 mg t-PA and imaging continued for up to 4.5 hours.

Results: Both contrast agents identified PE in the pulmonary trunk and main pulmonary arteries in all rabbits. The liposomal blood pool agent yielded uniform enhancement, which remained relatively constant throughout the experiments. The conventional agent exhibited non-uniform opacification and rapid clearance after injection. Three of six rabbits had mistimed bolus injections, requiring repeat injections. Following t-PA, pulmonary embolus volume (central to segmental) decreased in four of five treated rabbits (range 10–57%, mean 42%). One animal showed no response to t-PA.

Conclusions: Liposomal blood pool agents effectively identified acute PE without need for re-injection. PE resolution following t-PA was quantifiable over several hours. Blood pool agents offer the potential for repeated imaging procedures without the need for repeated (nephrotoxic) contrast injections.

Relevance: 60.00%

Abstract:

Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifacts. For these reasons, the assessment of detector uniformity is one of the most common activities associated with a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the determination of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1).

This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to reviewers so that they may be better equipped to identify performance degradation prior to its manifestation in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions of non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time series of flood images with induced, progressively worsening defects present within the field of view. The sensitivity of conventional, global figures of merit for detecting changes in uniformity was evaluated and compared to that of the new image-space techniques.

The image-space algorithms provide a reproducible means of detecting regions of non-uniformity before any single flood image has a NEMA uniformity value in excess of 5%. Their sensitivity was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures of merit demonstrated their sensitivity to shifts in detector uniformity. Because the image-space algorithms are computationally efficient, they should be used concomitantly with trending of the global figures of merit in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
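
For concreteness, the 5% criterion above refers to the NEMA integral uniformity, a conventional global figure of merit. A minimal sketch of its computation on a flood image follows; it omits the UFOV/CFOV masking that the full NEMA NU-1 procedure prescribes:

```python
import numpy as np
from scipy.ndimage import convolve

def nema_integral_uniformity(flood):
    """NEMA integral uniformity of a flood image (illustrative sketch).

    IU = 100 * (max - min) / (max + min), computed on a smoothed image;
    NEMA NU-1 prescribes a 9-point weighted smoothing kernel before
    taking the extrema.
    """
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float)
    kernel /= kernel.sum()
    smoothed = convolve(flood.astype(float), kernel, mode="nearest")
    hi, lo = smoothed.max(), smoothed.min()
    return 100.0 * (hi - lo) / (hi + lo)

# a flood whose IU trends above 5% would flag the detector for service
```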

Relevance: 60.00%

Abstract:

Current methods for detection of copy number variants (CNV) and aberrations (CNA) from targeted sequencing data are based on the depth of coverage of captured exons. Accurate CNA determination is complicated by uneven genomic distribution and non-uniform capture efficiency of targeted exons. Here we present CopywriteR, which eludes these problems by exploiting 'off-target' sequence reads. CopywriteR allows for extracting uniformly distributed copy number information, can be used without reference, and can be applied to sequencing data obtained from various techniques including chromatin immunoprecipitation and target enrichment on small gene panels. CopywriteR outperforms existing methods and constitutes a widely applicable alternative to available tools.
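
The core trick, building copy-number profiles from reads that fall outside the capture targets, can be illustrated generically. The sketch below bins off-target read positions and forms median-centered log2 ratios; it is plain Python for illustration, since CopywriteR itself is an R package and this is not its API:

```python
import numpy as np

def bin_counts(read_starts, chrom_length, bin_size=20_000):
    """Count reads per genomic bin from read start coordinates."""
    edges = np.arange(0, chrom_length + bin_size, bin_size)
    counts, _ = np.histogram(read_starts, bins=edges)
    return counts

def log2_copy_ratio(sample_counts, control_counts, pseudo=0.5):
    """Median-centered log2 ratio of sample vs control bin counts."""
    ratio = np.log2((sample_counts + pseudo) / (control_counts + pseudo))
    return ratio - np.median(ratio)

# off-target analysis: in practice the capture targets (plus padding)
# are masked first, so only reads falling outside them are counted
```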

Relevance: 60.00%

Abstract:

Purpose: To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view, and non-uniform illumination. The use of slit lamp images for documentation and analysis thus remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality.

Methods: Our method is composed of three parts: (i) viable content segmentation, (ii) global registration, and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pairwise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics.

Results: Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results from our method and from state-of-the-art methods were compared and rated by ophthalmologists, who showed a strong preference for the large field of view provided by our method.

Conclusions: The proposed method for globally registering retinal slit lamp images into comprehensive mosaics improves on state-of-the-art methods and is preferred qualitatively.
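
A minimal sketch of the pairwise registration step is given below. The paper uses speeded-up robust features and learned content segmentation; this illustration substitutes ORB features (available in stock OpenCV) and skips segmentation, so it approximates the described step rather than reimplementing it:

```python
import cv2
import numpy as np

def pairwise_translation(img_a, img_b, max_features=1000):
    """Estimate the translation between two frames (illustrative sketch).

    A RANSAC-estimated partial affine transform is reduced to its
    translation component, matching the pairwise-translation model
    described in the abstract.
    """
    orb = cv2.ORB_create(max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    M, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b,
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    return M[:, 2]   # (tx, ty) translation component

# chaining pairwise translations gives initial mosaic placements,
# which global bundle adjustment would then refine
```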

Relevance: 60.00%

Abstract:

Next-generation sequencing (NGS) technology has become a prominent tool in biological and biomedical research. However, NGS data analysis, such as de novo assembly, mapping, and variant detection, is far from mature, and the high sequencing error rate is one of the major problems. To minimize the impact of sequencing errors, we developed a highly robust and efficient method, MTM, to correct the errors in NGS reads. We demonstrated the effectiveness of MTM on both single-cell data with highly non-uniform coverage and normal data with uniformly high coverage, showing that MTM's performance does not rely on the coverage of the sequencing reads. MTM was also compared with Hammer and Quake, the best methods for correcting non-uniform and uniform data, respectively. For non-uniform data, MTM outperformed both Hammer and Quake. For uniform data, MTM showed better performance than Quake and comparable results to Hammer. By making better error corrections with MTM, the quality of downstream analysis, such as mapping and SNP detection, was improved. SNP calling is a major application of NGS technologies. However, the existence of sequencing errors complicates this process, especially for the low coverage (
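
The abstract does not describe MTM's algorithm, but error correctors in this family (including Hammer and Quake) typically start from a k-mer spectrum in which rare k-mers signal likely errors. The toy sketch below shows that starting point and hints at why a fixed abundance cutoff breaks down under non-uniform coverage:

```python
from collections import Counter

def kmer_spectrum(reads, k=15):
    """Count k-mer occurrences across a set of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_weak_kmers(counts, cutoff=3):
    """k-mers seen fewer than `cutoff` times are likely sequencing
    errors when coverage is uniformly high; in single-cell
    (non-uniform) data a fixed cutoff misfires on genuinely
    low-coverage regions, which is why coverage-robust methods
    are needed."""
    return {kmer for kmer, c in counts.items() if c < cutoff}

# usage:
# counts = kmer_spectrum(["ACGTACGTACGTACGT", ...], k=15)
# weak = flag_weak_kmers(counts)
```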

Relevance: 60.00%

Abstract:

The erosivity index (EI30) and its spatial distribution were determined for the catchments contributing to the hydroelectric system of the Cachoeira Dourada reservoir, located between the states of Goiás and Minas Gerais and bounded by coordinates 640000-760000 m W and 7910000-7975000 m S, UTM zone 22, Córrego Alegre datum. Mean monthly and annual precipitation data from eight localities over a thirty-year period were processed. Precipitation is irregularly distributed across the region, and consequently the erosivity indices are spatialized non-uniformly over the reservoir's area of influence. The highest precipitation values coincide with the period of land preparation for cultivation and with the development of annual-cycle crops, mainly soybean and maize.
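
The abstract does not state which regression links monthly rainfall to EI30, but Fournier-type relations of the form EI = a(r²/P)^b are widely applied in Brazilian erosivity studies. The sketch below uses coefficients from one published calibration (Lombardi Neto & Moldenhauer) purely as illustrative defaults, not as this study's actual model:

```python
def monthly_erosivity(r_month, p_annual, a=67.355, b=0.85):
    """Fournier-type monthly erosivity estimate (illustrative sketch).

    EI = a * (r^2 / P)^b, with r the mean monthly rainfall (mm) and
    P the mean annual rainfall (mm). The coefficients a, b must be
    calibrated regionally; the defaults here are one calibration
    published for Brazil and are assumptions, not necessarily what
    this study used. Units: MJ mm ha^-1 h^-1.
    """
    return a * (r_month ** 2 / p_annual) ** b

def annual_erosivity(monthly_rain):
    """Sum monthly EI values to estimate the annual erosivity factor."""
    p = sum(monthly_rain)
    return sum(monthly_erosivity(r, p) for r in monthly_rain)

# example with a hypothetical rainfall regime (mm per month):
# annual_erosivity([250, 220, 200, 90, 40, 15, 10, 15, 50, 130, 190, 240])
```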