599 results for image noise modeling
Abstract:
Structural equation modeling (SEM) is a versatile multivariate statistical technique, and its applications have been increasing since its introduction in the 1980s. This paper provides a critical review of 84 articles that use SEM to address construction-related problems over the period 1998–2012, drawn from, but not limited to, seven top construction research journals. A yearly publication trend analysis shows that SEM applications have been accelerating over time. However, there are inconsistencies among the recorded applications, and several recurring problems exist. The important issues that need to be considered in research design, model development, and model evaluation are examined and discussed in detail with reference to current applications. A particularly important issue is construct validity. Relevant topics for efficient research design also include longitudinal versus cross-sectional studies, mediation and moderation effects, sample size, and software selection. A guideline framework is provided to help future researchers in construction SEM applications.
Abstract:
This paper deals with constrained image-based visual servoing of circular and conical spiral motion about an unknown object that is approximated by a single image point feature. Effective visual control of such trajectories has many applications for small unmanned aerial vehicles, including surveillance and inspection, forced landing (homing), and collision avoidance. A spherical camera model is used to derive a novel visual-predictive controller (VPC) using stability-based design methods for general nonlinear model-predictive control. In particular, a quasi-infinite horizon visual-predictive control scheme is derived. A terminal region, used as a constraint in the controller structure, can also guide the selection of appropriate reference image features for spiral tracking with respect to nominal stability and feasibility. Robustness properties are discussed with respect to parameter uncertainty and additive noise. A comparison with competing visual-predictive control schemes is made, and some experimental results using a small quadrotor platform are given.
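As a rough illustration of the receding-horizon principle behind such predictive controllers, the sketch below runs a generic model-predictive loop that tracks a circular reference with a simple 2D integrator model. The model, horizon, cost weights, and reference are assumptions chosen for illustration; this is not the paper's spherical-camera VPC or its terminal-region constraint.

```python
# Illustrative sketch only: a generic receding-horizon (model-predictive) control
# loop tracking a circular reference with a simple 2D integrator model.
# Model, horizon, cost weights, and reference are assumptions, not the paper's VPC.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 10          # sample time and prediction horizon (assumed values)

def simulate(x0, u_seq):
    """Roll the integrator model x_{k+1} = x_k + dt * u_k over the horizon."""
    xs, x = [], np.array(x0, dtype=float)
    for u in u_seq.reshape(N, 2):
        x = x + dt * u
        xs.append(x.copy())
    return np.array(xs)

def cost(u_flat, x0, t0):
    """Quadratic tracking cost against a circular reference, plus control effort."""
    xs = simulate(x0, u_flat)
    ts = t0 + dt * np.arange(1, N + 1)
    ref = np.stack([np.cos(0.5 * ts), np.sin(0.5 * ts)], axis=1)  # circular path
    return np.sum((xs - ref) ** 2) + 0.01 * np.sum(u_flat ** 2)

x, t = np.array([1.5, 0.0]), 0.0
for step in range(50):                      # closed-loop receding-horizon iteration
    res = minimize(cost, np.zeros(2 * N), args=(x, t), method="L-BFGS-B")
    u0 = res.x[:2]                          # apply only the first control move
    x, t = x + dt * u0, t + dt
print("final state:", x)
```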
Abstract:
An early molecular response to DNA double-strand breaks (DSBs) is phosphorylation of the Ser-139 residue within the terminal SQEY motif of the histone H2AX [1,2]. This phosphorylation of H2AX is mediated by the phosphatidylinositol 3-kinase (PI3K) family of proteins: ataxia telangiectasia mutated (ATM), the DNA-dependent protein kinase catalytic subunit, and ATM and RAD3-related (ATR) [3]. The phosphorylated form of H2AX, referred to as γH2AX, spreads from the site of the DSB to adjacent regions of chromatin, forming discrete foci that are easily visualized by immunofluorescence microscopy [3]. Analysis and quantitation of γH2AX foci have been widely used to evaluate DSB formation and repair, particularly in response to ionizing radiation and for evaluating the efficacy of various radiation-modifying and cytotoxic compounds. Given the exquisite specificity and sensitivity of this de novo marker of DSBs, it has provided new insights into the processes of DNA damage and repair in the context of chromatin. For example, in radiation biology the central paradigm is that nuclear DNA is the critical target with respect to radiation sensitivity. Indeed, the general consensus in the field has largely been to view chromatin as a homogeneous template for DNA damage and repair. However, with the use of γH2AX as a molecular marker of DSBs, a disparity in γ-irradiation-induced γH2AX foci formation between euchromatin and heterochromatin has been observed [5-7]. Recently, we used a panel of antibodies against mono-, di- or tri-methylated histone H3 at lysine 9 (H3K9me1, H3K9me2, H3K9me3), which are epigenetic imprints of constitutive heterochromatin and transcriptional silencing, and at lysine 4 (H3K4me1, H3K4me2, H3K4me3), which are tightly correlated with actively transcribing euchromatic regions, to investigate the spatial distribution of γH2AX following ionizing radiation [8]. In accordance with the prevailing ideas regarding chromatin biology, our findings indicated a close correlation between γH2AX formation and active transcription [9]. Here we demonstrate our immunofluorescence method for detection and quantitation of γH2AX foci in non-adherent cells, with a particular focus on co-localization with other epigenetic markers, image analysis and 3D modeling.
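For the image-analysis side, the sketch below shows one generic way to count bright foci in a 2D fluorescence image: smoothing, a global threshold, and connected-component labeling. The synthetic image, filter width, threshold rule, and size cutoff are assumptions, not the protocol's actual quantitation pipeline.

```python
# Illustrative sketch only: counting bright nuclear foci in a 2D fluorescence
# image by smoothing, global thresholding, and connected-component labeling.
# The synthetic image, filter width, threshold rule, and size cutoff are
# assumptions, not the protocol's actual analysis parameters.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(0)
img = rng.normal(10, 2, (256, 256))                   # background noise
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in rng.integers(20, 236, size=(12, 2)):    # add 12 synthetic "foci"
    img += 50 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.5 ** 2))

smoothed = filters.gaussian(img, sigma=1.5)
mask = smoothed > smoothed.mean() + 4 * smoothed.std()          # simple global threshold
labels = measure.label(mask)
foci = [r for r in measure.regionprops(labels) if r.area >= 5]  # discard specks
print("foci detected:", len(foci))
```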
Abstract:
The palette of fluorescent proteins (FPs) has grown exponentially over the past decade, and as a result, live imaging of cells expressing fluorescently tagged proteins is becoming more and more mainstream. Spinning disk confocal (SDC) microscopy is a high-speed optical sectioning technique and a method of choice for observing and analyzing intracellular FP dynamics at high spatial and temporal resolution. In an SDC system, a rapidly rotating pinhole disk generates thousands of points of light that scan the specimen simultaneously, which allows direct capture of the confocal image with low-noise, scientific-grade cooled charge-coupled device cameras and can achieve frame rates of up to 1000 frames per second. In this chapter, we describe important components of a state-of-the-art spinning disk system optimized for live cell microscopy and provide a rationale for specific design choices. We also give guidelines on how other imaging techniques, such as total internal reflection microscopy or spatially controlled photoactivation, can be coupled with SDC imaging and provide a short protocol on how to generate cell lines stably expressing fluorescently tagged proteins by lentivirus-mediated transduction.
Abstract:
It is well known that, for major infrastructure networks such as electricity, gas, railway, road, and urban water networks, disruptions at one point have a knock-on effect throughout the network. There is an impressive number of individual research projects examining the vulnerability of critical infrastructure networks. However, there is little understanding of the totality of the contribution made by these projects and their interrelationships, which makes their review a difficult process for both new and existing researchers in the field. To address this issue, a two-step literature review process is used to provide an overview of the vulnerability of transportation networks in terms of four main themes (research objective, transportation mode, disruption scenario, and vulnerability indicator), involving the analysis of related articles from 2001 to 2013. Two limitations of existing research are identified: (1) the limited number of studies relating to multi-layer transportation network vulnerability analysis, and (2) the lack of evaluation methods to explore the relationship between structural vulnerability and dynamic functional vulnerability. In addition to indicating that more attention needs to be paid to these two aspects in the future, the analysis provides a new avenue for the discovery of knowledge, as well as an improved understanding of transportation network vulnerability.
Abstract:
This paper reviews the use of multi-agent systems (MAS) to model the impacts of high levels of photovoltaic (PV) system penetration in distribution networks and presents some preliminary data obtained from the Perth Solar City high-penetration PV trial. The Perth Solar City trial consists of a low-voltage distribution feeder supplying 75 customers, 29 of whom have rooftop photovoltaic systems. Data are collected from smart meters at each consumer's premises, from data loggers on the low-voltage (LV) side of the transformer, and from a nearby distribution network SCADA measurement point on the high-voltage (HV) side of the transformer. The data will be used to progressively develop the MAS models.
Abstract:
The low- and high-frequency components of a rustling sound, created when prey (a freshly killed frog) was jerkily pulled along dry and wet sandy floors and an asbestos floor, were recorded and played back to individual Indian false vampire bats (Megaderma lyra). Megaderma lyra responded with flight toward the speakers and captured dead frogs that were kept as a reward. The spectral peaks were at 8.6, 7.1 and 6.8 kHz for the low-frequency components of the sounds created on the dry, asbestos and wet floors, respectively. The spectral peaks for the high-frequency sounds created on the respective floors were at 36.8, 27.2 and 23.3 kHz. The sound from the dry floor was more intense than that from the other two substrata. Prey movements that generated sonic or ultrasonic sounds were both necessary and sufficient for the bats to detect and capture prey. The number of successful prey captures was significantly greater for the dry-floor sound, especially for its high-frequency components. Responses were low to the wet-floor sounds and moderate to the asbestos-floor sounds. The bats did not respond to the sound of unrecorded parts of the tape. Even though the bats flew toward the speakers when the prey-generated sounds were played back and captured the dead frogs, we cannot rule out the possibility of M. lyra using echolocation to localize prey. However, the study indicates that prey that move on a dry sandy floor are more vulnerable to predation by M. lyra.
Abstract:
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study in which we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
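A minimal sketch of this kind of classifier comparison, using cross-validation over labelled feature vectors, is shown below. The synthetic data and model settings are assumptions and do not correspond to the Classifynder feature set or the study's actual configuration.

```python
# Illustrative sketch only: comparing several off-the-shelf classifiers on
# labelled feature vectors by cross-validation. Synthetic data and model
# settings are assumptions, not the Classifynder features or the study's setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Stand-in for per-grain image features labelled by species.
X, y = make_classification(n_samples=600, n_features=30, n_informative=15,
                           n_classes=4, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Decision tree": DecisionTreeClassifier(max_depth=8),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)       # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```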
Abstract:
The detection of line-like features in images finds many applications in microanalysis. Actin fibers, microtubules, neurites, pili, DNA, and other biological structures all appear as tenuous curved lines in microscopy images. A reliable tracing method that preserves the integrity and details of these structures is particularly important for quantitative analyses. We have developed a new image transform, called the "Coalescing Shortest Path Image Transform", with very encouraging properties. Our scheme efficiently combines information from an extensive collection of shortest paths in the image to delineate even very weak linear features.
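To illustrate the basic building block of shortest-path line tracing (though not the coalescing transform itself), the sketch below finds a minimum-cost path through a synthetic image containing a faint bright ridge; the cost definition, image, and endpoints are assumptions.

```python
# Illustrative sketch only: tracing a weak curvilinear feature by finding a
# minimum-cost path through an image, a basic building block of shortest-path
# line tracing (not the paper's coalescing transform). Synthetic image and
# endpoints are assumptions.
import numpy as np
from skimage.graph import route_through_array

# Synthetic 100x100 image: a faint bright diagonal ridge in noise.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.3, (100, 100))
for i in range(100):
    img[i, i] += 1.0                      # the faint line to be traced

cost = img.max() - img + 0.01             # bright pixels become cheap to traverse
path, total_cost = route_through_array(cost, (0, 0), (99, 99),
                                       fully_connected=True, geometric=True)
print("traced", len(path), "pixels; total cost", round(total_cost, 2))
```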
Abstract:
The application of robotics to protein crystallization trials has resulted in the production of millions of images. Manual inspection of these images to find crystals and other interesting outcomes is a major rate-limiting step. As a result, there has been intense activity in developing automated algorithms to analyse these images. The very first step for most systems described in the literature is to delineate each droplet. Here, a novel approach that achieves an over 97% success rate with subsecond processing times is presented. This will form the seed of a new high-throughput system to scrutinize massive crystallization campaigns automatically.
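As a generic illustration of droplet delineation (not the paper's algorithm), the following sketch segments a synthetic drop image by smoothing, Otsu thresholding, and keeping the largest connected component; the image and parameters are assumptions.

```python
# Illustrative sketch only: delineating a droplet region in a crystallization
# trial image by smoothing, Otsu thresholding, and keeping the largest
# connected component. Generic recipe, not the paper's method; synthetic
# image and parameters are assumptions.
import numpy as np
from skimage import filters, measure

# Synthetic plate image: a bright circular droplet on a darker background.
yy, xx = np.mgrid[0:200, 0:200]
img = 0.2 + 0.6 * ((yy - 100) ** 2 + (xx - 110) ** 2 < 60 ** 2)
img += np.random.default_rng(2).normal(0, 0.05, img.shape)

smoothed = filters.gaussian(img, sigma=2)
mask = smoothed > filters.threshold_otsu(smoothed)
labels = measure.label(mask)
droplet = max(measure.regionprops(labels), key=lambda r: r.area)  # assume droplet dominates
print("droplet area (pixels):", droplet.area, "centroid:", droplet.centroid)
```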
Abstract:
Non-rigid image registration is an essential tool for overcoming the inherent local anatomical variations that exist between images acquired from different individuals or atlases. Furthermore, certain applications require this type of registration to operate across images acquired from different imaging modalities. One popular local approach to estimating this registration is a block matching procedure utilising the mutual information criterion. However, previous block matching procedures generate a sparse deformation field containing displacement estimates at uniformly spaced locations, neglecting the evidence that block matching results depend on the amount of local information content. This paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo statistical procedure to optimally select grid points of interest. Three different methods for propagating the estimated sparse deformation field to the entire image are then compared: a thin-plate spline warp, Gaussian convolution, and a hybrid fluid technique. Results show that non-rigid registration can be improved by using the proposed algorithm to optimally select grid points of interest.
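A minimal sketch of the mutual-information similarity measure that drives such block matching is given below, scoring candidate displacements of one block via a joint intensity histogram; the patch size, bin count, and synthetic images are assumptions.

```python
# Illustrative sketch only: scoring candidate block displacements with mutual
# information from a joint histogram, the basic similarity measure behind the
# block matching step. Patch size, bin count, and images are assumptions.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two equally sized patches via their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(3)
fixed = rng.random((64, 64))
moving = np.roll(fixed, shift=(2, -1), axis=(0, 1))   # known displacement (2, -1)

block = fixed[20:36, 20:36]                           # 16x16 block in the fixed image
best = max(((dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)),
           key=lambda d: mutual_information(block, moving[20 + d[0]:36 + d[0],
                                                          20 + d[1]:36 + d[1]]))
print("estimated displacement:", best)
```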
Abstract:
Speech recognition in car environments has been identified as a valuable means of reducing driver distraction when operating noncritical in-car systems. Under such conditions, however, speech recognition accuracy degrades significantly, and techniques such as speech enhancement are required to improve these accuracies. Likelihood-maximizing (LIMA) frameworks optimize speech enhancement algorithms based on recognized state sequences rather than traditional signal-level criteria such as maximizing the signal-to-noise ratio. LIMA frameworks typically require calibration utterances to generate optimized enhancement parameters that are then used for all subsequent utterances. Under such a scheme, suboptimal recognition performance occurs in noise conditions that are significantly different from those present during the calibration session, which is a serious problem in rapidly changing noise environments out on the open road. In this chapter, we propose a dialog-based design that allows regular optimization iterations in order to track the ever-changing noise conditions. Experiments using Mel-filterbank noise subtraction (MFNS) are performed to determine the optimization requirements for vehicular environments and show that minimal optimization is required to improve speech recognition, avoid over-optimization, and ultimately assist with semi-real-time operation. It is also shown that the proposed design provides improved recognition performance over frameworks incorporating only a calibration session.
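The sketch below illustrates the generic noise-subtraction operation on filterbank energies that underlies MFNS, without the likelihood-maximizing optimization; the signal, noise estimate, over-subtraction factor, and spectral floor are assumptions, not the chapter's parameterization.

```python
# Illustrative sketch only: simple noise subtraction applied to filterbank
# energies, the generic operation underlying Mel-filterbank noise subtraction.
# Signal, noise estimate, over-subtraction factor, and floor are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_bands = 200, 24

clean = np.abs(rng.normal(1.0, 0.3, (n_frames, n_bands)))     # stand-in speech energies
clean[:10] = 0.0                                              # speech-absent lead-in frames
noise = np.abs(rng.normal(0.5, 0.1, (n_frames, n_bands)))     # stand-in car noise
noisy = clean + noise

noise_estimate = noisy[:10].mean(axis=0)        # estimate noise from the lead-in frames
alpha, floor = 1.0, 0.05                        # over-subtraction factor and spectral floor
enhanced = np.maximum(noisy - alpha * noise_estimate, floor * noisy)

snr = lambda ref, x: 10 * np.log10(np.sum(ref ** 2) / np.sum((x - ref) ** 2))
print(f"SNR before: {snr(clean, noisy):.1f} dB, after: {snr(clean, enhanced):.1f} dB")
```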
Abstract:
The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel, which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison of these two results with the first scan of the gel showed that the averaged and zero-scan images provided better contrast, a higher contrast-to-noise ratio and a higher signal-to-noise ratio than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated compared to the zero-scan result, and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means of verifying the dosimetric accuracy of gel images derived from averaged x-ray CT data.
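A minimal sketch of the zero-scan idea follows: fit each pixel's value as a linear function of scan number and extrapolate back to scan zero, alongside plain averaging. The synthetic scan stack, drift rate, and noise level are assumptions rather than measured gel data.

```python
# Illustrative sketch only: per-pixel linear extrapolation of repeated CT scans
# back to "scan zero", compared with simple averaging. Synthetic stack, drift,
# and noise are assumptions, not measured gel data.
import numpy as np

rng = np.random.default_rng(5)
n_scans, shape = 25, (64, 64)
true_hu = rng.normal(40, 5, shape)                       # underlying dose-related HU
drift_per_scan = rng.normal(0.05, 0.02, shape)           # small per-scan HU drift

scan_idx = np.arange(1, n_scans + 1)
scans = (true_hu[None] + drift_per_scan[None] * scan_idx[:, None, None]
         + rng.normal(0, 3, (n_scans,) + shape))         # noisy repeated scans

averaged = scans.mean(axis=0)                            # simple multi-scan average

# Per-pixel least-squares fit HU(n) = a*n + b; the zero-scan image is the intercept b.
flat = scans.reshape(n_scans, -1)
A = np.vstack([scan_idx, np.ones(n_scans)]).T
coeffs, *_ = np.linalg.lstsq(A, flat, rcond=None)
zero_scan = coeffs[1].reshape(shape)

print("mean HU  true / averaged / zero-scan:",
      true_hu.mean().round(2), averaged.mean().round(2), zero_scan.mean().round(2))
```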
Abstract:
This paper proposes a linear large-signal state-space model for a phase-controlled CLC (capacitor-inductor-capacitor) resonant dual active bridge (RDAB). The proposed model is useful for fast simulation and for the estimation of state variables under large signal variation. The model is also useful for control design, because the slowly changing dynamics of the dq variables are relatively easy to control. Simulation results of the proposed model are presented and compared with those of the simulated circuit model to demonstrate the proposed model's accuracy. The proposed model was used to design a proportional-integral (PI) controller, which has been implemented in the circuit simulation to show the model's usefulness in control design.
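As a generic illustration (not the RDAB dq model or the paper's controller tuning), the sketch below simulates a small linear state-space plant in closed loop with a discrete PI controller; the matrices, gains, and setpoint are assumptions.

```python
# Illustrative sketch only: closed-loop simulation of a generic linear state-space
# model x' = A x + B u, y = C x with a discrete PI controller regulating the output
# to a setpoint. Matrices, gains, and setpoint are assumptions, not the RDAB model.
import numpy as np

dt = 1e-4
A = np.array([[-50.0, -200.0], [200.0, -50.0]])     # stable toy dynamics
B = np.array([[100.0], [0.0]])
C = np.array([[0.0, 1.0]])

x = np.zeros((2, 1))
kp, ki, integral = 0.5, 100.0, 0.0
setpoint = 1.0

for _ in range(5000):                                # 0.5 s of simulated time
    y = (C @ x).item()
    error = setpoint - y
    integral += error * dt
    u = kp * error + ki * integral                   # PI control law
    x = x + dt * (A @ x + B * u)                     # forward-Euler state update
print("final output:", round((C @ x).item(), 3))
```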