940 results for physical limits of resolution.
Abstract:
The super-resolution problem is an inverse problem: it refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image from an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform or the DCT. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing are also eliminated by this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex; hence, a lifting scheme is used for the implementation of directionlets.
The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, this new method, implemented on grey-scale images, is extended to colour images and noisy images.
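The complexity saving mentioned above comes from replacing filter-bank convolutions with in-place predict/update steps. A minimal sketch of the lifting idea, using the Haar wavelet on a 1-D signal of even length (the thesis applies lifting to directionlets, which additionally use a skewed sampling lattice; this is purely illustrative):

```python
def haar_lifting_forward(signal):
    """One level of the Haar wavelet transform computed via lifting.

    Lifting factors the wavelet filter bank into cheap in-place steps,
    roughly halving the arithmetic of the classical convolution form.
    """
    evens = signal[0::2]
    odds = signal[1::2]
    # Predict step: detail coefficients are the odd samples' deviation
    # from their even neighbours.
    details = [o - e for o, e in zip(odds, evens)]
    # Update step: approximation coefficients preserve the local average.
    approx = [e + d / 2 for e, d in zip(evens, details)]
    return approx, details


def haar_lifting_inverse(approx, details):
    """Undo the lifting steps in reverse order (perfect reconstruction)."""
    evens = [a - d / 2 for a, d in zip(approx, details)]
    odds = [e + d for e, d in zip(evens, details)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out
```

Because each step is trivially invertible, reconstruction is exact, which is one reason lifting is attractive for super-resolution pipelines that must not introduce additional degradation.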
Abstract:
The tomato I-3 gene introgressed from the Lycopersicon pennellii accession LA716 confers resistance to race 3 of the fusarium wilt pathogen Fusarium oxysporum f. sp. lycopersici. We have improved the high-resolution map of the I-3 region of tomato chromosome 7 with the development and mapping of 31 new PCR-based markers. Recombinants recovered from L. esculentum cv. M82 × IL7-2 F2 and (IL7-2 × IL7-4) × M82 TC1F2 mapping populations, together with recombinants recovered from a previous M82 × IL7-3 F2 mapping population, were used to position these markers. A significantly higher recombination frequency was observed in the (IL7-2 × IL7-4) × M82 TC1F2 mapping population based on a reconstituted L. pennellii chromosome 7 compared to the other two mapping populations based on smaller segments of L. pennellii chromosome 7. A BAC contig consisting of L. esculentum cv. Heinz 1706 BACs covering the I-3 region has also been established. The new high-resolution map places the I-3 gene within a 0.38 cM interval between the molecular markers RGA332 and bP23/gPT with an estimated physical size of 50-60 kb. The I-3 region was found to display almost continuous microsynteny with grape chromosome 12 but interspersed microsynteny with Arabidopsis thaliana chromosomes 1, 2 and 3. An S-receptor-like kinase gene family present in the I-3 region of tomato chromosome 7 was found to be present in the microsyntenous region of grape chromosome 12 but was absent altogether from the A. thaliana genome.
Abstract:
Published separately in each language.
Abstract:
The development of a highly reliable physical map with landmark sites spaced an average of 100 kbp apart has been a central goal of the Human Genome Project. We have approached the physical mapping of human chromosome 11 with this goal as a primary target. We have focused on strategies that would utilize yeast artificial chromosome (YAC) technology, thus permitting long-range coverage of hundreds of kilobases of genomic DNA, yet we sought to minimize the ambiguities inherent in the use of this technology, particularly the occurrence of chimeric genomic DNA clones. This was achieved through the development of a chromosome 11-specific YAC library from a human somatic cell hybrid line that has retained chromosome 11 as its sole human component. To maximize the efficiency of YAC contig assembly and extension, we have employed an Alu-PCR-based hybridization screening system. This system eliminates many of the more costly and time-consuming steps associated with sequence tagged site content mapping such as sequencing, primer production, and hierarchical screening, resulting in greater efficiency with increased throughput and reduced cost. Using these approaches, we have achieved YAC coverage for >90% of human chromosome 11, with an average intermarker distance of <100 kbp. Cytogenetic localization has been determined for each contig by fluorescent in situ hybridization and/or sequence tagged site content. The YAC contigs that we have generated should provide a robust framework to move forward to sequence-ready templates for the sequencing efforts of the Human Genome Project as well as more focused positional cloning on chromosome 11.
Abstract:
Various types of physical mapping data were assembled by developing a set of computer programs (Integrated Mapping Package) to derive a detailed, annotated map of a 4-Mb region of human chromosome 13 that includes the BRCA2 locus. The final assembly consists of a yeast artificial chromosome (YAC) contig with 42 members spanning the 13q12-13 region and aligned contigs of 399 cosmids established by cross-hybridization between the cosmids, which were selected from a chromosome 13-specific cosmid library using inter-Alu PCR probes from the YACs. The end sequences of 60 cosmids spaced nearly evenly across the map were used to generate sequence-tagged sites (STSs), which were mapped to the YACs by PCR. A contig framework was generated by STS content mapping, and the map was assembled on this scaffold. Additional annotation was provided by 72 expressed sequences and 10 genetic markers that were positioned on the map by hybridization to cosmids.
Abstract:
The ability to carry out high-resolution genetic mapping at high throughput in the mouse is a critical rate-limiting step in the generation of genetically anchored contigs in physical mapping projects and the mapping of genetic loci for complex traits. To address this need, we have developed an efficient, high-resolution, large-scale genome mapping system. This system is based on the identification of polymorphic DNA sites between mouse strains by using interspersed repetitive sequence (IRS) PCR. Individual cloned IRS PCR products are hybridized to a DNA array of IRS PCR products derived from the DNA of individual mice segregating DNA sequences from the two parent strains. Since gel electrophoresis is not required, large numbers of samples can be genotyped in parallel. By using this approach, we have mapped > 450 polymorphic probes with filters containing the DNA of up to 517 backcross mice, potentially allowing resolution of 0.14 centimorgan. This approach also carries the potential for a high degree of efficiency in the integration of physical and genetic maps, since pooled DNAs representing libraries of yeast artificial chromosomes or other physical representations of the mouse genome can be addressed by hybridization of filter representations of the IRS PCR products of such libraries.
Abstract:
Claude Pepper, chairman of subcommittee.
Abstract:
Online technological advances are pioneering the wider distribution of geospatial information for general mapping purposes. The use of popular web-based applications, such as Google Maps, is ensuring that mapping-based applications become commonplace amongst Internet users, which has facilitated the rapid growth of geo-mashups. These user-generated creations enable Internet users to aggregate and publish information over specific geographical points. This article identifies privacy-invasive geo-mashups that involve the unauthorized use of personal information, the inadvertent disclosure of personal information, and invasion-of-privacy issues. Building on Zittrain's Privacy 2.0, the author contends that first-generation information privacy laws, founded on the notions of fair information practices or information privacy principles, may have a limited impact on the resolution of privacy problems arising from privacy-invasive geo-mashups, principally because geo-mashups have different patterns of personal-information provision, collection, storage and use that reflect fundamental changes in the Web 2.0 environment. The author concludes by recommending embedded technical and social solutions to minimize the risks arising from privacy-invasive geo-mashups, which could lead to the establishment of guidelines for the general protection of privacy in geo-mashups.
Abstract:
Axial acoustic wave propagation has been widely used in evaluating the mechanical properties of human bone in vivo. However, application of this technique to monitor soft tissues, such as tendon, has received comparatively little scientific attention. Laboratory-based research has established that axial acoustic wave transmission is not only related to the physical properties of equine tendon but is also proportional to the tensile load to which it is exposed (Miles et al., 1996; Pourcelot et al., 2005). The reproducibility of the technique for in vivo measurements in human tendon, however, has not been established. The aim of this study was to evaluate the limits of agreement for repeated measures of the speed of sound (SoS) in human Achilles tendon in vivo. Methods: A custom-built ultrasound device, consisting of an A-mode 1 MHz emitter and two regularly spaced receivers, was used to measure the SoS in the mid-portion of the Achilles tendon in ten healthy males and ten females (mean age: 33.8 years, range 23-56 yrs; height: 1.73±0.08 m; weight: 68.4±15.3 kg). The emitter and receivers were held at fixed positions by a polyethylene frame and maintained in close contact with the skin overlying the tendon by means of elasticated straps. Repeated SoS measurements were taken with the subject prone (non-weightbearing and relaxed Achilles tendon) and during quiet bipedal and unipedal stance. In each instance, the device was detached and repositioned prior to measurement. Results: Limits of agreement for repeated SoS measures during non-weightbearing and bipedal and unipedal stance were ±53, ±28 and ±21 m/s, respectively. The average SoS in the non-weightbearing Achilles tendon was 1804±198 m/s. There was a significant increase in the average SoS during bilateral (2122±135 m/s) (P < 0.05) and unilateral (2221±79 m/s) stance (P < 0.05). Conclusions: Repeated SoS measures in human Achilles tendon were more reliable during stance than under non-weightbearing conditions.
These findings are consistent with previous research in equine tendon in which lower variability in SoS was observed with increasing tensile load (Crevier-Denoix et al., 2009). Since the limits of agreement for Achilles tendon SoS are nearly 5% of the changes previously observed during walking and therapeutic heel raise exercises, acoustic wave transmission provides a promising new non-invasive method for determining tendon properties during sports and rehabilitation related activities.
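The reliability measure reported above (e.g. ±21 m/s during unipedal stance) is the Bland-Altman 95% limits-of-agreement statistic for paired repeated measurements. A minimal sketch of how it is computed, with made-up SoS readings (the abstract reports only the final ± values, not the underlying data):

```python
import statistics

def limits_of_agreement(first, second):
    """95% limits of agreement (Bland & Altman) for paired repeated
    measurements, e.g. two SoS readings per subject taken after the
    probe is detached and repositioned.

    Returns (bias, lower, upper) in the measurement's units.
    """
    diffs = [a - b for a, b in zip(first, second)]
    bias = statistics.mean(diffs)   # mean trial-to-trial difference
    sd = statistics.stdev(diffs)    # SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

On hypothetical data, `limits_of_agreement([1800, 1810, 1790, 1805], [1805, 1800, 1795, 1795])` returns the bias and the interval expected to contain 95% of trial-to-trial differences; a narrower interval means better repeatability, which is how the stance conditions were compared.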
Abstract:
Oligomeric copper(I) clusters are formed by the insertion reaction of copper(I) aryloxides into heterocumulenes. The effect of varying the steric demands of the heterocumulene and the aryloxy group on the nuclearity of the oligomers formed has been probed. Reactions of copper(I) 2-methoxyphenoxide and copper(I) 2-methylphenoxide with PhNCS result in the formation of the hexameric complexes hexakis[N-phenylimino(aryloxy)methanethiolato copper(I)] 3 and 4, respectively. Single crystal X-ray data confirmed the structure of 3. Similar insertion reactions of CS2 with the copper(I) aryloxides formed by 2,6-di-tert-butyl-4-methylphenol and 2,6-dimethylphenol result in oligomeric copper(I) complexes 7 and 8 bearing the (aryloxy)thioxanthate ligand. Complex 7 was confirmed to be a tetramer by single crystal X-ray crystallography. Reactions carried out with 2-mercaptopyrimidine, which has ligating properties similar to N-alkylimino(aryloxy)methanethiolate, result in the formation of an insoluble polymeric complex 11. The fluorescence spectra of the oligomeric complexes are helpful in determining their nuclearity. It has been shown that a decrease in the steric requirements of either the heterocumulene or aryloxy parts of the ligand can compensate for steric constraints and facilitate oligomerization. (C) 1999 Elsevier Science Ltd. All rights reserved.
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
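For reference, the quantum-mechanical inequality that the classical position-momentum bound above parallels is the standard Heisenberg uncertainty relation, where \(\sigma_x\) and \(\sigma_p\) denote the standard deviations of position and momentum:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

The thesis's contribution is a classical analogue of this relation, derived from measurement back action in lossless dynamical systems rather than from quantum postulates.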
Abstract:
The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.
Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects or much shallower. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.
We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find a significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane dlogRe/dlogM∗ with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
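The diagnostic above, the slope dlogRe/dlogM∗ in the mass-size plane, is an ordinary least-squares slope in log-log space. A hypothetical illustration of how such a slope could be fit to matched progenitor/descendant measurements (the fitting details here are an assumption, not the authors' pipeline; simple virial arguments in the literature predict a steep slope for minor mergers and a shallower one for major mergers):

```python
import math

def growth_slope(masses, radii):
    """Least-squares slope dlog10(Re)/dlog10(M*) for a set of
    (stellar mass, effective radius) pairs.

    Pure-Python OLS fit in log-log space; units cancel in the slope.
    """
    x = [math.log10(m) for m in masses]
    y = [math.log10(r) for r in radii]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Standard OLS slope: covariance(x, y) / variance(x).
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den
```

Comparing the fitted slope against the slopes produced by merger simulations is what lets the thesis discriminate between growth mechanisms.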
By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results compared to fitting models to the photometry alone. We detect a clear relation between size and age, where larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies recently arrived to the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.
Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.
A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.
Abstract:
The adoption of inclusive design principles and methods in design practice is meant to support the equity of use of everyday products by as many people as possible, independently of their age and their physical, sensorial and cognitive capabilities. Although the intention is highly valuable, inclusive design approaches have not been widely applied in industrial contexts. This paper analyses the findings of empirical research conducted with industrial designers and product managers. The research indicates some of the hindrances to the adoption of inclusive design, such as the current way the market is considered and targeted, and the way designers are driven by the project's brief and budget to orient their research strategy and activities. The paper proposes a way to improve the current industrial mode by strategically supplying clients, designers or both together with information about inclusivity. © 2013 Taylor & Francis Group.
Abstract:
We present a study of the nebular phase spectra of a sample of Type II-Plateau supernovae with identified progenitors or restrictive limits. The evolution of line fluxes, shapes and velocities is compared within the sample, and interpreted by the use of a spectral synthesis code. The small diversity within the data set can be explained by strong mixing occurring during the explosion, and by recognizing that most lines have significant contributions from primordial metals in the H envelope, which dominates the total ejecta mass in these types of objects. In particular, when using the [O I] 6300, 6364 Å doublet for estimating the core mass of the star, care has to be taken to account for emission from primordial O in the envelope. Finally, a correlation between the Hα line width and the mass of 56Ni is presented, suggesting that higher energy explosions are associated with higher 56Ni production.