987 results for ARTIFICIAL MULTIPLE TETRAPLOID
Learning new articulator trajectories for a speech production model using artificial neural networks
Abstract:
The present work deals with the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite, homogeneous, isotropic, non-magnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e. interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple-scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ, the linear dimension a of a particle, and the linear dimension D of the region occupied by the particles. It is found that for constant λ/a, D is proportional to λ, and that |Δχ|, where Δχ is the difference in dielectric susceptibility between particle and medium, has to lie within a certain range.
The total scattered field is obtained as a series whose terms represent the successive multiple-scattering orders, the first term being the single-scattering contribution. The ensemble average of the total scattered intensity is then obtained as a series containing no cross terms between different orders. Thus the waves corresponding to different orders are independent and their Stokes parameters add.
The second- and third-order intensity terms are computed explicitly, and the method used suggests a general approach for computing any order. It is found that in general the first-order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20); for large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.
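The forward peaking of the single-scattering pattern, and its strengthening with ka, can be illustrated with the standard Rayleigh-Gans form factor for a homogeneous sphere. This is a minimal numerical sketch, not the paper's full multiple-scattering computation; the function name `rg_phase` and the sampled ka values are illustrative.

```python
import math

def rg_phase(theta, ka):
    """Rayleigh-Gans form factor for a homogeneous sphere of size
    parameter ka, evaluated at scattering angle theta (radians).
    Normalized so that the forward direction (theta = 0) gives 1."""
    u = 2.0 * ka * math.sin(theta / 2.0)
    if u == 0.0:
        return 1.0  # limiting value as u -> 0
    return (3.0 * (math.sin(u) - u * math.cos(u)) / u**3) ** 2

# Ratio of forward to side scattering grows with ka:
for ka in (1.0, 5.0, 20.0):
    forward = rg_phase(0.0, ka)
    side = rg_phase(math.pi / 2.0, ka)
    print(ka, forward / max(side, 1e-300))
```

The ratio printed in the last column increases rapidly with ka, consistent with the enhanced forward scattering described above.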
The first-order polarization of the scattered wave is determined, and the ensemble average of the Stokes parameters of the scattered wave is computed explicitly for the second order; a similar method can be applied to any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized, the first-order scattered wave is elliptically polarized, except in the Θ = π/2 direction, where it is linearly polarized. If the incident wave is circularly polarized, the first-order scattered wave is elliptically polarized except in the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized, the first-order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, regardless of the incident polarization; however, it does not alter the handedness of the total scattered wave. Higher orders have effects similar to those of the second order.
If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_im D, where k_im is the imaginary part of the wave number k and D is a characteristic linear dimension of the region occupied by the particles. Thus for moderately extended regions and small losses, (k_im D)² ≪ 1, and the lossy character of the medium does not alter the results of the lossless case. In general the presence of losses tends to reduce the forward scattering.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard, and this challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate, on a common desktop computer, as many OCT images of objects with arbitrary structure and shape as we want; it also provides the underlying ground truth of the simulated images, because we dictate it at the start of the simulation. This is one of the key contributions of this thesis. Building such a powerful simulation tool required a thorough understanding of the signal-formation process, careful implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
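The core of any such simulator is a photon random walk through scattering tissue. The following is a deliberately simplified 1-D sketch, not the thesis's simulator: it uses a homogeneous slab, isotropic scattering, and plain (unweighted) photons, omitting the importance-sampling/photon-splitting machinery that the thesis uses to concentrate computation on detection-relevant paths. All parameter values are made up.

```python
import math
import random

def trace_photon(mu_t, albedo, slab_depth, rng):
    """Random walk of one photon in a homogeneous slab spanning
    depths 0..slab_depth. mu_t is the total interaction coefficient,
    albedo the scattering probability per interaction.
    Returns 'reflected', 'transmitted', or 'absorbed'."""
    z, cos_z = 0.0, 1.0                       # start at the surface, heading down
    while True:
        s = -math.log(rng.random()) / mu_t    # free path ~ Exponential(mu_t)
        z += cos_z * s
        if z < 0.0:
            return "reflected"                # exits back: contributes to the signal
        if z > slab_depth:
            return "transmitted"
        if rng.random() > albedo:
            return "absorbed"
        cos_z = 2.0 * rng.random() - 1.0      # isotropic scattering (toy choice)

rng = random.Random(0)
counts = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(20000):
    counts[trace_photon(mu_t=10.0, albedo=0.9, slab_depth=1.0, rng=rng)] += 1
print(counts)
```

Because only the back-exiting photons carry signal, unbiased sampling like this wastes most photons on absorbed or transmitted paths; that waste is exactly what importance sampling and photon splitting are designed to eliminate.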
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
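The classify-then-regress hierarchy can be sketched as follows. Everything here is a stand-in: the synthetic (image, truth) data, the nearest-centroid classifier, and the per-structure least-squares regressors are illustrative substitutes for the thesis's committee-of-experts models, shown only to make the two-stage dispatch concrete.

```python
import numpy as np

# Synthetic stand-in for (image, truth) pairs: feature vectors x,
# a structure class s in {1, 2, 3} (number of layers), and s layer lengths.
rng = np.random.default_rng(0)
n, d = 900, 8
s_true = rng.integers(1, 4, size=n)
X = rng.normal(size=(n, d))
X[:, 0] += 3.0 * s_true                       # make the structure learnable

# Stage 1: classify the structure (nearest centroid, a toy classifier).
centroids = {s: X[s_true == s].mean(axis=0) for s in (1, 2, 3)}

# Stage 2: one multi-output regressor per structure (least squares).
def fit_ls(A, B):
    A1 = np.hstack([A, np.ones((len(A), 1))])           # intercept column
    return np.linalg.lstsq(A1, B, rcond=None)[0]

# Toy ground truth: layer lengths depend linearly on the features.
W = {s: fit_ls(X[s_true == s], 2.0 * X[s_true == s, 1:1 + s] + 1.0)
     for s in (1, 2, 3)}

def reconstruct(x):
    """Dispatch: pick the structure, then use that structure's regressor."""
    s = min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
    return s, np.append(x, 1.0) @ W[s]                  # predicted layer lengths

s_hat, lengths = reconstruct(X[0])
acc = np.mean(np.array([reconstruct(x)[0] for x in X]) == s_true)
print(s_hat, lengths, acc)
```

The design point is that the second stage never sees images of the wrong structure, so each regressor solves a fixed-dimensional problem; the price is that a stage-1 misclassification routes the image to a regressor trained for a different layer count.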
It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, noisy, and uninterpretable to human eyes, they still carry enough information for a well-trained machine learning model to recover precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
A specklegram in a multimode fiber (MMF) has successfully been used as a sensor for detecting external disturbance. Our experiments showed that the sensitivity of the sensor with a multiple-longitudinal-mode laser as its source was much higher than with a single-longitudinal-mode laser. In addition, near-field pattern observations indicated that the coupling between different transverse modes in the MMF is quite weak. Based on the experimental results, a theoretical model for the speckle formation is proposed, taking a bend-caused phase factor into consideration. The theoretical analysis shows that interference between different longitudinal modes makes the larger contribution to the specklegram signals. (C) 2007 Optical Society of America.
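The basic mechanism, modal interference perturbed by a bend-induced phase, can be sketched with a toy model. Everything here is assumed for illustration: the mode count, random amplitudes and phases, and the per-mode bend phase k·ε·b_m; a single wavelength is used, so the multi-longitudinal-mode enhancement reported above is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50                                   # number of guided transverse modes (toy)
amp = rng.random(M)                      # modal amplitudes (toy)
phi0 = rng.uniform(0.0, 2.0 * np.pi, M)  # static modal phases (toy)
b = rng.normal(size=M)                   # per-mode bend sensitivity (toy)

def speckle(epsilon, k=2.0 * np.pi / 1.55e-6):
    """On-axis intensity of the modal superposition when a bend adds a
    mode-dependent phase k * epsilon * b_m to each transverse mode."""
    field = np.sum(amp * np.exp(1j * (phi0 + k * epsilon * b)))
    return abs(field) ** 2

I0 = speckle(0.0)
I1 = speckle(1e-7)   # 100 nm equivalent bend perturbation
print(I0, I1)
```

Even a sub-wavelength perturbation shifts the modal phase relations enough to change the detected intensity, which is why specklegram sensors respond to very small disturbances.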
Abstract:
A specklegram in a multimode fiber has successfully been used as a sensor for detecting mechanical disturbance. Speckles in a multimode pure-silica grapefruit fiber are observed and compared to those of a step-index multimode fiber, revealing different features between the two. The sensitivities of the two kinds of fiber to external disturbance were measured, based on a single-mode-multimode-single-mode (SMS) fiber structure. Experimental results show that the grapefruit fiber has higher sensitivity than the step-index multimode fiber. The transmission spectrum of the grapefruit fiber was measured as well, showing oscillation features that are significantly different from those of a step-index multimode fiber. These experiments may help in understanding the mechanisms of light propagation in grapefruit fibers. (C) 2008 Optical Society of America.
Abstract:
Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.
In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.
The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
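The net-asset-value comparison can be sketched as follows. This is a deliberately simplified model, not the thesis's formulation: earthquakes arrive as a Poisson process at a known annual rate, each event causes a single mean loss (standing in for the assembly-based vulnerability analysis), benefits are a perpetuity, and discounting is continuous; all numbers are made up.

```python
import math

def expected_loss_pv(rate, mean_loss, discount, horizon=math.inf):
    """Present value of expected losses from earthquakes arriving as a
    Poisson process (rate per year), each causing mean_loss, discounted
    continuously at rate `discount`."""
    if math.isinf(horizon):
        return rate * mean_loss / discount
    return rate * mean_loss * (1.0 - math.exp(-discount * horizon)) / discount

def net_asset_value(initial_cost, annual_benefit, rate, mean_loss, discount):
    """NAV = PV(benefits as a perpetuity) - initial cost - PV(expected losses)."""
    return (annual_benefit / discount - initial_cost
            - expected_loss_pv(rate, mean_loss, discount))

# Two hypothetical designs: B costs more up front but suffers smaller losses.
nav_a = net_asset_value(1.0, 0.2, rate=0.02, mean_loss=2.0, discount=0.05)
nav_b = net_asset_value(1.3, 0.2, rate=0.02, mean_loss=0.5, discount=0.05)
print(nav_a, nav_b)   # here the stronger design B has the higher NAV
```

The expected-loss term follows from E[Σ exp(-r·t_i)·L] = ν·E[L]·∫₀ᵀ exp(-r·t) dt for Poisson arrival times t_i; the comparison shows how a higher up-front cost can be justified by reduced expected future losses.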
The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. It is flexible, readily accommodating new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.
Abstract:
Dosidicus gigas is a large pelagic cephalopod of the eastern Pacific that has recently undergone an unexpected, significant range expansion up the coast of North America. The impact that such a range expansion is expected to have on local fisheries and marine ecosystems has motivated a thorough study of this top predator, a squid whose lifestyle has been quite mysterious until recently. Unfortunately, Dosidicus spends daylight hours at depths prohibitive to making observations without significant artificial interference. Observations of this squid's natural behaviors have thus far been considerably limited by the bright illumination and loud noises of remotely operated vehicles, or else the presence of humans from boats or with SCUBA. However, recent technological innovations have allowed observations to take place in the absence of humans, or of significant human intrusion, through the use of animal-borne devices such as National Geographic's CRITTERCAM. Utilizing the advanced video-recording and data-logging technology of this device, this study seeks to characterize unknown components of Dosidicus gigas behavior at depth. Data from two successful CRITTERCAM deployments reveal an assortment of new observations concerning the Dosidicus lifestyle. Tri-axial accelerometers enable a confident description of Dosidicus orientation during ascents, descents, and depth-maintenance behavior, previously not possible with simple depth tags. Video documentation of intraspecific interactions between Dosidicus permits the identification of ten chromatic components, a previously undescribed basal chromatic behavior, and multiple distinct body postures. Finally, based on visualizations of spermatophore release by D. gigas and repetitive behavior patterns between squid pairs, this thesis proposes the existence of a new mating behavior in Dosidicus.
This study intends to provide a first glimpse into the natural behavior of Dosidicus, establishing the groundwork for a comprehensive ethogram to be supported with data from future CRITTERCAM deployments. Cataloguing these behaviors will be useful in accounting for Dosidicus' current range expansion in the northeast Pacific, as well as in informing public interest in the impacts this expansion will have on local fisheries and marine ecosystems.
Abstract:
Hatchling American Alligators (Alligator mississippiensis) produced from artificially incubated wild eggs were returned to their natal areas (repatriated). We compared artificially incubated and repatriated hatchlings released within and outside the maternal alligator’s home range with naturally incubated hatchlings captured and released within the maternal alligator’s home range on Lake Apopka, Lake Griffin, and Orange Lake in Florida. We used probability of recapture and total length at approximately nine months after hatching as indices of survival and growth rates. Artificially incubated hatchlings released outside of the maternal alligator’s home range had lower recapture probabilities than either naturally incubated hatchlings or artificially incubated hatchlings released near the original nest site. Recapture probabilities of other treatments did not differ significantly. Artificially incubated hatchlings were approximately 6% shorter than naturally incubated hatchlings at approximately nine months after hatching. We concluded that repatriation of hatchlings probably would not have long-term effects on populations because of the resiliency of alligator populations to alterations of early age-class survival and growth rates of the magnitude that we observed. Repatriation of hatchlings may be an economical alternative to repatriation of older juveniles for population restoration. However, the location of release may affect subsequent survival and growth.