918 results for Residence Time Distributions
Abstract:
The presented doctoral research utilizes time-resolved spectroscopy to characterize protein dynamics and folding mechanisms. We resolve millisecond-timescale folding by coupling time-resolved fluorescence energy transfer (trFRET) to a continuous flow microfluidic mixer to obtain intramolecular distance distributions throughout the folding process. We have elucidated the folding mechanisms of two cytochromes---one that exhibits two-state folding (cytochrome
We have also investigated intrachain contact dynamics in unfolded cytochrome
In addition, we have explored the pathway dependence of electron tunneling rates between metal sites in proteins. Our research group has converted cytochrome
Abstract:
The contributions of haematological factors to the distribution and estimation of Eustrongylides africanus larvae densities in Clarias gariepinus and C. anguillaris of the Bida floodplain of Nigeria were documented for the first time. The haematological factors making the most important contributions to the distribution of E. africanus larvae infections in Clarias species are mean corpuscular haemoglobin concentration (MCHC), mean corpuscular haemoglobin (MCH), mean corpuscular volume (MCV) and neutrophil count, in descending order of magnitude, with the manifestations for January, March, September and December being closely related. Five haematological factors (neutrophil, lymphocyte and eosinophil counts; MCH and MCV), each with a positive or negative correlation coefficient (r) between 0.50 and 0.85, contributed to the estimates of E. africanus larvae densities in the wild population of Clarias species.
Abstract:
The single ionization of a He atom by an intense linearly polarized laser field in the tunneling regime is studied by S-matrix theory. When only the first term of the expansion of the S matrix is considered, and the temporal and spatial distribution and the fluctuation of the laser pulse are taken into account, the obtained momentum distribution along the polarization direction of the laser field is consistent with the semiclassical calculation, which considers only tunneling and the interaction between the free electron and the external field. When the second term, which includes the interaction between the core and the free electron, is considered, the momentum distribution shows a complex multipeak structure with a central minimum, and the positions of some peaks are independent of intensity over a certain intensity range, which is consistent with a recent experimental result. Based on our analysis, we attribute the structures observed in the momentum distribution of the He atom to the "soft" collision of the tunneled electron with the core.
Abstract:
In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution, a training distribution that depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.
In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weights on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether the out-of-sample performance will improve in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
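The weighting step described above can be illustrated with plain importance weights (a minimal sketch under assumed, known densities, not the thesis's Targeted Weighting algorithm): each training point x drawn from a density p receives weight q(x)/p(x), so that weighted statistics behave as if the sample came from the desired density q, such as the dual distribution.

```python
import numpy as np

def importance_weights(x, p_density, q_density):
    """Weights that make a sample drawn from p look as if drawn from q."""
    w = q_density(x) / p_density(x)
    return w / w.mean()  # normalize so the average weight is 1

# Toy example: training points drawn uniformly on [0, 1]; the desired
# (hypothetical) distribution is triangular with density 2x on [0, 1].
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100_000)
w = importance_weights(x, lambda t: np.ones_like(t), lambda t: 2 * t)

# The weighted mean of x approximates E_q[x] = 2/3 rather than E_p[x] = 1/2.
print(np.average(x, weights=w))
```

The same weights can multiply per-example losses during training, which is exactly where their downside (variance inflation) appears.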
Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a real dataset as large as the Netflix dataset. Their low computational complexity is their main advantage over previous algorithms proposed in the covariate-shift literature.
In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system, which we call CUBA (Caltech Unsupervised Behavior Analysis), that analyzes behavior in videos of animals with minimal supervision. CUBA detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example according to their genetic line.
Abstract:
The primary objective of this study was to predict the distribution of mesophotic hard corals in the Au‘au Channel in the Main Hawaiian Islands (MHI). Mesophotic hard corals are light-dependent corals adapted to the low light conditions at approximately 30 to 150 m in depth. Several physical factors potentially influence their spatial distribution, including aragonite saturation, alkalinity, pH, currents, water temperature, hard substrate availability and the availability of light at depth. Mesophotic corals and mesophotic coral ecosystems (MCEs) have increasingly been the subject of scientific study because they are being threatened by a growing number of anthropogenic stressors. They are the focus of this spatial modeling effort because the Hawaiian Islands Humpback Whale National Marine Sanctuary (HIHWNMS) is exploring the expansion of its scope—beyond the protection of the North Pacific Humpback Whale (Megaptera novaeangliae)—to include the conservation and management of these ecosystem components. The present study helps to address this need by examining the distribution of mesophotic corals in the Au‘au Channel region. This area is located between the islands of Maui, Lanai, Molokai and Kahoolawe, and includes parts of the Kealaikahiki, Alalākeiki and Kalohi Channels. It is unique, not only in terms of its geology, but also in terms of its physical oceanography and local weather patterns. Several physical conditions make it an ideal place for mesophotic hard corals, including consistently good water quality and clarity because it is flushed by tidal currents semi-diurnally; it has low amounts of rainfall and sediment run-off from the nearby land; and it is largely protected from seasonally strong wind and wave energy. Combined, these oceanographic and weather conditions create patches of comparatively warm, calm, clear waters that remain relatively stable through time. 
Freely available Maximum Entropy modeling software (MaxEnt 3.3.3e) was used to create four separate maps of predicted habitat suitability for: (1) all mesophotic hard corals combined, (2) Leptoseris, (3) Montipora and (4) Porites genera. MaxEnt works by analyzing the distribution of environmental variables where species are present, so it can find other areas that meet all of the same environmental constraints. Several steps (Figure 0.1) were required to produce and validate four ensemble predictive models (i.e., models with 10 replicates each). Approximately 2,000 georeferenced records containing information about mesophotic coral occurrence and 34 environmental predictors describing the seafloor’s depth, vertical structure, available light, surface temperature, currents and distance from shoreline at three spatial scales were used to train MaxEnt. Fifty percent of the 1,989 records were randomly chosen and set aside to assess each model replicate’s performance using Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) values. An additional 1,646 records were also randomly chosen and set aside to independently assess the predictive accuracy of the four ensemble models. Suitability thresholds for these models (denoting where corals were predicted to be present/absent) were chosen by finding where the maximum number of correctly predicted presence and absence records intersected on each ROC curve. Permutation importance and jackknife analysis were used to quantify the contribution of each environmental variable to the four ensemble models.
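The threshold rule described above (the point where correctly predicted presence and absence records are jointly maximized) can be read as maximizing sensitivity plus specificity along the ROC curve. A minimal sketch with hypothetical suitability scores, not actual MaxEnt output:

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the suitability cutoff that maximizes sensitivity + specificity,
    i.e. the ROC point where correct presences and absences jointly peak."""
    pos, neg = labels == 1, labels == 0
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = pred[pos].mean()      # fraction of presences predicted present
        spec = (~pred[neg]).mean()   # fraction of absences predicted absent
        if sens + spec > best_j:
            best_j, best_t = sens + spec, t
    return best_t

# Hypothetical suitability scores; presence records mostly score high.
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1,   1,   1,    0,   1,   0,   0,   0  ])
print(best_threshold(scores, labels))  # → 0.4
```

Checking every observed score as a candidate cutoff is sufficient, since sensitivity and specificity only change at those values.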
Abstract:
In trawl surveys a cluster of fish is caught at each station, and fish caught together tend to have more similar characteristics, such as length, age, stomach contents, etc., than those in the entire population. When this is the case, the effective sample size for estimates of the frequency distribution of a population characteristic can be much smaller than the number of fish sampled during a survey. As examples, it is shown that the effective sample size for estimates of length-frequency distributions generated by trawl surveys conducted in the Barents Sea, off Namibia, and off South Africa is on average approximately one fish per tow. Thus many more fish than necessary are measured at each station (location). One way to increase the effective sample size for these surveys, and hence the precision of the length-frequency estimates, is to reduce tow duration and use the time saved to collect samples at more stations.
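The abstract does not state the formula it uses, but the standard cluster-sampling design effect conveys the idea: with m fish measured per tow and intra-haul correlation rho, the effective sample size is n / (1 + (m - 1) * rho). A sketch under that assumption, with illustrative numbers:

```python
def effective_sample_size(n_fish, fish_per_tow, intra_haul_corr):
    """Standard cluster-sampling design effect: n_eff = n / (1 + (m - 1) * rho).
    (An assumed textbook formula, not taken from the abstract itself.)"""
    design_effect = 1 + (fish_per_tow - 1) * intra_haul_corr
    return n_fish / design_effect

# Illustrative survey: 50 tows, 100 fish measured per tow. With perfect
# within-tow correlation (rho = 1) the 5000 measurements carry the
# information of only 50 independent fish -- one per tow, matching the
# "approximately one fish per tow" average reported above.
print(effective_sample_size(50 * 100, 100, 1.0))  # → 50.0
```

The formula also shows why shorter tows at more stations help: reducing m shrinks the design effect even when the same total number of fish is measured.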
Abstract:
Unequivocal Eocene suckers from China are reported here for the first time. This discovery demonstrates that catostomids of the Eocene Epoch (some 55-35 Ma ago) were scattered widely across mainland Asia as well as western North America. The present-day disjunct distribution pattern of catostomids, with 68 extant species widespread in North America and the northern part of Middle America and only two in restricted areas of Asia, is the result of their post-Eocene decline in Asia due to competitive pressure from cyprinids, their Late Cenozoic radiation in North America, and the vicariant and dispersal events triggered by the changing biogeographic landscape. All of this proves to be a historical product of the geological, biological, and climatic changes throughout the Cenozoic.
Abstract:
Modes in a microsquare resonator slab with a strong vertical waveguide consisting of air/semiconductor/air are analyzed by three-dimensional (3-D) finite-difference time-domain simulation and compared with those of a two-dimensional (2-D) simulation under the effective index approximation. Mode frequencies and field distributions inside the resonator obtained by the 3-D simulation are in good agreement with those of the 2-D approximation. However, field distributions at the boundary of the resonator obtained by the 3-D simulation differ from those of the 2-D simulation; in particular, the vertical field distribution near the boundary differs greatly from that of the slab waveguide used in the effective index approximation. Furthermore, the quality factors obtained by the 3-D simulation are much larger than those obtained by the 2-D simulation for a square resonator slab with a strong vertical waveguide.
Abstract:
Quality factor enhancement due to mode coupling is observed in a three-dimensional microdisk resonator. The microdisk, which is vertically sandwiched between air and a substrate, with a radius of 1 μm, a thickness of 0.2 μm, and a refractive index of 3.4, is considered in a finite-difference time-domain (FDTD) numerical simulation. The quality factor of the fundamental mode HE(7,1) decreases as the refractive index of the substrate, n_sub, increases from 2.0 to 3.17. However, the quality factor of the first-order mode HE(7,2) reaches a peak value at n_sub = 2.7 because of mode coupling between the fundamental and first-order modes. The variation of the mode field distributions due to the mode coupling is also observed. This mechanism may be used to realize high-quality-factor modes in microdisks with high-refractive-index substrates. (c) 2006 Optical Society of America.
Abstract:
Within the dinuclear system concept, the quasifission rate from the Kramers formula has been incorporated into the master equation in order to study the competition between fusion and quasifission. Mass yields of quasifission products for the three reactions Ca-48 + Pu-244, Ca-48 + U-238, and Fe-58 + Th-232 have been calculated, and the experimental data are reproduced very well, which is a critical test of the existing fusion model. We have also shown the time evolution of the mass distributions of quasifission products, which provides valuable information on the process of competition between fusion and quasifission.
Abstract:
Isotope yield distributions in the multifragmentation regime were studied with high-quality isotope identification, focusing on the intermediate mass fragments (IMFs) produced in semiviolent collisions. The yields were analyzed within the framework of a modified Fisher model. Using the ratio of the mass-dependent symmetry energy coefficient to the temperature, a_sym/T, extracted in previous work, and that of the pairing term, a_p/T, extracted from this work, and assuming that both reflect secondary decay processes, the experimentally observed isotope yields were corrected for these effects. For a given I = N - Z value, the corrected yields of isotopes relative to the yield of C-12 show a power law distribution Y(N,Z)/Y(C-12) ~ A^(-tau) in the mass range 1 <= A <= 30, and the distributions are almost identical for the different reactions studied. The observed power law distributions change systematically as I of the isotopes changes, and the extracted tau value decreases from 3.9 to 1.0 as I increases from -1 to 3. These observations are well reproduced by a simple deexcitation model, with which the power law exponent of the primary isotopes is determined to be tau_prim = 2.4 +/- 0.2, suggesting that the disassembling system at the time of fragment formation is indeed at, or very near, the critical point.
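An exponent tau in a power law Y(N,Z)/Y(C-12) ~ A^(-tau), as used above, is commonly extracted by a least-squares fit in log-log space. A minimal sketch on synthetic, noiseless yields (the numbers are illustrative, not data from the paper):

```python
import numpy as np

def fit_power_law_exponent(A, Y):
    """Least-squares slope of log Y versus log A; returns tau in Y ~ A**(-tau)."""
    slope, _intercept = np.polyfit(np.log(A), np.log(Y), 1)
    return -slope

# Synthetic relative yields following A^(-2.4), the primary-isotope
# exponent quoted above, over the mass range 1 <= A <= 30.
A = np.arange(1, 31, dtype=float)
Y = A ** -2.4
print(fit_power_law_exponent(A, Y))  # ≈ 2.4
```

With real, noisy yields one would also weight the fit by the per-point uncertainties, which a plain `polyfit` does not do.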
Abstract:
Gas chromatography-mass spectrometry (GC-MS) with electron ionization and positive-ion chemical ionization, and comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOF-MS), were applied to characterize the chemical composition of complex hydrocarbons in the non-polar neutral fraction of cigarette smoke condensates. Automated data processing by the TOF-MS software, combined with structured chromatograms and manual review of library hits, was used to assign the components from the GC×GC-TOF-MS analysis. The distributions of aliphatic hydrocarbons and aromatics were also investigated. Over 100 isoprenoid hydrocarbons were detected, including carotene degradation products, phytadiene isomers and carbocyclic diterpenoids. A total of 1800 hydrocarbons were tentatively identified, including aliphatic hydrocarbons, aromatics, and isoprenoid hydrocarbons. Far more hydrocarbons were identified by GC×GC-TOF-MS than by GC-MS. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e. a density model that assumes only pairwise dependencies between variables, with the graph of these dependencies forming a spanning tree. The original algorithm is quadratic in the dimension of the domain and linear in the number of data points that define the target distribution $P$. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory that is jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, takes advantage of the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual informations, achieving speed-ups of up to 2-3 orders of magnitude in the experiments.
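The classic (quadratic) Chow-Liu procedure that the acCL algorithm accelerates can be sketched directly: estimate the mutual information of every variable pair, then build a maximum-weight spanning tree over those weights. This is a generic sketch of the original algorithm on toy binary data, not the acCL implementation:

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information between two discrete data columns."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Maximum-weight spanning tree over pairwise mutual informations,
    built with Kruskal's algorithm (union-find with path compression)."""
    d = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for _w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data: column 1 copies column 0; column 2 is independent noise.
rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, 2000)
data = np.column_stack([x0, x0, rng.integers(0, 2, 2000)])
print(chow_liu_tree(data))  # the strongly coupled pair (0, 1) is chosen first
```

The double loop over pairs is exactly the quadratic cost in the number of variables that acCL avoids for sparse data.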
Abstract:
This work addresses two related questions. The first question is what joint time-frequency energy representations are most appropriate for auditory signals, in particular, for speech signals in sonorant regions. The quadratic transforms of the signal are examined, a large class that includes, for example, the spectrograms and the Wigner distribution. Quasi-stationarity is not assumed, since this would neglect dynamic regions. A set of desired properties is proposed for the representation: (1) shift-invariance, (2) positivity, (3) superposition, (4) locality, and (5) smoothness. Several relations among these properties are proved: shift-invariance and positivity imply the transform is a superposition of spectrograms; positivity and superposition are equivalent conditions when the transform is real; positivity limits the simultaneous time and frequency resolution (locality) possible for the transform, defining an uncertainty relation for joint time-frequency energy representations; and locality and smoothness trade off via the 2-D generalization of the classical uncertainty relation. The transform that best meets these criteria is derived, which consists of two-dimensionally smoothed Wigner distributions with (possibly oriented) 2-D Gaussian kernels. These transforms are then related to time-frequency filtering, a method for estimating the time-varying 'transfer function' of the vocal tract, which is somewhat analogous to cepstral filtering generalized to the time-varying case. Natural speech examples are provided. The second question addressed is how to obtain a rich, symbolic description of the phonetically relevant features in these time-frequency energy surfaces, the so-called schematic spectrogram. Time-frequency ridges, the 2-D analog of spectral peaks, are one feature that is proposed.
If non-oriented kernels are used for the energy representation, then the ridge tops can be identified with zero-crossings of the inner product of the gradient vector and the direction of greatest downward curvature. If oriented kernels are used, the method can be generalized to give better orientation selectivity (e.g., at intersecting ridges) at the cost of poorer time-frequency locality. Many speech examples are given showing the performance for some traditionally difficult cases: semi-vowels and glides, nasalized vowels, consonant-vowel transitions, female speech, and imperfect transmission channels.
Abstract:
The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldorsson is studied on very large graphs. We compare the performance of this algorithm with that of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that, on randomly generated graphs, minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
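A greedy clique heuristic of the kind compared above can be sketched generically (an illustrative version, not the paper's implementation): repeatedly add the highest-degree vertex that remains adjacent to every vertex already chosen.

```python
import numpy as np

def greedy_clique(adj):
    """Greedy clique heuristic: at each step add the highest-degree candidate
    vertex, then restrict candidates to its neighbors so the set stays a clique."""
    n = len(adj)
    clique = []
    candidates = set(range(n))
    while candidates:
        v = max(candidates, key=lambda u: adj[u].sum())
        clique.append(v)
        candidates = {u for u in candidates if u != v and adj[v][u]}
    return sorted(clique)

# Toy graph: vertices 0, 1, 2 form a triangle; vertex 3 hangs off vertex 0.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
print(greedy_clique(adj))  # → [0, 1, 2]
```

Like all such heuristics, this gives no approximation guarantee; the fragility to distribution changes reported above comes from exactly this kind of degree-driven tie-breaking.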