950 results for Gaussian random fields



Abstract:

This study summarizes the results of a survey designed to provide economic information about the financial status of commercial reef fish boats with homeports in the Florida Keys. A survey questionnaire was administered in the summer and fall of 1994 by interviewers in face-to-face meetings with owners or operators of randomly selected boats. Fishermen were asked for background information about themselves and their boats, their capital investments in boats and equipment, and their average catches, revenues, and costs per trip for their two most important kinds of fishing trips during 1993 for species in the reef fish fishery. Respondents were characterized with regard to their dependence on the reef fish fishery as a source of household income. Boats were described in terms of their physical and financial characteristics. Different kinds of fishing trips were identified by the species that generated the greatest revenue. Trips were grouped into the following categories: yellowtail snapper (Ocyurus chrysurus); mutton snapper (Lutjanus analis), black grouper (Mycteroperca bonaci), or red grouper (Epinephelus morio); gray snapper (Lutjanus griseus); deeper-water groupers and tilefishes; greater amberjack (Seriola dumerili); spiny lobster (Panulirus argus); king mackerel (Scomberomorus cavalla); and dolphin (Coryphaena hippurus). Average catches, revenues, routine trip costs, and net operating revenues per boat per trip and per boat per year were estimated for each category of fishing trips. In addition to their descriptive value, the data collected during this study will aid future examinations of the economic effects of various regulations on commercial reef fish fishermen. (PDF file contains 48 pages.)


Abstract:

In the hybrid approach combining large-eddy simulation (LES) with Lighthill's acoustic analogy for turbulence-generated sound, the turbulence source fields are obtained from an LES and the far-field sound is calculated from Lighthill's acoustic analogy. Because only the velocity fields at resolved scales are available from the LES, the Lighthill stress tensor, which serves as the source term in Lighthill's acoustic equation, must be evaluated from the resolved velocity fields. As a result, the contribution from the unresolved velocity fields is missing in a conventional LES. The sound of the missing scales is shown to be important and hence needs to be modeled. The present study proposes a kinematic subgrid-scale (SGS) model which recasts the unresolved velocity fields into Lighthill stress tensors. A kinematic simulation is used to construct the unresolved velocity fields with imposed temporal statistics consistent with the random-sweeping hypothesis. The kinematic SGS model is used to calculate sound power spectra from isotropic turbulence and yields an improved result: the missing portion of the sound power spectra is approximately recovered in the LES.
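The kinematic-simulation idea behind such an SGS model can be illustrated with a small sketch: a synthetic incompressible velocity field built as a sum of random Fourier modes, with mode frequencies scaling as u_rms·|k| as the random-sweeping hypothesis suggests. This is an illustrative reconstruction, not the authors' implementation; the mode count, spectrum exponent, and frequency scaling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kinematic_field(n_modes=64, k_min=1.0, k_max=32.0, u_rms=1.0):
    """Build a synthetic incompressible velocity field as a sum of random
    Fourier modes (kinematic simulation). Mode frequencies scale as
    u_rms * |k|, mimicking the random sweeping of the small scales.
    Returns (k, a, b, omega): wavevectors, the two direction coefficients
    of each mode (both orthogonal to k), and the sweeping frequencies."""
    # log-spaced wavenumber magnitudes with isotropic random directions
    kmag = np.exp(rng.uniform(np.log(k_min), np.log(k_max), n_modes))
    kdir = rng.normal(size=(n_modes, 3))
    kdir /= np.linalg.norm(kdir, axis=1, keepdims=True)
    k = kmag[:, None] * kdir
    # Kolmogorov-like amplitude per mode: sqrt(E(k)) with E(k) ~ k^(-5/3)
    amp = u_rms * kmag ** (-5.0 / 6.0)

    def perp(v):
        """Random unit vector orthogonal to v (enforces div u = 0)."""
        r = rng.normal(size=3)
        r -= r.dot(v) / v.dot(v) * v     # project out the k component
        return r / np.linalg.norm(r)

    a = np.array([amp[n] * perp(k[n]) for n in range(n_modes)])
    b = np.array([amp[n] * perp(k[n]) for n in range(n_modes)])
    omega = u_rms * kmag                 # random-sweeping frequency scale
    return k, a, b, omega

def velocity(x, t, k, a, b, omega):
    """Evaluate the synthetic velocity at position x and time t."""
    phase = k @ x + omega * t
    return (a * np.cos(phase)[:, None] + b * np.sin(phase)[:, None]).sum(axis=0)
```

From such a synthetic small-scale field, the missing Lighthill stress contribution u_i u_j can be assembled mode by mode; since every mode coefficient is orthogonal to its wavevector, the field is exactly divergence-free.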


Abstract:

Learning probability distributions from data is a ubiquitous problem in Statistics and Artificial Intelligence. Over recent decades, several algorithms have been proposed for learning probability distributions based on decomposable models, owing to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, learning a maximum likelihood decomposable model with a given maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms that approximates this problem with a worst-case computational complexity of O(k · n^2 log n), where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy that exploits the particular features of these structures. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one with higher likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches on artificial and real domains and show competitive performance on the maximum likelihood problem. Owing to their low computational complexity, they are especially well suited to high-dimensional domains.
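For the base case k = 2, the maximum likelihood decomposable model is the classical Chow and Liu tree: a maximum-weight spanning tree over pairwise empirical mutual information. A minimal sketch of that base case (helper names are hypothetical; this is not the fractal tree algorithm itself):

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete samples."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Maximum likelihood tree (k = 2): maximum-weight spanning tree over
    pairwise mutual information, built with Kruskal's algorithm and a
    union-find structure. data has one column per variable."""
    n_vars = data.shape[1]
    edges = sorted(
        ((mutual_information(data[:, i], data[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:               # keep the edge only if it joins components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The fractal tree algorithms described in the abstract extend this spanning-tree construction from cliques of size 2 to maximal i-order decomposable graphs for i up to k.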


Abstract:

This thesis presents a technique for obtaining the stochastic response of a nonlinear continuous system. First, the general method of nonstationary continuous equivalent linearization is developed. This technique allows replacement of the original nonlinear system with a time-varying linear continuous system. Next, a numerical implementation is described which allows solution of complex problems on a digital computer. In this procedure, the linear replacement system is discretized by the finite element method. Application of this method to systems satisfying the one-dimensional wave equation with two different types of constitutive nonlinearities is described. Results are discussed for nonlinear stress-strain laws of both hardening and softening types.
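The equivalent-linearization idea can be illustrated on a discrete analogue: a white-noise-driven Duffing oscillator, where Gaussian closure replaces the cubic stiffness with an equivalent linear stiffness chosen self-consistently with the response variance. This is a sketch of the general method only, not the thesis's continuous finite element procedure, and all parameter values are illustrative.

```python
# Stationary equivalent linearization for
#   x'' + 2*zeta*omega*x' + omega^2*(x + eps*x^3) = w(t),
# with white noise E[w(t)w(t')] = 2*D*delta(t - t').
# Gaussian closure replaces the cubic term by an equivalent stiffness:
#   omega_e^2 = omega^2 * (1 + 3*eps*sigma2),
# and the equivalent linear system has stationary variance
#   sigma2 = D / (2*zeta*omega*omega_e^2).
# Iterating these two relations to a fixed point gives the response.

zeta, omega = 0.05, 1.0     # damping ratio, linear natural frequency
eps = 0.5                   # cubic (hardening) nonlinearity strength
D = 1.0                     # white-noise intensity (illustrative)

sigma2 = D / (2 * zeta * omega * omega**2)   # start from the linear response
for _ in range(200):
    omega_e2 = omega**2 * (1 + 3 * eps * sigma2)          # equivalent stiffness
    # under-relaxed update for robust convergence of the fixed point
    sigma2 = 0.5 * sigma2 + 0.5 * D / (2 * zeta * omega * omega_e2)
```

For a hardening law the converged variance is smaller than the purely linear response, consistent with the stiffer effective system; a softening law (eps < 0) gives the opposite trend.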


Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers, and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing "anything," it can perform any arbitrary task. But while such a system can simulate any digital computation, there are many behaviors that are not "computations" in the classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an "active" molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not "energetically incomplete." But the programmable system also needs sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages stronger than the regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or implementable only in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
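The logarithmic-time scaling can be sketched with an idealized counting model: if every insertion site incorporates one monomer per parallel step and each insertion exposes two new sites, the monomer count after k steps is 2^k − 1, so a length-n polymer needs about log2(n) steps. This toy model ignores reaction kinetics entirely and is only meant to illustrate the scaling:

```python
import math

def insertion_growth(target_length):
    """Idealized insertion-based polymer growth: every open insertion site
    accepts one monomer per parallel step, and each insertion creates two
    new sites. Returns the number of steps needed to reach target_length
    monomers. After k steps the chain holds 2**k - 1 monomers, so the
    answer is ceil(log2(target_length + 1))."""
    monomers, sites, steps = 0, 1, 0
    while monomers < target_length:
        monomers += sites   # every open site incorporates a monomer
        sites *= 2          # each insertion exposes two new sites
        steps += 1
    return steps
```

By contrast, a passive system that only adds monomers at the two chain ends needs a number of steps linear in the target length, which is the gap the chain reaction method closes.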

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random-walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.


Abstract:

We propose an experimentally feasible scheme to generate various types of entangled states of light fields using beam splitters and single-photon detectors. Two light beams are incident on two beam splitters, each beam being asymmetrically split into two parts, one of which is supposed to be so weak that it contains at most one photon. The two weak output modes are made to interfere at a third beam splitter. A conditional joint measurement on both weak output modes may then result in entanglement between the other two output modes. The conditions for maximal entanglement are discussed in terms of the concurrence. Several specific examples are also examined.
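Since the abstract quantifies entanglement by the concurrence, a short sketch of Wootters' concurrence for a two-qubit density matrix may be useful; this is standard machinery restricted to the two-qubit case, not the paper's specific derivation for the output modes:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4):
    C = max(0, l1 - l2 - l3 - l4), where l_i are the square roots of the
    eigenvalues of rho * rho_tilde in decreasing order and
    rho_tilde = (sigma_y x sigma_y) rho* (sigma_y x sigma_y)."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    rho_tilde = Y @ rho.conj() @ Y                 # spin-flipped state
    ev = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.clip(ev.real, 0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Maximally entangled (Bell) state: concurrence 1
bell = np.zeros(4)
bell[[0, 3]] = 1 / np.sqrt(2)                      # (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell.conj())

# Product state |00>: concurrence 0
prod = np.zeros(4)
prod[0] = 1.0
rho_prod = np.outer(prod, prod.conj())
```

The conditional states produced by the joint single-photon measurement in such a scheme are two-mode states truncated to at most one photon per mode, so they live in exactly this two-qubit space and their entanglement can be read off from the concurrence.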