932 results for Models and Principles


Relevance: 90.00%

Abstract:

This panel will discuss the research and the models being used in three current coastal EPA studies of ecosystem services in Tampa Bay, the Chesapeake Bay, and the Coastal Carolinas. These studies are intended to provide a broader and more comprehensive approach to policy and decision-making affecting coastal ecosystems, as well as an account of valued services that have heretofore gone largely unrecognized. Interim research products, including updated and integrated spatial data, models and model frameworks, and interactive decision support systems, will be demonstrated to engage potential users and to elicit feedback. The anticipated near-term impact of the projects is to increase awareness among coastal communities and coastal managers of the implications of their actions and to foster partnerships for ecosystem services research and applications. (PDF contains 4 pages)

Relevance: 90.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
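
As a rough illustration of the kind of computation involved, the sketch below evaluates the probability of successful recovery for symmetric allocations, assuming MDS coding over the stored data and an access model in which the collector reaches each node independently with probability p; the access model, normalization, and parameter values are illustrative assumptions rather than the formulations used in the thesis.

from math import comb, ceil

def recovery_prob_symmetric(budget, m, p):
    """Probability that a collector recovers a unit-size file when a total
    storage budget is spread evenly over m nodes (each storing budget/m) and
    each node is accessed independently with probability p.  Under MDS coding,
    recovery succeeds when the accessed amount reaches 1, i.e. when at least
    ceil(m / budget) of the m nonempty nodes are reached."""
    k = ceil(m / budget)          # minimum number of accessed nodes needed
    if k > m:
        return 0.0                # even accessing every node is not enough
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

# Heuristic from the abstract: spread widely when the budget is large,
# concentrate on a few nodes when it is small.
for budget in (1.2, 3.0):
    probs = {m: recovery_prob_symmetric(budget, m, p=0.6) for m in range(1, 11)}
    best_m = max(probs, key=probs.get)
    print(f"budget={budget}: best symmetric spread is over m={best_m} node(s), "
          f"P(success)={probs[best_m]:.3f}")

With these illustrative numbers the small budget favours concentrating the data on a single node, while the larger budget favours spreading it over many nodes, in line with the heuristic stated above.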

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
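
As a toy illustration of the i.i.d. erasure model only (not of the intrasession or convolutional codes analyzed in the thesis), the sketch below computes the probability that a single message protected by a hypothetical (n, k) MDS code is decodable when each of its n packets is erased independently with probability e.

from math import comb

def decode_prob(n, k, erasure_prob):
    """P(decode) when a message is spread over n packets with an (n, k) MDS
    code: decoding succeeds if at least k packets survive, each packet being
    erased independently with probability erasure_prob."""
    p = 1.0 - erasure_prob
    return sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(k, n + 1))

# Decoding probability rises quickly with the redundancy n - k, especially
# when the erasure probability is small.
for e in (0.01, 0.1):
    print(e, [round(decode_prob(8, k, e), 4) for k in (8, 7, 6, 5)])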

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.

Relevance: 90.00%

Abstract:

This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region in a 3-D space over an interval of time. After the event is initiated, it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach for geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at a low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters from both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous scalable algorithms for data aggregation for detection. These studies provide insights into the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
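
The abstract does not spell out the detection model, so purely as an illustrative sketch of a Bayesian treatment of noisy binary sensors, the code below updates the posterior probability of an event from independent sensor readings; the prior, false-alarm, and missed-detection rates are made-up values, not those of the thesis.

def event_posterior(readings, prior=0.01, p_fa=0.05, p_miss=0.2):
    """Posterior probability that an event is present, given independent
    binary sensor readings (1 = triggered), a per-sensor false-alarm rate
    p_fa and a missed-detection rate p_miss."""
    like_event, like_no_event = 1.0, 1.0
    for r in readings:
        like_event *= (1.0 - p_miss) if r else p_miss
        like_no_event *= p_fa if r else (1.0 - p_fa)
    num = prior * like_event
    return num / (num + (1.0 - prior) * like_no_event)

# Many cheap, noisy sensors triggering together overwhelm a small prior,
# while an isolated trigger barely moves it.
print(event_posterior([1, 1, 0, 1, 1, 1]))
print(event_posterior([0, 1, 0, 0, 0, 0]))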

Relevance: 90.00%

Abstract:

Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. Only a very limited number of moving mesh methods have been designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations, and the mesh equations are then coupled to them in a way typical of existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories; numerical results for the Sine-Gordon equation are presented.
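
For orientation only, the following sketch integrates the (1+1)-dimensional Sine-Gordon equation, u_tt = u_xx - sin u, on a fixed uniform mesh with the leapfrog (Stormer-Verlet) scheme, which is itself a simple variational integrator; it is a non-adaptive baseline rather than one of the r-adaptive schemes of the thesis, and the initial data and parameters are arbitrary.

import numpy as np

L_dom, N, dt, steps = 40.0, 400, 0.02, 2000
dx = L_dom / N
x = np.linspace(-L_dom / 2, L_dom / 2, N, endpoint=False)

u_curr = 2.0 * np.exp(-x**2)      # localized initial displacement (illustrative)
u_prev = u_curr.copy()            # zero initial velocity

def energy(u_new, u_old):
    """Discrete energy: kinetic + gradient + potential (1 - cos u) terms."""
    v = (u_new - u_old) / dt
    ux = (np.roll(u_new, -1) - u_new) / dx
    return float(np.sum(0.5 * v**2 + 0.5 * ux**2 + (1.0 - np.cos(u_new))) * dx)

e0 = None
for n in range(steps):
    lap = (np.roll(u_curr, -1) - 2.0 * u_curr + np.roll(u_curr, 1)) / dx**2
    u_next = 2.0 * u_curr - u_prev + dt**2 * (lap - np.sin(u_curr))
    if n == 0:
        e0 = energy(u_next, u_curr)
    u_prev, u_curr = u_curr, u_next

print("relative energy drift:", abs(energy(u_curr, u_prev) - e0) / e0)

The bounded long-time energy drift printed at the end is the kind of structural property that motivates variational discretizations in the first place.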

In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.
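
For orientation, the degenerate structure described here can be written schematically as follows (the notation is my own choice, not necessarily that of the thesis). For a Lagrangian linear in the velocities,

L(q, \dot q) = \alpha(q)^{\mathsf{T}} \dot q - H(q),

the Euler-Lagrange equations contain no accelerations and reduce to

\bigl( D\alpha(q)^{\mathsf{T}} - D\alpha(q) \bigr) \, \dot q = \nabla H(q),

so that when the coefficients are linear in the position, \alpha(q) = A q, the skew-symmetric structure matrix A^{\mathsf{T}} - A is constant and the first-order equations form a Poisson system of the kind referred to above.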

Relevance: 90.00%

Abstract:

This thesis presents a simplified state-variable method for solving, in both the time and frequency domains, for the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as a product of a constant system matrix and a time-dependent matrix; the latter can be evaluated explicitly for most envelopes currently used in engineering. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. Reliability analysis can then be performed based on the first two moments of the response obtained.
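
As a purely numerical sketch of the underlying state-space formulation (not the explicit closed-form solutions derived in the thesis), the code below integrates the covariance equation for a single-degree-of-freedom oscillator driven by modulated white noise; the envelope, parameters, and the forward-Euler time stepping are illustrative choices.

import numpy as np

# SDOF oscillator x'' + 2*zeta*wn*x' + wn^2*x = g(t)*w(t), with w(t) white
# noise of two-sided spectral density S0.  The response covariance P(t)
# satisfies dP/dt = A P + P A^T + 2*pi*S0*g(t)^2 * b b^T.
wn, zeta, S0 = 2.0 * np.pi, 0.05, 1.0
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
b = np.array([[0.0], [1.0]])

def envelope(t):                      # illustrative exponentially decaying envelope
    return np.exp(-0.2 * t) if t >= 0.0 else 0.0

P = np.zeros((2, 2))
dt, T = 1e-3, 20.0
for n in range(int(T / dt)):
    t = n * dt
    dP = A @ P + P @ A.T + 2.0 * np.pi * S0 * envelope(t)**2 * (b @ b.T)
    P = P + dt * dP                   # forward Euler is adequate for a sketch

print("displacement standard deviation at t = 20 s:", float(np.sqrt(P[0, 0])))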

The method facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation, such as stationary, modulated stationary, filtered stationary, and filtered modulated stationary models, as well as their stochastic equivalents, including the random pulse train model, filtered shot noise, and some ARMA models used in earthquake engineering. This approach may also be readily incorporated into finite element codes for the random vibration analysis of linear structures.

A set of explicit solutions for the response of simple linear structures subjected to modulated white noise earthquake models with four different envelopes is presented as an illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and the related structural reliability analysis, and the effect of vertical components on the seismic performance of structures. In all three cases, explicit solutions are obtained, the dynamic characteristics of the structures are investigated, and suggestions are given for the aseismic design of structures.

Relevance: 90.00%

Abstract:

A fundamental question in neuroscience is how distributed networks of neurons communicate and coordinate dynamically and specifically. Several models propose that oscillating local networks can transiently couple to each other through phase-locked firing. Coherence of local field potentials (LFPs) between synaptically connected regions is often presented as evidence for such coupling. The physiological correlates of LFP signals depend on many anatomical and physiological factors, however, and how the underlying neural processes collectively generate features at different spatiotemporal scales is poorly understood. High-frequency oscillations in the hippocampus, including gamma rhythms (30-100 Hz) that are organized by theta oscillations (5-10 Hz) during active exploration and REM sleep, as well as sharp wave-ripples (SWRs, 140-200 Hz) during immobility or slow-wave sleep, have each been associated with various aspects of learning and memory. Deciphering their physiology and functional consequences is crucial to understanding the operation of the hippocampal network.

We investigated the origins and coordination of high frequency LFPs in the hippocampo-entorhinal network using both biophysical models and analyses of large-scale recordings in behaving and sleeping rats. We found that the synchronization of pyramidal cell spikes substantially shapes, or even dominates, the electrical signature of SWRs in area CA1 of the hippocampus. The precise mechanisms coordinating this synchrony are still unresolved, but they appear to also affect CA1 activity during theta oscillations. The input to CA1, which often arrives in the form of gamma-frequency waves of activity from area CA3 and layer 3 of entorhinal cortex (EC3), did not strongly influence the timing of CA1 pyramidal cells. Rather, our data are more consistent with local network interactions governing pyramidal cells' spike timing during the integration of their inputs. Furthermore, the relative timing of input from EC3 and CA3 during the theta cycle matched that found in previous work to engage mechanisms for synapse modification and active dendritic processes. Our work demonstrates how local networks interact with upstream inputs to generate a coordinated hippocampal output during behavior and sleep, in the form of theta-gamma coupling and SWRs.
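
To make the band definitions above concrete, the sketch below band-pass filters a synthetic trace into theta, gamma, and ripple bands with standard zero-phase Butterworth filters; the sampling rate, filter order, and test signal are arbitrary, and this is not the analysis pipeline of the thesis.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1250.0                                   # assumed LFP sampling rate (Hz)
bands = {"theta": (5, 10), "gamma": (30, 100), "ripple": (140, 200)}

def bandpass(signal, low, high, fs, order=3):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

# Synthetic 2-second trace: an 8 Hz theta component, a 60 Hz gamma
# component, and broadband noise, for illustration only.
t = np.arange(0.0, 2.0, 1.0 / fs)
lfp = (np.sin(2 * np.pi * 8 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
       + 0.2 * np.random.randn(t.size))

for name, (lo, hi) in bands.items():
    filtered = bandpass(lfp, lo, hi, fs)
    print(f"{name:6s} band ({lo}-{hi} Hz): rms = {np.sqrt(np.mean(filtered**2)):.3f}")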

Relevance: 90.00%

Abstract:

The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it is still a challenge to simulate the MJO in general circulation models (GCMs), and even with simple models it is difficult to agree on the basic mechanisms. This deficiency is mainly due to our poor understanding of moist convection—deep cumulus clouds and thunderstorms, which occur at scales that are smaller than the resolution elements of the GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.

We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.

Relevance: 90.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive testing, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn inform the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
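
A minimal sketch of sequential adaptive test selection is given below, using the greedy information-gain criterion that the abstract lists as a comparison baseline; the EC2 criterion itself is not implemented here, and the hypothesis set, likelihoods, and noise model are toy assumptions.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def choose_test(posterior, likelihood):
    """Greedy information-gain selection.
    likelihood[t, h, y] = P(response y | hypothesis h, test t)."""
    n_tests, _, n_resp = likelihood.shape
    gains = []
    for t in range(n_tests):
        p_y = posterior @ likelihood[t]          # predictive response distribution
        cond = 0.0
        for y in range(n_resp):
            post_y = posterior * likelihood[t, :, y]
            if post_y.sum() > 0:
                cond += p_y[y] * entropy(post_y / post_y.sum())
        gains.append(entropy(posterior) - cond)
    return int(np.argmax(gains))

def update(posterior, likelihood, t, y):
    post = posterior * likelihood[t, :, y]
    return post / post.sum()

# Toy run: 3 hypotheses, 5 binary-response tests, noisy subject with h* = 1.
rng = np.random.default_rng(0)
p0 = rng.uniform(0.1, 0.9, size=(5, 3, 1))
likelihood = np.concatenate([p0, 1.0 - p0], axis=2)     # P(y=0), P(y=1)
posterior = np.full(3, 1.0 / 3.0)
for _ in range(10):
    t = choose_test(posterior, likelihood)
    y = int(rng.random() < likelihood[t, 1, 1])          # simulated noisy response
    posterior = update(posterior, likelihood, t, y)
print("posterior over hypotheses:", np.round(posterior, 3))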

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
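
For reference, the sketch below writes down textbook forms of the value functions behind two of these model classes (CRRA utility and the prospect-theory value function with loss aversion); the parameter values are arbitrary and the specifications estimated in the thesis may differ.

import numpy as np

def crra_utility(x, rho=0.5):
    """Constant relative risk aversion: u(x) = x^(1-rho) / (1-rho), rho != 1."""
    return x**(1.0 - rho) / (1.0 - rho)

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave over gains, convex and
    steeper (loss aversion lam > 1) over losses, relative to a reference
    point at zero."""
    x = np.asarray(x, dtype=float)
    gains = np.clip(x, 0.0, None) ** alpha
    losses = -lam * np.clip(-x, 0.0, None) ** beta
    return np.where(x >= 0.0, gains, losses)

# A mixed lottery with a possible loss: 50% chance of +10, 50% chance of -8.
outcomes, probs = np.array([10.0, -8.0]), np.array([0.5, 0.5])
print("expected value:        ", float(np.dot(outcomes, probs)))
print("prospect-theory value: ", float(np.dot(prospect_value(outcomes), probs)))
print("CRRA utility of +10:   ", crra_utility(10.0))

The expected value of this lottery is positive while its prospect-theory value is negative, which is the loss-aversion pattern that the classification above turns on.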

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
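
The discount functions behind these model classes can be written compactly; the forms below are standard textbook ones with arbitrary parameters (the quasi-hyperbolic model is written here with a single present-bias factor, which may not match the (α, β) parameterization used in the thesis).

import numpy as np

t = np.array([0.0, 1.0, 7.0, 30.0, 180.0])          # delays in days (illustrative)

def exponential(t, delta=0.99):
    return delta**t

def hyperbolic(t, k=0.05):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.99):
    """Present bias: full weight at t = 0, an extra factor beta for any delay."""
    return np.where(t == 0, 1.0, beta * delta**t)

def generalized_hyperbolic(t, a=1.0, b=0.5):
    return (1.0 + a * t)**(-b / a)

for name, f in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyperbolic),
                ("generalized hyperbolic", generalized_hyperbolic)]:
    print(f"{name:22s}", np.round(f(t), 3))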

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those predicted by the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than what its price elasticity explains; even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
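
A minimal sketch of a reference-dependent logit choice model in this spirit is given below, with made-up parameters: each product's utility combines its intrinsic value, its price, and a loss-aversion penalty whenever today's price exceeds the consumer's reference price.

import numpy as np

def choice_probs(values, prices, ref_prices, beta_price=1.0, lam=2.0):
    """Multinomial-logit choice probabilities with loss-averse utility:
    paying more than the reference price is penalized lam times more
    heavily than an equivalent saving is rewarded."""
    gains = np.clip(ref_prices - prices, 0.0, None)      # price below reference
    losses = np.clip(prices - ref_prices, 0.0, None)     # price above reference
    u = values - beta_price * prices + gains - lam * losses
    expu = np.exp(u - u.max())                            # numerically stable softmax
    return expu / expu.sum()

values = np.array([5.0, 5.0])                             # two close substitutes

# While item 0 is discounted, its reference price adapts downward.
print("during discount:", np.round(
    choice_probs(values, np.array([3.0, 4.0]), np.array([4.0, 4.0])), 3))
# When the discount ends, the regular price registers as a loss and demand
# shifts sharply toward the substitute.
print("after discount: ", np.round(
    choice_probs(values, np.array([4.0, 4.0]), np.array([3.0, 4.0])), 3))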

In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 90.00%

Abstract:

Wide field-of-view (FOV) microscopy is of high importance to biological research and clinical diagnosis, where high-throughput screening of samples is needed. This thesis presents the development of several novel wide FOV imaging technologies and demonstrates their capabilities in longitudinal imaging of living organisms, at scales ranging from viral plaques to live cells and tissues.

The ePetri Dish is a wide FOV on-chip bright-field microscope. Here we applied the ePetri platform to plaque analysis of murine norovirus 1 (MNV-1). The ePetri offers the ability to dynamically track plaques at the individual cell death event level over a wide FOV of 6 mm × 4 mm at 30 min intervals. A density-based clustering algorithm is used to analyze the spatio-temporal distribution of cell death events and identify plaques at their earliest stages. We also demonstrate the capabilities of the ePetri in viral titer counting and in dynamically monitoring plaque formation, plaque growth, and the influence of antiviral drugs.
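
The abstract names a density-based clustering algorithm without specifying it; as an illustration, the sketch below clusters synthetic (x, y, t) cell-death events with scikit-learn's DBSCAN, scaling the time axis so that spatial and temporal neighbourhoods are comparable (the algorithm choice, scaling, and parameters are all assumptions).

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Synthetic cell-death events: two compact plaques plus scattered background.
plaque_a = rng.normal(loc=[1.0, 1.0, 5.0], scale=[0.05, 0.05, 2.0], size=(60, 3))
plaque_b = rng.normal(loc=[3.5, 2.0, 8.0], scale=[0.05, 0.05, 2.0], size=(40, 3))
background = np.column_stack([rng.uniform(0, 6, 30), rng.uniform(0, 4, 30),
                              rng.uniform(0, 12, 30)])
events = np.vstack([plaque_a, plaque_b, background])   # columns: x (mm), y (mm), t (h)

# Scale time so that 1 h counts like 0.05 mm when measuring local density.
scaled = events * np.array([1.0, 1.0, 0.05])
labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(scaled)

n_plaques = len(set(labels)) - (1 if -1 in labels else 0)
print("plaques found:", n_plaques, " background events:", int(np.sum(labels == -1)))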

We developed another wide FOV imaging technique, the Talbot microscope, for the fluorescence imaging of live cells. The Talbot microscope takes advantage of the Talbot effect and can generate a focal spot array to scan the fluorescence samples directly on-chip. It has a resolution of 1.2 μm and a FOV of ~13 mm2. We further upgraded the Talbot microscope for the long-term time-lapse fluorescence imaging of live cell cultures, and analyzed the cells’ dynamic response to an anticancer drug.

We present two wide FOV endoscopes for tissue imaging, named the AnCam and the PanCam. The AnCam is based on the contact image sensor (CIS) technology, and can scan the whole anal canal within 10 seconds with a resolution of 89 μm, a maximum FOV of 100 mm × 120 mm, and a depth-of-field (DOF) of 0.65 mm. We also demonstrate the performance of the AnCam in whole anal canal imaging in both animal models and real patients. In addition to this, the PanCam is based on a smartphone platform integrated with a panoramic annular lens (PAL), and can capture a FOV of 18 mm × 120 mm in a single shot with a resolution of 100–140 μm. In this work we demonstrate the PanCam’s performance in imaging a stained tissue sample.

Relevance: 90.00%

Abstract:

Freshwater ecosystems are highly dynamic and change on time-scales that range from a few hours to several months. The development of models that simulate these processes is often hampered by the lack of sufficient data to parameterize the processes and validate the models. In this article, I review some of the challenges posed by this lack of information and suggest ways in which they can be met by using automatic monitoring systems. One example is the tempQsim project (EVK1-CT2002-00112), funded by the European Commission, in which detailed field and model analyses have been performed at eight catchment study sites in southern and south-eastern Europe. A number of perceptual models for the study sites have been established, and the results are being used to improve selected catchment models and provide a more adequate description of pollution dynamics. Results from the extensive field studies and model tests are now being used to derive recommendations for more tailored monitoring concepts in highly dynamic but ‘data scarce’ environments, such as are frequently found in Mediterranean river basins. The implications of the EU Water Framework Directive for monitoring methods are also discussed.

Relevance: 90.00%

Abstract:

An analytic technique is developed that couples with finite-difference calculations to extend their results to arbitrary distances. Finite differences and the analytic result, a boundary integral called the two-dimensional Kirchhoff integral, are applied to simple models and to three seismological problems involving data. The simple models include a thorough investigation of the seismological effects of a deep continental basin. The first problem concerns explosions at Yucca Flat, in the Nevada Test Site. By modeling both near-field strong-motion records and teleseismic P-waves simultaneously, it is shown that scattered surface waves are responsible for teleseismic complexity. The second problem deals with explosions at Amchitka Island, Alaska; the near-field seismograms are investigated using a variety of complex structures and sources. The third problem involves regional seismograms of Imperial Valley, California, earthquakes recorded at Pasadena, California. The data are shown to contain evidence of deterministic structure, but the lack of more direct measurements of the structure, together with possible three-dimensional effects, makes two-dimensional modeling of these data difficult.
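
As a minimal sketch of the finite-difference side of such a calculation only, the code below advances the 2-D acoustic wave equation with a second-order stencil; the coupling to a Kirchhoff-type boundary integral, realistic sources, and velocity structure are not shown, and the grid parameters are arbitrary.

import numpy as np

# Second-order finite differences for p_tt = c^2 (p_xx + p_zz).
nx, nz, dx, dt, c = 200, 200, 10.0, 1e-3, 3000.0   # grid, spacing (m), step (s), speed (m/s)
assert c * dt / dx < 1.0 / np.sqrt(2.0), "CFL condition for 2-D stability"

p_old = np.zeros((nz, nx))
p = np.zeros((nz, nx))
p[nz // 2, nx // 2] = 1.0                          # impulsive initial condition

for _ in range(300):
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p_new = 2.0 * p - p_old + (c * dt)**2 * lap
    p_old, p = p, p_new

print("peak amplitude after 0.3 s:", float(np.abs(p).max()))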

Relevance: 90.00%

Abstract:

Environmental changes may have an impact on the living conditions of fish, for example on their food supply. The prevailing environmental conditions apply evenly to all age groups of a stock: small fish have high growth rates, whereas large fish grow at low rates. On the basis of the von Bertalanffy growth model, however, it can be shown that it is sufficient to know the growth rate of a single age group in order to compute the growth rates of all other age groups. The growth rate of a reference fish, GRF (e.g. a fish with a body mass of 1 kg), is therefore introduced as a reference growth rate describing the current food conditions for all age groups of the stock. As an example, a time series of the reference growth rate of the northern cod stock (NAFO division 3K) was computed for the period 1979 to 1999. For this stock, environmental conditions caused growth rates below the long-term mean for seven years in a row. After this prolonged hunger period the stock collapsed in 1992, also under the impact of fisheries, and this was probably not a coincidence. The reference growth rate GRF thus provides a simple and convenient parameter that summarizes the influence of environmental conditions on growth and on other derived models, and it makes it easier to account for the influence of environmental changes within stock assessments.
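
A small sketch of the idea, assuming the specialized von Bertalanffy model with an isometric weight-length relation (W proportional to L^3); the asymptotic weight and the measured reference growth rate are made-up numbers, and the exact formulation used in the article may differ.

W_INF = 15.0   # asymptotic body mass in kg (assumed)
GRF = 0.9      # measured growth rate of the 1 kg reference fish, in kg per year (assumed)

def growth_rate(W, w_inf=W_INF, grf=GRF):
    """dW/dt under the specialized von Bertalanffy model with W ~ L^3:
    dW/dt = 3*K*W^(2/3) * (w_inf^(1/3) - W^(1/3)).
    The single measurement grf = dW/dt at W = 1 kg fixes K, so the growth
    rates of all other sizes (age groups) follow from it."""
    K = grf / (3.0 * (w_inf**(1.0 / 3.0) - 1.0))
    return 3.0 * K * W**(2.0 / 3.0) * (w_inf**(1.0 / 3.0) - W**(1.0 / 3.0))

for w in (0.2, 1.0, 3.0, 8.0, 14.0):
    print(f"W = {w:5.1f} kg  ->  dW/dt = {growth_rate(w):.3f} kg/year")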