15 results for hypotheses

in CaltechTHESIS


Relevance:

10.00%

Abstract:

Part I of the thesis describes the olfactory searching and scanning behaviors of rats in a wind tunnel, and a detailed movement analysis of terrestrial arthropod olfactory scanning behavior. Olfactory scanning behaviors in rats may be a behavioral correlate to hippocampal place cell activity.

Part II focuses on the organization of olfactory perception, what it suggests about a natural order for chemicals in the environment, and what this in turn suggests about the organization of the olfactory system. A model of odor quality space (analogous to the "color wheel") is presented. This model defines relationships between odor qualities perceived by human subjects based on a quantitative similarity measure. Compounds containing carbon, nitrogen, or sulfur elicit odors that are contiguous in this odor representation, which thus allows one to predict the broad class of odor qualities a compound is likely to elicit. Based on these findings, a natural organization for olfactory stimuli is hypothesized: the order provided by the metabolic process. This hypothesis is tested by comparing compounds that are structurally similar, perceptually similar, and metabolically similar in a psychophysical cross-adaptation paradigm. Metabolically similar compounds consistently evoked shifts in odor quality and intensity under cross-adaptation, while compounds that were merely structurally or perceptually similar did not. This suggests that the olfactory system may process metabolically similar compounds using the same neural pathways, and that metabolic similarity may be the fundamental metric around which olfactory processing is organized. In other words, the olfactory system may be organized around a biological basis.
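
As a toy illustration of how an odor map of this kind can be built from similarity judgments, the sketch below embeds a hypothetical matrix of pairwise similarity ratings in two dimensions using classical multidimensional scaling; the odorant names and numbers are invented, and this is not the thesis's actual procedure.

```python
import numpy as np

# Hypothetical pairwise similarity ratings (1 = identical, 0 = unrelated)
# for four odorants; these values are illustrative only.
odorants = ["ethyl butyrate", "hexanal", "dimethyl sulfide", "trimethylamine"]
S = np.array([
    [1.0, 0.6, 0.2, 0.1],
    [0.6, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.5],
    [0.1, 0.2, 0.5, 1.0],
])

D2 = (1.0 - S) ** 2                    # squared dissimilarities
n = len(D2)
J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
B = -0.5 * J @ D2 @ J                  # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]         # largest eigenvalues first
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))

# Each odorant gets a point in a 2-D "odor quality space".
for name, (x, y) in zip(odorants, coords):
    print(f"{name:18s}  ({x:+.2f}, {y:+.2f})")
```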

The idea of a biological basis for olfactory perception represents a shift in how olfaction is understood. The biological view has predictive power, whereas the current chemical view does not, and it provides explanations for some of the most basic questions in olfaction that remain unanswered in the chemical view. Existing data do not disprove a biological view, and are consistent with basic hypotheses that arise from this viewpoint.

Relevance:

10.00%

Abstract:

The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.

Relevance:

10.00%

Abstract:

Deference to committees in Congress has been a much-studied phenomenon for close to 100 years. This deference can be characterized as the unwillingness of a potentially winning coalition on the House floor to impose its will on a small minority, a standing committee. The congressional scholar is then faced with two problems: observing such deference to committees, and explaining it. Shepsle and Weingast have proposed the existence of an ex-post veto for standing committees as an explanation of committee deference. They claim that because conference reports in the House and Senate are considered under a rule that does not allow amendments, the conferees enjoy agenda-setting power. In this paper I describe a test of this hypothesis (along with competing hypotheses regarding the effects of the conference procedure). A random-utility model is used to estimate legislators' ideal points on appropriations bills from 1973 through 1980. I prove two things: 1) that committee deference cannot be said to be a result of the conference procedure; and moreover, 2) that committee deference does not appear to exist at all.
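
For readers unfamiliar with random-utility estimation, the sketch below recovers ideal points from simulated roll-call votes under a logistic random-utility likelihood; the data and parameterization are invented for illustration and are not the paper's actual specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated roll calls: 20 legislators x 30 bills, 1 = yea, 0 = nay.
# Yea probability is logistic in w_j * (x_i - m_j), where x_i is the
# legislator's ideal point, m_j the bill's cutpoint, w_j its discrimination.
true_x = rng.normal(size=20)
m, w = rng.normal(size=30), rng.normal(size=30)
p = 1 / (1 + np.exp(-(w * (true_x[:, None] - m))))
votes = rng.binomial(1, p)

def neg_log_lik(params):
    x, mm, ww = params[:20], params[20:50], params[50:80]
    z = ww * (x[:, None] - mm)
    # -log P(yea) = logaddexp(0, -z); -log P(nay) = logaddexp(0, z)
    return np.sum(np.logaddexp(0, -z) * votes + np.logaddexp(0, z) * (1 - votes))

fit = minimize(neg_log_lik, rng.normal(size=80), method="L-BFGS-B")
# Ideal points are identified only up to sign and scale; compare shapes,
# not raw values.
print("recovered ideal points:", np.round(fit.x[:20], 2))
```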

Relevance:

10.00%

Abstract:

Flies are particularly adept at balancing the competing demands of delay tolerance, performance, and robustness during flight, which invites thoughtful examination of their multimodal feedback architecture. This dissertation examines stabilization requirements for inner-loop feedback strategies in the flapping flight of Drosophila, the fruit fly, against the backdrop of sensorimotor transformations present in the animal. Flies have evolved multiple specializations to reduce sensorimotor latency, but sensory delay during flight is still significant on the timescale of body dynamics. I explored the effect of sensor delay on flight stability and performance for yaw turns using a dynamically scaled robot equipped with a real-time feedback system that performed active turns in response to measured yaw torque. The results show a fundamental tradeoff between sensor delay and permissible feedback gain, and suggest that fast mechanosensory feedback provides a source of active damping that complements the damping contributed by passive effects. Presented in the context of these findings, a control architecture whereby a haltere-mediated inner-loop proportional controller provides damping for slower visually mediated feedback is consistent with tethered-flight measurements, free-flight observations, and engineering design principles. Additionally, I investigated how flies adjust stroke features to regulate and stabilize level forward flight. The results suggest that few changes to hovering kinematics are actually required to meet steady-state lift and thrust requirements at different flight speeds, and the primary driver of equilibrium velocity is the aerodynamic pitch moment. This finding is consistent with prior hypotheses and observations regarding the relationship between body pitch and flight speed in fruit flies. The results also show that the dynamics may be stabilized with additional pitch damping, but the magnitude of required damping increases with flight speed. I posit that differences in stroke deviation between the upstroke and downstroke might play a critical role in this stabilization. Fast mechanosensory feedback of the pitch rate could enable active damping, which would inherently exhibit gain scheduling with flight speed if pitch torque is regulated by adjusting stroke deviation. Such a control scheme would provide an elegant solution for flight stabilization across a wide range of flight speeds.
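
The gain-delay tradeoff can be illustrated with a minimal simulation: a first-order yaw-rate plant under delayed proportional rate feedback. All constants below are arbitrary illustrative values, not measured fly or robot parameters; the point is only that the peak response stays bounded at low gain and diverges once the gain-delay product grows too large.

```python
from collections import deque

def simulate_yaw(gain, delay, t_end=2.0, dt=1e-4):
    """Peak yaw-rate response under delayed proportional feedback.

    Model: I * r'(t) = -C * r(t) - gain * r(t - delay), a first-order
    yaw-rate plant with passive damping C and delayed active damping.
    """
    I, C = 1.0, 0.2
    n_delay = int(round(delay / dt))
    buf = deque([0.0] * n_delay)       # sensor pipeline (at rest before t = 0)
    r, peak = 1.0, 1.0                 # initial yaw-rate perturbation
    for _ in range(int(t_end / dt)):
        r_delayed = buf[0] if n_delay else r
        if n_delay:
            buf.append(r)
            buf.popleft()
        r += dt * (-C * r - gain * r_delayed) / I
        peak = max(peak, abs(r))
    return peak

# With negligible passive damping, the loop goes unstable roughly when
# gain * delay exceeds pi/2 (about gain 39 for a 40 ms delay).
for k in (10.0, 30.0, 60.0):
    print(f"gain {k:5.1f}, 40 ms delay -> peak |r| = {simulate_yaw(k, 0.04):.2f}")
```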

Relevance:

10.00%

Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion to discriminate between structures resulting from broad-scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude-trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m of layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates either that Miocene extension there thinned just the basal zone, or that the basal zone developed regionally after the extensional event.
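
A back-of-the-envelope sketch of the first step: for a thin layer, the two-way delay between the top and bottom reflections is Δt = 2d cos θ / v, so the spectral peak of an opposite-polarity reflection pair, near 1/(2Δt), rises with offset. The straight-ray geometry and parameters below are illustrative, not the Mojave values.

```python
import numpy as np

v = 3000.0        # layer velocity (m/s), assumed
d = 30.0          # layer thickness (m), assumed
z = 5000.0        # reflector depth (m), assumed

for offset in (0.0, 2000.0, 4000.0, 6000.0):
    theta = np.arctan2(offset / 2, z)      # incidence angle (straight rays)
    dt = 2 * d * np.cos(theta) / v         # two-way delay across the layer
    f_peak = 1.0 / (2.0 * dt)              # peak of an opposite-polarity pair
    print(f"offset {offset:6.0f} m: delay {dt*1e3:5.2f} ms, peak ~ {f_peak:5.1f} Hz")
```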

Relevance:

10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories; these beliefs inform the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
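
A minimal sketch of the adaptive loop, with invented theories, tests, and a simple symmetric noise model; this is EC2-flavored illustration code, not the BROAD implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 candidate theories, 50 candidate binary-choice tests.
# pred[h, t] is the choice (0/1) theory h predicts on test t; eps is the
# subject's error rate.
n_theories, n_tests, eps = 4, 50, 0.1
pred = rng.integers(0, 2, size=(n_theories, n_tests))
prior = np.full(n_theories, 1 / n_theories)

def ec2_score(posterior, t):
    """Posterior mass of 'edges' (pairs of theories that disagree on test t)
    that observing test t's outcome would cut."""
    mass1 = posterior[pred[:, t] == 1].sum()
    return 2 * mass1 * (1 - mass1)

def run(true_h, n_rounds=10):
    post, asked = prior.copy(), set()
    for _ in range(n_rounds):
        # Greedily pick the unasked test cutting the most edge mass.
        t = max((t for t in range(n_tests) if t not in asked),
                key=lambda t: ec2_score(post, t))
        asked.add(t)
        # Simulate a noisy subject response, then do a Bayes update.
        y = pred[true_h, t] if rng.random() > eps else 1 - pred[true_h, t]
        post = post * np.where(pred[:, t] == y, 1 - eps, eps)
        post /= post.sum()
    return post

print(np.round(run(true_h=2), 3))   # posterior should concentrate on theory 2
```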

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and to speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and they encourage combined lab-field experiments.

Relevance:

10.00%

Abstract:

Over the past few decades, ferromagnetic spinwave resonance in magnetic thin films has been used as a tool for studying the properties of magnetic materials. A full understanding of the boundary conditions at the surface of the magnetic material is extremely important. Such an understanding has been the general objective of this thesis. The approach has been to investigate various hypotheses of the surface condition and to compare the results of these models with experimental data. The conclusion is that the boundary conditions are largely due to thin surface regions with magnetic properties different from the bulk. In the calculations these regions were usually approximated by uniform surface layers; the spins were otherwise unconstrained except by the same mechanisms that exist in the bulk (i.e., no special "pinning" at the surface atomic layer is assumed). The variation of the ferromagnetic spinwave resonance spectra in YIG films with frequency, temperature, annealing, and orientation of applied field provided an excellent experimental basis for the study.

This thesis can be divided into two parts. The first part is ferromagnetic resonance theory; the second part is the comparison of calculated with experimental data in YIG films. Both are essential in understanding the conclusion that surface regions with properties different from the bulk are responsible for the resonance phenomena associated with boundary conditions.

The theoretical calculations have been made by finding the wave vectors characteristic of the magnetic fields inside the magnetic medium, and then combining the fields associated with these wave vectors in superposition to match the specified boundary conditions. In addition to magnetic boundary conditions required for the surface layer model, two phenomenological magnetic boundary conditions are discussed in detail. The wave vectors are easily found by combining the Landau-Lifshitz equations with Maxwell's equations. Mode positions are most easily predicted from the magnetic wave vectors obtained by neglecting damping, conductivity, and the displacement current. For an insulator where the driving field is nearly uniform throughout the sample, these approximations permit a simple yet accurate calculation of the mode intensities. For metal films this calculation may be inaccurate but the mode positions are still accurately described. The techniques necessary for calculating the power absorbed by the film under a specific excitation including the effects of conductivity, displacement current and damping are also presented.
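
For reference, the undamped torque equation used in such calculations can be written in its standard form, with the exchange field folded into the effective field (the thesis's full treatment adds damping, conductivity, and displacement-current terms, as noted above):

```latex
% Undamped Landau-Lifshitz equation; gamma is the gyromagnetic ratio,
% A the exchange constant, and M_s the saturation magnetization.
% Combining this with Maxwell's equations yields the characteristic
% wave vectors referred to in the text.
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}},
\qquad
\mathbf{H}_{\mathrm{eff}} = \mathbf{H} + \frac{2A}{M_s^{2}}\,\nabla^{2}\mathbf{M}
```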

In the second part of the thesis the properties of magnetic garnet materials are summarized and the properties believed associated with the two surface regions of a YIG film are presented. Finally, the experimental data and calculated data for the surface layer model and other proposed models are compared. The conclusion of this study is that the remarkable variety of spinwave spectra that arises from various preparation techniques and subsequent treatments can be explained by surface regions with magnetic properties different from the bulk.

Relevance:

10.00%

Abstract:

High-resolution orbital and in situ observations acquired of the Martian surface during the past two decades provide the opportunity to study the rock record of Mars at an unprecedented level of detail. This dissertation consists of four studies whose common goal is to establish new standards for the quantitative analysis of visible and near-infrared data from the surface of Mars. Through the compilation of global image inventories, application of stratigraphic and sedimentologic statistical methods, and use of laboratory analogs, this dissertation provides insight into the history of past depositional and diagenetic processes on Mars. The first study presents a global inventory of stratified deposits observed in images from the High Resolution Image Science Experiment (HiRISE) camera on-board the Mars Reconnaissance Orbiter. This work uses the widespread coverage of high-resolution orbital images to make global-scale observations about the processes controlling sediment transport and deposition on Mars. The next chapter presents a study of bed thickness distributions in Martian sedimentary deposits, showing how statistical methods can be used to establish quantitative criteria for evaluating the depositional history of stratified deposits observed in orbital images. The third study tests the ability of spectral mixing models to obtain quantitative mineral abundances from near-infrared reflectance spectra of clay and sulfate mixtures in the laboratory for application to the analysis of orbital spectra of sedimentary deposits on Mars. The final study employs a statistical analysis of the size, shape, and distribution of nodules observed by the Mars Science Laboratory Curiosity rover team in the Sheepbed mudstone at Yellowknife Bay in Gale crater. This analysis is used to evaluate hypotheses for nodule formation and to gain insight into the diagenetic history of an ancient habitable environment on Mars.

Relevance:

10.00%

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract "value space" in order to produce a decision. Despite much progress, in particular the pinpointing of the ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms that value is represented in the abstract, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the "goal-directed" system, which selects actions based on an internal model of the environment, and the "habitual" system, which generates responses based only on antecedent stimuli. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment, and it can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
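
A minimal sketch of belief thresholding in an incremental Bayesian learner, with invented hypotheses, likelihoods, and threshold (not the fitted model from the chapter): hypotheses whose posterior falls below the cutoff are pruned and never updated again.

```python
import numpy as np

rng = np.random.default_rng(2)

means = np.array([-2.0, 0.0, 2.0, 4.0])     # hypothesized hidden states
belief = np.full(4, 0.25)                   # uniform prior
active = np.ones(4, dtype=bool)             # hypotheses still in the model
THRESHOLD = 0.02

true_mean = 2.0
for step in range(30):
    x = rng.normal(true_mean, 1.0)          # new observation
    lik = np.exp(-0.5 * (x - means) ** 2)   # Gaussian likelihood, sigma = 1
    belief[active] *= lik[active]           # update only surviving hypotheses
    belief[~active] = 0.0                   # pruned hypotheses stay at zero
    belief /= belief.sum()
    active &= belief >= THRESHOLD           # prune low-probability hypotheses

print("surviving hypotheses:", means[active], "beliefs:", np.round(belief, 3))
```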

Relevance:

10.00%

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution, which depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified in both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the use of weights regarding its effect on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm that determines if, for a given set of weights, the out-of-sample performance will improve or not in a practical setting. This is necessary as the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
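
The standard importance-weighting mechanics that this analysis builds on can be sketched as follows; here the training and test densities are known Gaussians for illustration, whereas in practice they would have to be estimated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Covariate shift: training inputs come from one distribution, test inputs
# from another. Weights w(x) = p_test(x) / p_train(x) make the fixed training
# sample behave like a sample from the target distribution in a weighted fit.
f = lambda x: np.sin(x)                       # target function (assumed)
x_tr = rng.normal(0.0, 1.0, 200)              # training distribution
y_tr = f(x_tr) + 0.1 * rng.normal(size=200)
p_tr, p_te = norm(0.0, 1.0), norm(1.5, 0.5)   # train and test densities
w = p_te.pdf(x_tr) / p_tr.pdf(x_tr)

# Unweighted vs weighted linear least squares (rows scaled by sqrt(w)).
X = np.vander(x_tr, 2)                        # design matrix [x, 1]
beta_u = np.linalg.lstsq(X, y_tr, rcond=None)[0]
beta_w = np.linalg.lstsq(X * np.sqrt(w)[:, None], y_tr * np.sqrt(w),
                         rcond=None)[0]

# Evaluate both fits under the test distribution.
x_te = p_te.rvs(2000, random_state=rng)
for name, b in (("unweighted", beta_u), ("weighted", beta_w)):
    err = np.mean((np.vander(x_te, 2) @ b - f(x_te)) ** 2)
    print(f"{name:10s} test MSE: {err:.3f}")
```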

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their low computational complexity is the main advantage over previous algorithms proposed in the covariate-shift literature.

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool for testing new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example according to their genetic line.

Relevance:

10.00%

Abstract:

As the worldwide prevalence of diabetes mellitus continues to increase, diabetic retinopathy remains the leading cause of visual impairment and blindness in many developed countries. Between 32 and 40 percent of about 246 million people with diabetes develop diabetic retinopathy. Approximately 4.1 million American adults 40 years and older are affected by diabetic retinopathy. This glucose-induced microvascular disease progressively damages the tiny blood vessels that nourish the retina, the light-sensitive tissue at the back of the eye, leading to retinal ischemia (i.e., inadequate blood flow), retinal hypoxia (i.e., oxygen deprivation), and retinal nerve cell degeneration or death. It is a very serious sight-threatening complication of diabetes, resulting in significant irreversible vision loss and even total blindness.

Unfortunately, although current treatments of diabetic retinopathy (i.e., laser therapy, vitrectomy surgery and anti-VEGF therapy) can reduce vision loss, they only slow down but cannot stop the degradation of the retina. Patients require repeated treatment to protect their sight. The current treatments also have significant drawbacks. Laser therapy is focused on preserving the macula, the area of the retina that is responsible for sharp, clear, central vision, by sacrificing the peripheral retina since there is only limited oxygen supply. Therefore, laser therapy results in a constricted peripheral visual field, reduced color vision, delayed dark adaptation, and weakened night vision. Vitrectomy surgery increases the risk of neovascular glaucoma, another devastating ocular disease, characterized by the proliferation of fibrovascular tissue in the anterior chamber angle. Anti-VEGF agents have potential adverse effects, and currently there is insufficient evidence to recommend their routine use.

In this work, for the first time, a paradigm shift in the treatment of diabetic retinopathy is proposed: providing localized, supplemental oxygen to the ischemic tissue via an implantable MEMS device. The retinal architecture (e.g., thickness, cell densities, layered structure, etc.) of the rabbit eye exposed to ischemic hypoxic injuries was well preserved after targeted oxygen delivery to the hypoxic tissue, showing that the use of an external source of oxygen could improve the retinal oxygenation and prevent the progression of the ischemic cascade.

The proposed MEMS device transports oxygen from an oxygen-rich space to the oxygen-deficient vitreous, the gel-like fluid that fills the inside of the eye, and then to the ischemic retina. This oxygen transport process is purely passive and completely driven by the gradient of oxygen partial pressure (pO2). Two types of devices were designed. For the first type, the oxygen-rich space is underneath the conjunctiva, a membrane covering the sclera (white part of the eye), beneath the eyelids and highly permeable to oxygen in the atmosphere when the eye is open. Therefore, sub-conjunctival pO2 is very high during the daytime. For the second type, the oxygen-rich space is inside the device since pure oxygen is needle-injected into the device on a regular basis.

To prevent the permeation of oxygen through the device, which is made of parylene and silicone (two biocompatible polymers widely used in medical devices), from being either too fast or too slow, the material properties of the hybrid parylene/silicone were investigated, including mechanical behavior, permeation rates, and adhesive forces. The thicknesses of the parylene and silicone layers then became important design parameters, which were fine-tuned to reach the optimal oxygen permeation rate.
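
As a rough illustration of why the layer thicknesses control the permeation rate, a steady-state Fick's-law sketch treats the laminate as two diffusive resistances in series. The permeability values below are assumed order-of-magnitude placeholders, not the values measured in this work.

```python
# Assumed O2 permeabilities in mol*m/(m^2*s*Pa); parylene-C is orders of
# magnitude less permeable than silicone, so it dominates the resistance.
P_PARYLENE = 1e-17   # placeholder value
P_SILICONE = 2e-13   # placeholder value

def o2_flow(dp_o2, t_parylene, t_silicone, area):
    """Steady O2 flow (mol/s) across the laminate for a pO2 difference
    dp_o2 (Pa). Layers act like resistors in series: R = thickness / P."""
    resistance = t_parylene / P_PARYLENE + t_silicone / P_SILICONE
    return area * dp_o2 / resistance

# Thinning the parylene layer (the dominant resistance) tunes the rate.
for t_pa in (1e-6, 5e-6, 20e-6):      # parylene thickness in meters
    q = o2_flow(dp_o2=10e3, t_parylene=t_pa, t_silicone=100e-6, area=1e-6)
    print(f"parylene {t_pa*1e6:4.0f} um: {q:.2e} mol O2/s")
```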

The passive MEMS oxygen transporter devices were designed, built, and tested in both bench-top artificial eye models and in-vitro porcine cadaver eyes. The 3D unsteady saccade-induced laminar flow of water inside the eye model was modeled by computational fluid dynamics to study the convective transport of oxygen inside the eye induced by saccades (rapid eye movements). The saccade-enhanced transport effect was also demonstrated experimentally. Acute in-vivo animal experiments were performed in rabbits and dogs to verify the surgical procedure and the device functionality. Various hypotheses were confirmed both experimentally and computationally, suggesting that both types of devices are very promising for curing diabetic retinopathy. The chronic implantation of devices in ischemic dog eyes is still underway.

The proposed MEMS oxygen transporter devices can also be applied to treat other ocular and systemic diseases accompanied by retinal ischemia, such as central retinal artery occlusion, carotid artery disease, and some forms of glaucoma.

Relevance:

10.00%

Abstract:

The propagation of cosmic rays through interstellar space has been investigated with the view of determining what particles can traverse astronomical distances without serious loss of energy. The principal mechanism of energy loss for high-energy particles is interaction with radiation. It is found that high-energy (10^13-10^18 eV) electrons drop to one-tenth their energy within 10^8 light years in the radiation density of the galaxy, while protons are not significantly affected over this distance. The origin of the cosmic rays is not known, so various hypotheses as to their origin are examined. If the source is near a star, it is found that the interaction of electrons and photons with the stellar radiation field, and the interaction of electrons with the stellar magnetic field, limit the amount of energy these particles can carry away from the star. However, the interaction is not strong enough to appreciably affect the energy of protons or light nuclei. The chief uncertainty in the results is due to the possible existence of a general galactic magnetic field. The main conclusion reached is that if there is a general galactic magnetic field, then the primary spectrum has very few photons and only low-energy (< 10^13 eV) electrons, and the higher-energy particles are primarily protons regardless of the source mechanism; if there is no general galactic magnetic field, then the source of cosmic rays accelerates mainly protons and the present rate of production is much less than that in the past.
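
The electron result follows from the standard inverse-Compton loss rate, reproduced here for orientation (the thesis may use a different but equivalent form):

```latex
% Inverse-Compton loss rate for a relativistic electron of Lorentz factor
% \gamma in a radiation field of energy density U_rad. Because the rate
% scales as \gamma^2, high-energy electrons cool rapidly; the analogous
% cross-section for protons is suppressed by (m_e/m_p)^2, which is why
% protons are essentially unaffected over the same distance.
-\frac{dE}{dt} = \frac{4}{3}\,\sigma_T\,c\,\gamma^{2}\,U_{\mathrm{rad}},
\qquad \gamma = \frac{E}{m_e c^{2}}
```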

Relevance:

10.00%

Abstract:

The intent of this study is to provide a formal apparatus that facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.

A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.

To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.

An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.

The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed, and certain tacit, unstatable knowledge is shown to be necessary to fully comprehend the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.

A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.

The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.

Relevance:

10.00%

Abstract:

A model for some of the many physical-chemical and biological processes in intermittent sand filtration of wastewaters is described and an expression for oxygen transfer is formulated.

The model assumes that aerobic bacterial activity within the sand or soil matrix is limited, mostly by oxygen deficiency, while the surface is ponded with wastewater. Atmospheric oxygen reenters the soil after infiltration ends. Aerobic activity is resumed, but the extent of penetration of oxygen is limited and some depths may always remain anaerobic. These assumptions lead to the conclusion that the percolate shows large variations with respect to the concentration of certain contaminants, with some portions showing little change in a specific contaminant. Analyses of soil moisture in field studies and of effluent from laboratory sand columns substantiated the model.

The oxygen content of the system at sufficiently long times after addition of wastes can be described by a quasi-steady-state diffusion equation including a term for an oxygen sink. Measurements of oxygen content during laboratory and field studies show that the oxygen profile changes only slightly up to two days after the quasi-steady state is attained.
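
One plausible form of such a quasi-steady-state balance is sketched below, consistent with the description above; the thesis's exact equation and boundary conditions may differ.

```latex
% Quasi-steady oxygen balance in the soil column: diffusion against a
% depth-dependent consumption (sink) term S(z). D is the effective
% diffusivity of oxygen in the soil air, C(z) the oxygen concentration,
% C_atm its surface value, and z_max the depth at which the flux vanishes.
D\,\frac{d^{2}C}{dz^{2}} - S(z) = 0,
\qquad C(0) = C_{\mathrm{atm}},
\qquad \left.\frac{dC}{dz}\right|_{z = z_{\max}} = 0
```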

These hypotheses and their experimental verification can be applied in the operation of existing facilities and in the interpretation of data from pilot-plant studies.

Relevance:

10.00%

Abstract:

This investigation deals with certain generalizations of the classical uniqueness theorem for the second boundary-initial value problem in the linearized dynamical theory of elastic solids that are not necessarily homogeneous or isotropic. First, the regularity assumptions underlying the foregoing theorem are relaxed by admitting stress fields with suitably restricted finite jump discontinuities. Such singularities are familiar from known solutions to dynamical elasticity problems involving discontinuous surface tractions or non-matching boundary and initial conditions. The proof of the appropriate uniqueness theorem given here rests on a generalization of the usual energy identity to the class of singular elastodynamic fields under consideration.
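
For orientation, the conventional energy identity being generalized can be stated in its classical form for a regular elastodynamic field on a body B:

```latex
% Classical elastodynamic energy identity: the rate of change of kinetic
% plus strain energy equals the power of the surface tractions t_i and
% body forces f_i, for a body B with boundary \partial B, density \rho,
% displacement u_i, and elasticity tensor C_{ijkl}.
\frac{d}{dt}\int_{B}\Bigl(\tfrac{1}{2}\,\rho\,\dot{u}_i\dot{u}_i
  + \tfrac{1}{2}\,C_{ijkl}\,u_{i,j}\,u_{k,l}\Bigr)\,dV
  = \int_{\partial B} t_i\,\dot{u}_i\,dA + \int_{B} f_i\,\dot{u}_i\,dV
```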

Following this extension of the conventional uniqueness theorem, we turn to a further relaxation of the customary smoothness hypotheses and allow the displacement field to be differentiable merely in a generalized sense, thereby admitting stress fields with square-integrable unbounded local singularities, such as those encountered in the presence of focusing of elastic waves. A statement of the traction problem applicable in these pathological circumstances necessitates the introduction of "weak solutions" to the field equations that are accompanied by correspondingly weakened boundary and initial conditions. A uniqueness theorem pertaining to this weak formulation is then proved through an adaptation of an argument used by O. Ladyzhenskaya in connection with the first boundary-initial value problem for a second-order hyperbolic equation in a single dependent variable. Moreover, the second uniqueness theorem thus obtained contains, as a special case, a slight modification of the previously established uniqueness theorem covering solutions that exhibit only finite stress-discontinuities.