14 results for NONPARAMETRIC-INFERENCE

in CaltechTHESIS


Relevance: 20.00%

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance- and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed that fuses not only the above visual cues but also kinesthetic cues, such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.

A hybrid estimator is developed to estimate both a continuous state (the robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple-model estimator is used to compute and maintain the contact mode probabilities. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The performance of these two frameworks is compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects the available action for interacting with an object that will gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results demonstrate next best touch for object localization, specifically of a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated with the same action selection technique, which chooses the touching action that best both localizes the object and estimates its parameters. Simulation results are then presented for localizing a screwdriver and determining one of its parameters.
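For concreteness, the action-selection idea can be illustrated with a minimal sketch of entropy-based information gain over a discrete belief; the function names and the discrete pose representation below are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch of entropy-based touch selection (not the thesis implementation):
# maintain a discrete belief over candidate object poses, and for each candidate touch
# action compute the expected reduction in belief entropy, then pick the best action.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(belief, likelihood):
    """belief: (n_states,) prior over object poses.
    likelihood: (n_outcomes, n_states) array of P(measurement | state) for one action."""
    belief = np.asarray(belief, dtype=float)
    prior_h = entropy(belief)
    p_outcome = likelihood @ belief                  # marginal probability of each outcome
    gain = 0.0
    for z, p_z in enumerate(p_outcome):
        if p_z == 0:
            continue
        posterior = likelihood[z] * belief / p_z     # Bayes update for outcome z
        gain += p_z * (prior_h - entropy(posterior))
    return gain

def next_best_touch(belief, action_likelihoods):
    """action_likelihoods: dict mapping action name -> (n_outcomes, n_states) likelihood."""
    return max(action_likelihoods,
               key=lambda a: expected_information_gain(belief, action_likelihoods[a]))
```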

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.

Relevance: 20.00%

Abstract:

In the measurement of the Higgs boson decaying into two photons, the parametrization of an appropriate background model is essential for fitting the Higgs signal mass peak over a continuous background. This diphoton background modeling is crucial in the statistical process of calculating exclusion limits and the significance of observations relative to a background-only hypothesis. It is therefore ideal to have knowledge of the physical shape of the background mass distribution, as the use of an improper function can bias the observed limits. Using an Information-Theoretic (I-T) approach to valid inference, we apply the Akaike Information Criterion (AIC) as a measure of the separation of a fitting model from the data. We then implement a multi-model inference ranking method to build a fit model that most closely represents the Standard Model background in 2013 diphoton data recorded by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). Potential applications and extensions of this model-selection technique are discussed with reference to CMS detector performance measurements as well as potential physics analyses at future detectors.
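As a rough illustration of AIC-based multi-model ranking (the candidate shapes, the Gaussian-error form of the AIC, and the data interface are placeholders, not the CMS analysis code):

```python
# Rough sketch of AIC-based multi-model ranking of background shapes. The candidate
# functions, Gaussian-error AIC, and data are placeholders, not the CMS analysis code.
import numpy as np
from scipy.optimize import curve_fit

def exponential(x, a, b):
    return a * np.exp(-b * x)

def power_law(x, a, b):
    return a * np.power(x, -b)

def aic(y, y_fit, k):
    # Gaussian-error AIC: 2k + n ln(RSS/n); adequate for ranking models on the same data.
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return 2 * k + n * np.log(rss / n)

def rank_models(x, y, models):
    scores = {}
    for name, f in models.items():
        popt, _ = curve_fit(f, x, y, p0=np.ones(2), maxfev=10000)
        scores[name] = aic(y, f(x, *popt), k=len(popt))
    best = min(scores.values())
    weights = {m: np.exp(-0.5 * (s - best)) for m, s in scores.items()}  # Akaike weights
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}   # relative support for each model
```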

Relevance: 20.00%

Abstract:

Organismal development, homeostasis, and pathology are rooted in inherently probabilistic events. From gene expression to cellular differentiation, rates and likelihoods shape the form and function of biology. Processes ranging from growth to cancer homeostasis to reprogramming of stem cells all require transitions between distinct phenotypic states, and these occur at defined rates. Therefore, measuring the fidelity and dynamics with which such transitions occur is central to understanding natural biological phenomena and is critical for therapeutic interventions.

While these processes may produce robust population-level behaviors, decisions are made by individual cells. In certain circumstances, these minuscule computing units effectively roll dice to determine their fate. And while the 'omics' era has provided vast amounts of data on what these populations are doing en masse, the behaviors of the underlying units of these processes get washed out in averages.

Therefore, in order to understand the behavior of a sample of cells, it is critical to reveal how its underlying components, or mixture of cells in distinct states, each contribute to the overall phenotype. As such, we must first define what states exist in the population, determine what controls the stability of these states, and measure in high dimensionality the dynamics with which these cells transition between states.

To address a specific example of this general problem, we investigate the heterogeneity and dynamics of mouse embryonic stem cells (mESCs). While a number of reports have identified particular genes in ES cells that switch between 'high' and 'low' metastable expression states in culture, it remains unclear how levels of many of these regulators combine to form states in transcriptional space. Using single-molecule mRNA fluorescence in situ hybridization (smFISH), we quantitatively measure and fit distributions of core pluripotency regulators in single cells, identifying a wide range of variabilities between genes, each of which is nonetheless explained by a simple model of bursty transcription. From these data, we also observe that strongly bimodal genes appear to be co-expressed, effectively limiting the occupancy of transcriptional space to two primary states across the genes studied here. However, these states also appear punctuated by the conditional expression of the most highly variable genes, potentially defining smaller substates of pluripotency.
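A minimal sketch of the kind of bursty-transcription fit described here, assuming the standard gamma-Poisson (negative binomial) steady-state distribution; the names and fitting routine are illustrative, not the thesis pipeline.

```python
# Minimal sketch of a bursty-transcription fit to smFISH counts, assuming the standard
# gamma-Poisson (negative binomial) steady-state distribution. Names and the fitting
# routine are illustrative, not the thesis pipeline.
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

def fit_bursty_model(counts):
    """Return (burst frequency r, mean burst size b) inferred from per-cell mRNA counts."""
    counts = np.asarray(counts)

    def neg_log_lik(params):
        r, b = np.exp(params)            # keep both parameters positive
        p = 1.0 / (1.0 + b)              # negative binomial success probability
        return -np.sum(nbinom.logpmf(counts, r, p))

    res = minimize(neg_log_lik, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
    return tuple(np.exp(res.x))

# Example with synthetic counts drawn from r = 2 bursts per mRNA lifetime, burst size b = 8.
fake_counts = nbinom.rvs(2.0, 1.0 / (1.0 + 8.0), size=500)
print(fit_bursty_model(fake_counts))
```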

Having defined the transcriptional states, we next asked what might control their stability or persistence. Surprisingly, we found that DNA methylation, a mark normally associated with irreversible developmental progression, was itself differentially regulated between these two primary states. Furthermore, both acute and chronic inhibition of DNA methyltransferase activity led to reduced heterogeneity among the population, suggesting that metastability can be modulated by this strong epigenetic mark.

Finally, because understanding the dynamics of state transitions is fundamental to a variety of biological problems, we sought to develop a high-throughput method for the identification of cellular trajectories without the need for cell-line engineering. We achieved this by combining cell-lineage information gathered from time-lapse microscopy with endpoint smFISH for measurements of final expression states. Applying a simple mathematical framework to these lineage-tree associated expression states enables the inference of dynamic transitions. We apply our novel approach in order to infer temporal sequences of events, quantitative switching rates, and network topology among a set of ESC states.

Taken together, we identify distinct expression states in ES cells, gain fundamental insight into how a strong epigenetic modifier enforces the stability of these states, and develop and apply a new method for the identification of cellular trajectories using scalable in situ readouts of cellular state.

Relevance: 10.00%

Abstract:

This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from the measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, it is intended to use this model to assess the condition of the system and to predict the response to future excitations.

A new identification methodology based upon a generalization of the method of modal identification for multi-degree-of-freedom dynamical systems subjected to base motion is developed. The situation considered herein is that in which only the base input and the response of a small number of degrees of freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes" which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.

In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
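A hedged sketch of this style of nonparametric identification is shown below: the restoring force implied by the equation of motion is regressed on a small basis of displacement and velocity terms. The four basis functions chosen here are an assumed example, not necessarily the four terms used in the thesis.

```python
# Hedged sketch of nonhysteretic, nonparametric identification: the restoring force implied
# by the equation of motion is regressed on a small basis in displacement and velocity.
# The four basis terms below are an assumed example, not necessarily the thesis's four terms.
import numpy as np

def fit_restoring_force(x, v, a, mass, basis=None):
    """Fit r(x, v) in m*a + r(x, v) = 0 from measured displacement x, velocity v, acceleration a."""
    if basis is None:
        basis = [lambda x, v: x,              # linear stiffness
                 lambda x, v: x ** 3,         # cubic stiffness
                 lambda x, v: v,              # linear damping
                 lambda x, v: v * np.abs(v)]  # quadratic damping
    A = np.column_stack([f(x, v) for f in basis])
    rhs = -mass * a                           # restoring force implied by the equation of motion
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs                             # nonhysteretic estimate of stiffness/damping terms
```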

The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.

The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.

Relevance: 10.00%

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new and extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed; it may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.

The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
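To make the mixture-model view of spike sorting concrete, the sketch below clusters waveform features with a plain Gaussian mixture fit by EM; this is a simplified stand-in, not the REM algorithm or the sparse hidden Markov model mixture developed in the thesis.

```python
# Simplified stand-in for the mixture-model view of spike sorting: detected waveform
# snippets are reduced to a few features and clustered with a Gaussian mixture fit by EM.
# This is not the REM algorithm or the sparse hidden Markov model mixture from the thesis.
import numpy as np
from sklearn.mixture import GaussianMixture

def sort_spikes(waveforms, n_neurons):
    """waveforms: (n_spikes, n_samples) array of detected extracellular spike snippets."""
    centered = waveforms - waveforms.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    features = centered @ vt[:3].T            # project onto the top three principal components
    gmm = GaussianMixture(n_components=n_neurons, covariance_type="full", n_init=5)
    labels = gmm.fit_predict(features)        # cluster index = putative source neuron
    return labels, gmm
```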

Relevance: 10.00%

Abstract:

This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.

In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.

In chapter three, I examine Community-Driven Development (CDD), which is considered a tool for empowering communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player, the targeted community member, decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches the targeted recipients. My results describe a perverse possibility in the decentralized aid process that could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies claiming otherwise.

In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
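A generic sketch of a simulated-moments objective of the kind described is given below; the simulator interface, moment choices, and optimizer are placeholder assumptions, not the estimator implemented in the thesis.

```python
# Generic sketch of a Method of Simulated Moments objective of the kind described above.
# The simulator interface, moment choices, and optimizer are placeholder assumptions,
# not the estimator implemented in the thesis.
import numpy as np
from scipy.optimize import minimize

def msm_estimate(data_moments, simulate_moments, weight_matrix, theta0, n_sims=50):
    """simulate_moments(theta, seed) should return the vector of moments from one simulation."""
    def objective(theta):
        sims = np.mean([simulate_moments(theta, seed) for seed in range(n_sims)], axis=0)
        g = sims - data_moments               # gap between simulated and observed moments
        return g @ weight_matrix @ g
    return minimize(objective, theta0, method="Nelder-Mead")
```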

Relevance: 10.00%

Abstract:

We aim to characterize fault slip behavior during all stages of the seismic cycle in subduction megathrust environments, with the eventual goal of understanding temporal and spatial variations of fault zone rheology, inferring possible causal relationships between inter-, co-, and post-seismic slip, and drawing implications for earthquake and tsunami hazard. In particular, we focus on analyzing aseismic deformation occurring during the inter-seismic and post-seismic periods of the seismic cycle. We approach the problem using both Bayesian and optimization techniques. The Bayesian approach allows us to completely characterize the model parameter space by searching for a posteriori estimates of the range of allowable models, to easily implement any kind of physically plausible a priori information, and to perform the inversion without regularization other than that imposed by the parameterization of the model. However, the Bayesian approach is computationally expensive and not currently viable for quick-response scenarios. Therefore, we also pursue improvements in the optimization inference scheme. We present a novel, robust, and yet simple regularization technique that allows us to infer robust and somewhat more detailed models of slip on faults. We apply these methodologies, using simple quasi-static elastic models, to studies of inter-seismic deformation in the Central Andes subduction zone and of post-seismic deformation induced by the 2011 Mw 9.0 Tohoku-Oki earthquake in Japan. For the Central Andes, we present estimates of the apparent coupling probability of the subduction interface and analyze its relationship to past earthquakes in the region. For Japan, we infer high spatial variability in the material properties of the megathrust offshore Tohoku. We discuss the potential for a large earthquake just south of the Tohoku-Oki earthquake, where our inferences suggest dominantly aseismic behavior.
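For illustration, the optimization route can be sketched as a damped linear least-squares slip inversion; the simple Laplacian smoothing shown here is a generic choice and not the novel regularization technique referred to above.

```python
# Illustrative damped least-squares slip inversion for the optimization route: a linear
# forward model d = G m plus a smoothing penalty. The Laplacian regularizer here is a
# generic choice, not the novel regularization technique referred to above.
import numpy as np

def invert_slip(G, d, laplacian, damping):
    """G: (n_data, n_patches) elastic Green's functions; d: observed surface displacements;
    laplacian: (n_patches, n_patches) smoothing operator; damping: trade-off parameter."""
    A = np.vstack([G, damping * laplacian])
    b = np.concatenate([d, np.zeros(laplacian.shape[0])])
    slip, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slip
```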

Relevance: 10.00%

Abstract:

For damaging response, the force-displacement relationship of a structure is highly nonlinear and history-dependent. For satisfactory analysis of such behavior, it is important to be able to characterize and to model the phenomenon of hysteresis accurately. A number of models have been proposed for response studies of hysteretic structures, some of which are examined in detail in this thesis. There are two popular classes of models used in the analysis of curvilinear hysteretic systems. The first is of the distributed element or assemblage type, which models the physical behavior of the system by using well-known building blocks. The second class of models is of the differential equation type, which is based on the introduction of an extra variable to describe the history dependence of the system.

Owing to their mathematical simplicity, the latter models have been used extensively for various applications in structural dynamics, most notably in the estimation of the response statistics of hysteretic systems subjected to stochastic excitation. But the fundamental characteristics of these models are still not clearly understood. A response analysis of systems using both the Distributed Element model and the differential equation model when subjected to a variety of quasi-static and dynamic loading conditions leads to the following conclusion: Caution must be exercised when employing the models belonging to the second class in structural response studies as they can produce misleading results.
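For concreteness, one widely used member of the differential-equation class is the Bouc-Wen model, sketched below; this particular form illustrates the class and is not necessarily the variant examined in the thesis.

```python
# Bouc-Wen hysteresis, a widely used member of the differential-equation class: the
# auxiliary variable z carries the load history. Parameter values here are illustrative.
import numpy as np

def bouc_wen_force(t, x, dx, alpha=0.5, k=1.0, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate dz/dt = A*dx - beta*|dx|*|z|^(n-1)*z - gamma*dx*|z|^n along a displacement
    history x(t) with velocity dx(t); return the restoring force alpha*k*x + (1-alpha)*k*z."""
    z = np.zeros_like(x)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dz = (A * dx[i - 1]
              - beta * abs(dx[i - 1]) * abs(z[i - 1]) ** (n - 1) * z[i - 1]
              - gamma * dx[i - 1] * abs(z[i - 1]) ** n)
        z[i] = z[i - 1] + dz * dt
    return alpha * k * x + (1 - alpha) * k * z
```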

Masing's hypothesis, originally proposed for steady-state loading, can be extended to general transient loading as well, leading to considerable simplification in the analysis of the Distributed Element models. A simple, nonparametric identification technique is also outlined, by means of which an optimal model representation involving one additional state variable is determined for hysteretic systems.

Relevance: 10.00%

Abstract:

For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.

To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.

I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.
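The starting point for bounds-based ecological inference can be sketched simply: aggregate county data alone already place deterministic (Duncan-Davis) bounds on the group-specific rates, which the generalized-bounds method then narrows statistically. The snippet below computes only the deterministic bounds and uses illustrative variable names.

```python
# Sketch of the deterministic (Duncan-Davis) bounds that bounds-based ecological inference
# starts from: county aggregates alone restrict the group-specific rate. Variable names
# are illustrative; the generalized-bounds method then narrows these bounds statistically.
import numpy as np

def turnout_bounds(latino_share, overall_turnout):
    """latino_share: Latino fraction of each county's voting-age population.
    overall_turnout: fraction of each county's voting-age population that turned out.
    Returns (lower, upper) bounds on the Latino turnout rate in each county."""
    x = np.asarray(latino_share, dtype=float)
    t = np.asarray(overall_turnout, dtype=float)
    lower = np.clip((t - (1.0 - x)) / x, 0.0, 1.0)
    upper = np.clip(t / x, 0.0, 1.0)
    return lower, upper

# Example: in a county that is 60% Latino with 45% overall turnout, Latino turnout must
# lie between roughly 8% and 75%.
print(turnout_bounds([0.60], [0.45]))
```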

I find that California Latinos register and turn out at substantially lower rates than do whites and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.

With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.

Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.

Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.

While these findings suggest that there may not be any short-term or quick fixes to Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.

Relevance: 10.00%

Abstract:

The Supreme Court’s decision in Shelby County has severely limited the power of the Voting Rights Act. I argue that Congressional attempts to pass a new coverage formula are unlikely to gain the necessary Republican support. Instead, I propose a new strategy that takes a “carrot and stick” approach. As the stick, I suggest amending Section 3 to eliminate the need to prove that discrimination was intentional. For the carrot, I envision a competitive grant program similar to the highly successful Race to the Top education grants. I argue that this plan could pass the currently divided Congress.

Without Congressional action, Section 2 is more important than ever before. A successful Section 2 suit requires evidence that voting in the jurisdiction is racially polarized. Accurately and objectively assessing the level of polarization has been and continues to be a challenge for experts. Existing ecological inference methods require estimating polarization levels in individual elections. This is a problem because the Courts want to see a history of polarization across elections.

I propose a new 2-step method to estimate racially polarized voting in a multi-election context. The procedure builds upon the Rosen, Jiang, King, and Tanner (2001) multinomial-Dirichlet model. After obtaining election-specific estimates, I suggest regressing those results on election-specific variables, namely candidate quality, incumbency, and ethnicity of the minority candidate of choice. This allows researchers to estimate the baseline level of support for candidates of choice and test whether the ethnicity of the candidates affected how voters cast their ballots.
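A minimal sketch of the proposed second stage, assuming ordinary least squares and illustrative covariate names (a bounded or hierarchical specification could be used instead):

```python
# Minimal sketch of the proposed second stage, assuming ordinary least squares and
# illustrative covariate names; a bounded or hierarchical specification could be used instead.
import numpy as np

def second_stage(support_estimates, candidate_quality, incumbency, coethnic_candidate):
    """support_estimates: election-specific EI estimates of minority support for the candidate
    of choice; the remaining arguments are election-level covariates (coethnic_candidate = 1
    when the candidate of choice shares the minority group's ethnicity)."""
    y = np.asarray(support_estimates, dtype=float)
    X = np.column_stack([np.ones_like(y), candidate_quality, incumbency, coethnic_candidate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # beta[0] ~ baseline support; beta[3] tests whether candidate ethnicity shifts vote choice.
    return beta
```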

Relevance: 10.00%

Abstract:

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that will select an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular regarding the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms that value is represented abstractly, a key tenet of value-based decision-making that had remained an open question. However, I also show that stimulus-dependent value representations are present in the brain during decision-making and suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection. These are the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference consistent with a serial hypothesis testing strategy.
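An illustrative sketch of belief thresholding layered on incremental Bayesian updating is given below; the threshold value and the discrete hypothesis representation are assumptions for illustration, not the fitted model from the thesis.

```python
# Illustrative sketch of belief thresholding layered on incremental Bayesian updating:
# hypotheses whose posterior probability falls below a threshold are pruned and no longer
# updated. The threshold value and discrete representation are assumptions for illustration.
import numpy as np

def threshold_update(belief, likelihood, active, threshold=0.05):
    """belief: (n_hypotheses,) current probabilities; likelihood: P(observation | hypothesis);
    active: boolean mask of hypotheses the agent still entertains."""
    belief = belief.copy()
    belief[active] *= likelihood[active]          # Bayes step over active hypotheses only
    belief[~active] = 0.0                         # pruned hypotheses stay eliminated
    belief /= belief.sum()
    active = active & (belief >= threshold)       # prune newly implausible hypotheses
    return belief, active
```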

Relevance: 10.00%

Abstract:

This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.

In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.

The second chapter characterizes a decision maker with sticky beliefs. That is, a decision maker who does not update enough in response to information, where enough means as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
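The sticky-beliefs update has a simple form: the reported belief is a convex combination of the prior and the Bayesian posterior, with the weight on the prior measuring stickiness. A minimal sketch, with illustrative names:

```python
# Minimal sketch of the sticky-beliefs update: the reported belief is a convex combination
# of the prior and the full Bayesian posterior, with the weight on the prior measuring
# stickiness (0 recovers Bayesian updating, 1 means beliefs never move). Names are illustrative.
import numpy as np

def sticky_update(prior, likelihood, stickiness):
    prior = np.asarray(prior, dtype=float)
    posterior = prior * np.asarray(likelihood, dtype=float)
    posterior /= posterior.sum()                  # Bayesian posterior
    return stickiness * prior + (1.0 - stickiness) * posterior
```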

The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.

Relevance: 10.00%

Abstract:

We study the behavior of granular materials at three length scales. At the smallest length scale, the grain-scale, we study inter-particle forces and "force chains". Inter-particle forces are the natural building blocks of constitutive laws for granular materials. Force chains are a key signature of the heterogeneity of granular systems. Despite their fundamental importance for calibrating grain-scale numerical models and elucidating constitutive laws, inter-particle forces have not been fully quantified in natural granular materials. We present a numerical force inference technique for determining inter-particle forces from experimental data and apply the technique to two-dimensional and three-dimensional systems under quasi-static and dynamic load. These experiments validate the technique and provide insight into the quasi-static and dynamic behavior of granular materials.

At a larger length scale, the mesoscale, we study the emergent frictional behavior of a collection of grains. Properties of granular materials at this intermediate scale are crucial inputs for macro-scale continuum models. We derive friction laws for granular materials at the mesoscale by applying averaging techniques to grain-scale quantities. These laws portray the nature of steady-state frictional strength as a competition between steady-state dilation and grain-scale dissipation rates. The laws also directly link the rate of dilation to the non-steady-state frictional strength.

At the macro-scale, we investigate continuum modeling techniques capable of simulating the distinct solid-like, liquid-like, and gas-like behaviors exhibited by granular materials in a single computational domain. We propose a Smoothed Particle Hydrodynamics (SPH) approach for granular materials with a viscoplastic constitutive law. The constitutive law uses a rate-dependent and dilation-dependent friction law. We provide a theoretical basis for a dilation-dependent friction law using similar analysis to that performed at the mesoscale. We provide several qualitative and quantitative validations of the technique and discuss ongoing work aiming to couple the granular flow with gas and fluid flows.
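As an illustration of the kind of rate- and dilation-dependent friction law such a constitutive model can use, the sketch below combines a mu(I)-family rate dependence with a linear dilatancy correction; both the functional form and the parameter values are assumptions, not the law derived in the thesis.

```python
# Illustrative rate- and dilation-dependent friction coefficient of the mu(I) family for a
# viscoplastic granular constitutive law. Both the functional form and the parameter values
# are assumptions for illustration, not the friction law derived in the thesis.
import numpy as np

def friction_coefficient(inertial_number, dilatancy_rate,
                         mu_s=0.38, mu_2=0.64, I0=0.28, k_dil=0.5):
    """inertial_number: I = shear_rate * grain_diameter / sqrt(pressure / grain_density).
    dilatancy_rate: volumetric strain rate normalized by shear rate (positive when dilating)."""
    I = np.maximum(inertial_number, 1e-12)
    mu_rate = mu_s + (mu_2 - mu_s) / (I0 / I + 1.0)   # rate-dependent (mu(I)) part
    return mu_rate + k_dil * dilatancy_rate            # simple linear dilatancy correction
```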

Relevance: 10.00%

Abstract:

We have sought to determine the nature of the free-radical precursors to ring-opened hydrocarbon 5 and ring-closed hydrocarbon 6. Reasonable alternative formulations involve the postulation of hydrogen abstraction (a) by a pair of rapidly equilibrating classical radicals (the ring-opened allylcarbinyl-type radical 3 and the ring-closed cyclopropylcarbinyl-type 4), or (b) by a nonclassical radical such as homoallylic radical 7.

[Figure not reproduced.]

Entry to the radical system is gained via degassed thermal decomposition of peresters having the ring-opened and the ring-closed structures. The ratio of 6:5 is essentially independent of the hydrogen donor concentration for decomposition of the former at 125° in the presence of triethyltin hydride. A deuterium labeling study showed that the α and β methylene groups in 3 (or the equivalent) are rapidly interchanged under these conditions.

Existence of two (or more) product-forming intermediates is indicated (a) by dependence of the ratio 6:5 on the tin hydride concentration for decomposition of the ring-closed perester at 10 and 35°, and (b) by formation of cage products having largely or wholly the structure (ring-opened or ring-closed) of the starting perester.

Relative rates of hydrogen abstraction by 3 could be inferred by comparison of ratios of rate constants for hydrogen abstraction and ortho-ring cyclization:

[Figure not reproduced.]

At 100° values of ka/kr are 0.14 for hydrogen abstraction from 1,4-cyclohexadiene and 7 for abstraction from triethyltin hydride. The ratio 6:5 at the same temperature is ~0.0035 for hydrogen abstraction from 1,4-cyclohexadiene, ~0.078 for abstraction from the tin hydride, and ≥ 5 for abstraction from cyclohexadienyl radicals. These data indicate that abstraction of hydrogen from triethyltin hydride is more rapid than from 1,4-cyclohexadiene by a factor of ~1000 for 4, but only ~50 for 3.
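The two selectivity factors quoted here can be reconstructed from the ratios above, assuming the product ratio 6:5 reflects rapidly equilibrated radicals 3 and 4; as an illustrative check of the arithmetic:

```latex
% Illustrative reconstruction of the quoted selectivity factors from the ratios above,
% assuming the product ratio 6:5 reflects rapidly equilibrated radicals 3 and 4.
\[
\text{for } 3:\quad
\frac{k_{\mathrm{SnH}}}{k_{\mathrm{CHD}}}
  = \frac{(k_a/k_r)_{\mathrm{SnH}}}{(k_a/k_r)_{\mathrm{CHD}}}
  = \frac{7}{0.14} \approx 50,
\qquad
\text{for } 4:\quad
\frac{k_{\mathrm{SnH}}}{k_{\mathrm{CHD}}}
  \approx 50 \times \frac{(6\!:\!5)_{\mathrm{SnH}}}{(6\!:\!5)_{\mathrm{CHD}}}
  = 50 \times \frac{0.078}{0.0035} \approx 1100 \sim 10^{3}.
\]
```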

Measurements of product ratios at several temperatures allowed the construction of an approximate energy-level scheme. A major inference is that isomerization of 3 to 4 is exothermic by 8 ± 3 kcal/mole, in good agreement with expectations based on bond dissociation energies. Absolute rate-constant estimates are also given.

The results are nicely compatible with a classical-radical mechanism, but attempted interpretation in terms of a nonclassical radical precursor of product ratios formed even from equilibrated radical intermediates leads, it is argued, to serious difficulties.

The roles played by hydrogen abstraction from 1,4-cyclohexadiene and from the derived cyclohexadienyl radicals were probed by fitting observed ratios of 6:5 and 5:10 in the least-squares sense to expressions derived for a complex mechanistic scheme. Some 30 to 40 measurements of each product ratio, obtained under a variety of experimental conditions, could be fit with an average deviation of ~6%. Significant systematic deviations were found, but these could largely be redressed by assuming (a) that the rate constant for reaction of 4 with cyclohexadienyl radical is inversely proportional to the viscosity of the medium (i.e., is diffusion-controlled), and (b) that ka/kr for hydrogen abstraction from 1,4-cyclohexadiene depends slightly on the composition of the medium. An average deviation of 4.4% was thereby attained.

Degassed thermal decomposition of the ring-opened perester in the presence of triethyltin hydride occurs primarily by attack of triethyltin radicals on the perester, presumably at the O-O bond, even at 0.01 M tin hydride at 100 and 125°. Tin ester and tin ether are apparently formed in closely similar amounts under these conditions, but the tin ester predominates at room temperature in the companion air-induced decomposition, indicating that attack on the perester to give the tin ether requires an activation energy approximately 5 kcal/mole in excess of that for formation of the tin ester.