967 results for Flail space model
Abstract:
We investigate key characteristics of Ca²⁺ puffs in deterministic and stochastic frameworks that all incorporate the cellular morphology of IP₃ receptor channel clusters. In a first step, we numerically study Ca²⁺ release in a three-dimensional representation of a cluster environment with reaction-diffusion dynamics in both the cytosol and the lumen. These simulations reveal that Ca²⁺ concentrations at a releasing cluster range from 80 µM to 170 µM and equilibrate almost instantaneously on the time scale of the release duration. These highly elevated Ca²⁺ concentrations eliminate Ca²⁺ oscillations in a deterministic model of an IP₃R channel cluster at physiological parameter values, as revealed by a linear stability analysis. The reason lies in the saturation of all feedback processes in the IP₃R gating dynamics, so that only fluctuations can restore the experimentally observed Ca²⁺ oscillations. In this spirit, we derive master equations that allow us to analytically quantify the onset of Ca²⁺ puffs and hence the stochastic time scale of intracellular Ca²⁺ dynamics. Moving up the spatial scale, we suggest formulating cellular dynamics in terms of waiting-time distribution functions. This approach avoids the state-space explosion that is typical of descriptions of cellular dynamics based on channel states, yet still retains information on molecular fluctuations. We illustrate this method by studying global Ca²⁺ oscillations.
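As a concrete illustration of how a master equation yields the puff-onset time scale, here is a minimal sketch: a birth-death (Gillespie) simulation of channel openings in a single cluster, where a puff is said to begin once a trigger number of channels are open. All rates and counts are hypothetical placeholders, not the fitted values of the model above.

```python
import numpy as np

# Minimal sketch (hypothetical rates): stochastic opening/closing of IP3R
# channels in one cluster; a puff starts when n_trigger channels are open.
def puff_onset_time(N=10, n_trigger=3, k_open=0.5, k_close=2.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    t, n_open = 0.0, 0
    while n_open < n_trigger:
        rate_open = k_open * (N - n_open)    # closed -> open propensity
        rate_close = k_close * n_open        # open -> closed propensity
        total = rate_open + rate_close
        t += rng.exponential(1.0 / total)    # Gillespie waiting time
        n_open += 1 if rng.random() < rate_open / total else -1
    return t

# Sampling many runs gives the waiting-time distribution of puff onset.
samples = [puff_onset_time() for _ in range(10_000)]
print(f"mean onset time ~ {np.mean(samples):.3f} (arbitrary units)")
```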
Abstract:
This study highlights the importance of cognition-affect interaction pathways in the construction of mathematical knowledge. The literature calls for further research on the conceptual structure underlying such interaction, aimed at coping with the high complexity of its interpretation. The paper discusses the effectiveness of using a dynamic model such as that outlined in the Mathematical Working Spaces (MWS) framework to describe the interplay between cognition and affect in the transitions from instrumental to discursive geneses in geometrical reasoning. The results, based on empirical data from a teaching experiment at a middle school, show that the use of dynamic geometry software favours students' attitudinal and volitional dimensions and helps them maintain productive affective pathways, affording greater intellectual independence in mathematical work and in interaction with the context, which impacts learning opportunities in geometric proofs. The reflective and heuristic dimensions of teacher mediation in students' learning are crucial in the transition from instrumental to discursive genesis and in working stability in the Instrumental-Discursive plane of MWS.
Abstract:
The curated commons is a model in which a flexible library building shell and its infrastructure can respond to the specific, time-sensitive needs of differing clients. It applies to faculty research, in particular small-science activities (as opposed to big-science activities that enjoy major support, including proprietary laboratories and facilities). It provides for the sustained transformation of library facilities, as well as their utilitarian and cyber-infrastructures, into a flexible, reconfigurable space with cutting-edge technology and sustained funding streams.
Abstract:
This research investigated annular field reversed configuration (AFRC) devices for high-power electric propulsion by demonstrating the acceleration of these plasmoids using an experimental prototype and measuring the plasmoid's velocity, impulse, and energy efficiency. The AFRC plasmoid translation experiment was designed and constructed with the aid of a dynamic circuit model. Two versions of the experiment were built, using underdamped RLC circuits at 10 kHz and 20 kHz. Input energies were varied from 100 J/pulse to 1000 J/pulse for the 10 kHz bank and 100 J/pulse for the 20 kHz bank. The plasmoids were formed in a static gas fill of argon, from 1 mTorr to 50 mTorr. Translation of the plasmoid was accomplished by incorporating a small taper into the outer coil, with a half angle of 2°. Magnetic field diagnostics, plasma probes, and single-frame imaging were used to measure the plasmoid's velocity and to diagnose plasmoid behavior. Full details of the device design, construction, and diagnostics are provided in this dissertation. The results demonstrated that a repeatable AFRC plasmoid was produced between the coils, yet it failed to translate under all tested conditions. The data revealed that the plasmoid's lifetime was limited to only a few (4-10) μs, too short for translation at low energy. A global stability study showed that the plasma suffered a radial collapse onto the inner wall early in its lifecycle. The radial collapse was traced to a magnetic pressure imbalance. A correction made to the circuit was successful in restoring an equilibrium pressure balance and prolonging radial stability by an additional 2.5 μs. The equilibrium state was sufficient to confirm that the plasmoid current in an AFRC reaches a steady state prior to the peak of the coil currents. This implies that the plasmoid will always be driven to the inner wall unless it translates out of the coils prior to the peak coil currents. However, ejection of the plasmoid before the peak coil currents results in severe efficiency losses. These results demonstrate the difficulty of designing an AFRC experiment for translation, as the requirements for stability, pressure balance, and efficient translation can compete with one another.
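For orientation, a first-order sketch of an underdamped RLC formation bank like the one described above. The component values are assumptions chosen only so that the ringing frequency lands near 10 kHz and the stored energy falls in the reported 100-1000 J/pulse range; they are not the experiment's actual parameters.

```python
import numpy as np

# Underdamped series RLC: I(t) = (V0/(omega_d*L)) * exp(-alpha*t) * sin(omega_d*t)
V0 = 4_000.0   # initial capacitor voltage [V] (assumed)
C = 100e-6     # capacitance [F] (assumed)
L = 2.5e-6     # inductance [H] (assumed)
R = 0.05       # circuit resistance [ohm] (assumed)

alpha = R / (2 * L)                      # damping rate [1/s]
omega0 = 1 / np.sqrt(L * C)              # undamped angular frequency
omega_d = np.sqrt(omega0**2 - alpha**2)  # damped (ringing) angular frequency

print(f"ringing frequency ~ {omega_d / (2 * np.pi) / 1e3:.1f} kHz")
print(f"stored energy = {0.5 * C * V0**2:.0f} J/pulse")
```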
Abstract:
The International Space Station (ISS) requires a substantial amount of potable water for use by the crew. The economic and logistic limitations of transporting the vast amount of water required onboard the ISS necessitate onboard recovery and reuse of the aqueous waste streams. Various treatment technologies are employed within the ISS water processor to render the waste water potable, including filtration, ion exchange, adsorption, and catalytic wet oxidation. The ion exchange resins and adsorption media are combined in multifiltration beds for removal of ionic and organic compounds. A mathematical model (MFBMODEL™) designed to predict the performance of a multifiltration (MF) bed was developed. MFBMODEL consists of ion exchange models describing the behavior of the different resin types in a MF bed (e.g., mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins) and an adsorption model capable of predicting the performance of the adsorbents in a MF bed. Multicomponent ion exchange equilibrium models that incorporate the water formation reaction, electroneutrality condition, and degree of ionization of weak acids and bases for mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins were developed and verified. The equilibrium models use a tanks-in-series approach that allows for consideration of variable influent concentrations. The adsorption modeling approach was developed in related studies, and its application within the MFBMODEL framework is demonstrated in the Appendix to this study. MFBMODEL consists of a graphical user interface programmed in Visual Basic and Fortran computational routines. This dissertation shows MF bed modeling results in which the model is verified for a surrogate of the ISS waste shower and handwash stream. In addition, a multicomponent ion exchange model that incorporates mass transfer effects was developed; it is capable of describing the performance of strong acid cation (SAC) and strong base anion (SBA) exchange resins, but does not include reaction effects. This dissertation presents results showing the mass transfer model's capability to predict the performance of binary and multicomponent column data for SAC and SBA exchange resins. The ion exchange equilibrium and mass transfer models developed in this study are also applicable to terrestrial water treatment systems. They could be applied to the removal of cations and anions from groundwater (e.g., hardness, nitrate, perchlorate) and from industrial process waters (e.g., boiler water, ultrapure water in the semiconductor industry).
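A minimal sketch of the tanks-in-series idea referenced above, reduced to a single solute with a linear liquid/resin partition factor (the actual MFBMODEL couples multicomponent ion exchange equilibria, electroneutrality, and the water formation reaction to this structure; all numbers below are hypothetical):

```python
import numpy as np

# Tanks-in-series column sketch: each tank is a CSTR; a linear partition
# factor K retards breakthrough relative to a non-sorbing tracer.
def tanks_in_series(c_in, n_tanks=20, bed_volume=1.0, flow=0.1, K=50.0, dt=0.05):
    v = bed_volume / n_tanks          # liquid volume per tank
    c = np.zeros(n_tanks)             # liquid-phase concentration per tank
    effluent = []
    for c0 in c_in:                   # c_in may vary in time (variable influent)
        inflow = c0
        for i in range(n_tanks):
            c[i] += dt * flow * (inflow - c[i]) / (v * (1.0 + K))
            inflow = c[i]
        effluent.append(c[-1])
    return np.array(effluent)

# Step change in influent from 0 to 1.0 meq/L
breakthrough = tanks_in_series(np.full(40_000, 1.0))
print(f"final effluent concentration: {breakthrough[-1]:.3f} meq/L")
```

Because the influent enters the cascade one time step at a time, a time-varying influent profile is handled simply by passing a non-constant `c_in` array.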
Abstract:
Planning, navigation, and search are fundamental human cognitive abilities central to spatial problem solving in search and rescue, law enforcement, and military operations. Despite a wealth of literature concerning naturalistic spatial problem solving in animals, the literature on naturalistic spatial problem solving in humans is comparatively sparse and is produced by separate camps among which there is little crosstalk. Addressing this deficiency will allow us to predict spatial decision making in operational environments and to understand the factors leading to those decisions. The present dissertation comprises two related efforts: (1) a set of empirical research studies intended to identify characteristics of planning, execution, and memory in naturalistic spatial problem solving tasks, and (2) a computational modeling effort to develop a model of naturalistic spatial problem solving. The results of the behavioral studies indicate that hierarchical representations of the problem space are linear in shape, and that human solutions are produced according to multiple optimization criteria. The Mixed Criteria Model presented in this dissertation accounts for global and local human performance in a traditional and a naturalistic Traveling Salesman Problem. The results of the empirical and modeling efforts hold implications for basic and applied science in domains such as problem solving, operations research, human-computer interaction, and artificial intelligence.
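A hedged sketch of what "multiple optimization criteria" can look like in a constructive Traveling Salesman heuristic: the next city is scored by a weighted blend of a local criterion (distance from the current city) and a global one (distance to the tour centroid). The criteria and weights here are illustrative, not those of the Mixed Criteria Model itself.

```python
import math

def mixed_criteria_tour(cities, w_local=0.7, w_global=0.3):
    """Greedy tour construction mixing a local and a global criterion."""
    cx = sum(x for x, _ in cities) / len(cities)   # centroid x
    cy = sum(y for _, y in cities) / len(cities)   # centroid y
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        x0, y0 = cities[tour[-1]]
        def score(j):
            xj, yj = cities[j]
            local = math.hypot(xj - x0, yj - y0)     # nearest-neighbor term
            global_ = math.hypot(xj - cx, yj - cy)   # centroid-distance term
            return w_local * local + w_global * global_
        nxt = min(unvisited, key=score)
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (1, 5), (4, 1), (6, 6), (2, 3)]
print(mixed_criteria_tour(cities))
```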
Abstract:
Every space launch increases the overall amount of space debris. Satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This modeling approach provides analysis methods that increase Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require the target to be solar-illuminated, since the received signal is temperature dependent. The characterization of debris objects through passive imaging techniques allows for further studies into the origin, specifications, and future trajectory of debris objects. Conclusions are drawn regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals; information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes using long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge of the orbit type and semi-major axis of the observed debris object is extracted via astrometric analysis; this knowledge may help constrain the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness through an intimate understanding of the debris environment surrounding the spacecraft.
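A first-order sketch of the thermal side of such a model: the radiative equilibrium temperature of a small sunlit debris sphere and the Planck radiance it would present to a long-wave infrared sensor. The optical properties are assumed round numbers; the dissertation's model additionally treats orbit geometry, eclipse transitions, and orientation over time.

```python
import numpy as np

SOLAR_FLUX = 1361.0    # W/m^2 at 1 AU
SIGMA = 5.670374e-8    # Stefan-Boltzmann constant [W m^-2 K^-4]

def equilibrium_temp(absorptivity=0.3, emissivity=0.8):
    # Sphere: absorbs over its cross-section, emits over its full surface.
    return ((absorptivity * SOLAR_FLUX) / (4 * emissivity * SIGMA)) ** 0.25

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance [W m^-2 sr^-1 m^-1]."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * k * T))

T = equilibrium_temp()
print(f"equilibrium temperature ~ {T:.0f} K")
print(f"spectral radiance at 10 um ~ {planck_radiance(10e-6, T):.2e} W m^-2 sr^-1 m^-1")
```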
Abstract:
Our goal here is a more complete understanding of how information about luminance contrast is encoded and used by the binocular visual system. In two-interval forced-choice experiments we assessed observers' ability to discriminate changes in contrast that could be an increase or decrease of contrast in one or both eyes, or an increase in one eye coupled with a decrease in the other (termed IncDec). The base or pedestal contrasts were either in-phase or out-of-phase in the two eyes. The opposed changes in the IncDec condition did not cancel each other out, implying that along with binocular summation, information is also available from mechanisms that do not sum the two eyes' inputs. These might be monocular mechanisms. With a binocular pedestal, monocular increments of contrast were much easier to see than monocular decrements. These findings suggest that there are separate binocular (B) and monocular (L,R) channels, but only the largest of the three responses, max(L,B,R), is available to perception and decision. Results from contrast discrimination and contrast matching tasks were described very accurately by this model. Stimuli, data, and model responses can all be visualized in a common binocular contrast space, allowing a more direct comparison between models and data. Some results with out-of-phase pedestals were not accounted for by the max model of contrast coding, but were well explained by an extended model in which gratings of opposite polarity create the sensation of lustre. Observers can discriminate changes in lustre alongside changes in contrast.
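A minimal numerical sketch of the max(L, B, R) rule described above, using a generic Naka-Rushton-style transducer with placeholder constants (the paper's fitted transducer and any interocular suppression terms will differ):

```python
def transducer(c, p=2.4, q=2.0, z=5.0):
    # Generic accelerating-then-compressive contrast transducer (placeholder).
    return c**p / (z + c**q)

def response(cL, cR):
    L = transducer(cL)        # left-eye monocular channel
    R = transducer(cR)        # right-eye monocular channel
    B = transducer(cL + cR)   # binocular summation channel
    return max(L, B, R)       # only the largest response reaches decision

# Monocular increment vs. decrement on a binocular pedestal:
ped = 10.0
base = response(ped, ped)
print(f"increment signal: {response(ped + 2, ped) - base:+.4f}")
print(f"decrement signal: {response(ped - 2, ped) - base:+.4f}")
```

This sketch only illustrates the readout structure; reproducing the increment/decrement asymmetry reported above requires the paper's fitted transducer and suppression terms.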
Abstract:
This study presents a computational parametric analysis of DME steam reforming in a large-scale Circulating Fluidized Bed (CFB) reactor. The Computational Fluid Dynamics (CFD) model used, which is based on Eulerian-Eulerian dispersed flow, was developed and validated in Part I of this study [1]. The effects of the reactor inlet configuration, gas residence time, inlet temperature, and steam-to-DME ratio on the overall reactor performance and products have all been investigated. The results show that the use of a double-sided solid feeding system brings a remarkable improvement in flow uniformity, but has a limited effect on the reactions and products. Temperature was found to play a dominant role in increasing the DME conversion and the hydrogen yield. According to the parametric analysis, it is recommended to run the CFB reactor at around 300 °C inlet temperature, a 5.5 steam-to-DME molar ratio, 4 s gas residence time, and 37,104 mL gcat⁻¹ h⁻¹ space velocity. Under these conditions, the DME conversion and the hydrogen molar concentration in the product gas were both found to be around 80%.
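As a quick check of how the recommended operating conditions relate to one another, the arithmetic below back-calculates the feed flow and gas volume implied by the reported space velocity and residence time; the catalyst inventory is an assumed figure, not a value from the study.

```python
space_velocity = 37_104.0   # reported space velocity [mL gcat^-1 h^-1]
residence_time_s = 4.0      # reported gas residence time [s]
catalyst_mass_g = 500.0     # assumed catalyst inventory [g]

vol_flow_mL_s = space_velocity * catalyst_mass_g / 3600.0
gas_volume_L = vol_flow_mL_s * residence_time_s / 1000.0
print(f"implied feed flow ~ {vol_flow_mL_s:.0f} mL/s")
print(f"implied gas volume in reactor ~ {gas_volume_L:.1f} L")
```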
Abstract:
The use of multi-material structures in industry, especially the automotive industry, is increasing. To overcome the difficulties in joining these structures, adhesives offer several benefits over traditional joining methods. Accurate simulation of the entire fracture process, including the adhesive layer, is therefore crucial. In this paper, material parameters of a previously developed meso-mechanical finite element (FE) model of a thin adhesive layer are optimized using the Strength Pareto Evolutionary Algorithm (SPEA2). The objective functions are defined as the error between experimental data and simulation data. The experimental data come from previously performed experiments in which an adhesive layer was loaded in monotonically increasing peel and shear. The two objective functions depend on 9 model parameters (decision variables) in total and are evaluated by running two FE simulations, one loading the adhesive layer in peel and the other in shear. The original study converted the two objective functions into a single function, which yielded one optimal solution. In this study, however, a Pareto front is obtained by employing the SPEA2 algorithm. The Pareto front thus provides more insight into the material model, objective functions, optimal solutions, and decision space. We compare the results and show good agreement with the experimental data.
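At the core of SPEA2 is the Pareto-dominance test between candidate solutions. The fragment below is a minimal non-dominated-filter sketch over hypothetical (peel error, shear error) pairs; the full SPEA2 adds strength-based fitness assignment, an external archive, and truncation.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (peel_error, shear_error) values for candidate parameter sets
candidates = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.7, 0.7), (0.4, 0.6)]
print(pareto_front(candidates))   # (0.7, 0.7) is dominated by (0.5, 0.5)
```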
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free of the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths, and noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix; from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which will affect any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant suppression of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. In general, the results showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
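The eigenvalue split that the principal component approach relies on is easy to reproduce in a toy setting: raw readings sharing one very large common noise (standing in for laser frequency noise) plus small independent noises (standing in for photodetector noise). The sizes and variances below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 100_000, 4
laser = rng.normal(scale=100.0, size=n_samples)   # large shared noise
data = np.stack([laser + rng.normal(scale=1.0, size=n_samples)
                 for _ in range(n_channels)], axis=1)

cov = np.cov(data, rowvar=False)                  # noise covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
print("eigenvalues:", np.round(eigvals, 2))
# One eigenvalue ~ n_channels * (laser variance); the rest sit at the
# photodetector variance. Projecting the data onto the small-eigenvalue
# eigenvectors cancels the shared noise, mirroring the role of the TDI
# observables.
```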
Abstract:
Declarative techniques such as Constraint Programming can be very effective in modeling and assisting management decisions. We present a method for managing university classrooms that extends the previous design of a Constraint-Informed Information System to generate timetables while dealing with spatial resource optimization issues. We seek to maximize space utilization along two dimensions: classroom use and occupancy rates. While we want to maximize the room use rate, we still need to satisfy the soft constraints that model students' and lecturers' preferences. We present a constraint logic programming-based local search method that relies on an evaluation function combining room utilization and timetable soft preferences. Based on this, we developed a tool that we applied to improving classroom allocation in a university. Compared to the current timetables, obtained without optimizing space utilization, the initial version of our tool reaches a 30% improvement in space utilization while preserving the quality of the timetable for both students and lecturers.
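A hedged sketch of an evaluation function of the kind described above, blending room-use and occupancy rates with a soft-preference penalty; the weights and data layout are assumptions for illustration, not the tool's calibrated values.

```python
def evaluate(assignment, rooms, w_util=0.6, w_pref=0.4):
    """assignment: list of (class_size, room_id, preference_penalty in [0,1])."""
    occupancy = sum(size / rooms[rid] for size, rid, _ in assignment) / len(assignment)
    room_use = len({rid for _, rid, _ in assignment}) / len(rooms)
    pref_penalty = sum(p for _, _, p in assignment) / len(assignment)
    return w_util * (occupancy + room_use) / 2 - w_pref * pref_penalty

rooms = {"A": 40, "B": 100, "C": 30}          # room capacities
timetable = [(35, "A", 0.0), (60, "B", 0.1), (25, "C", 0.0)]
print(f"score: {evaluate(timetable, rooms):.3f}")
```

A local search then proposes room or slot swaps and keeps those that raise this score, so utilization improves only when the soft-preference penalty does not grow faster.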
Abstract:
Transferring distribution models between different geographical areas may be problematic, as the performance of models outside their original scope is hard to predict. A modelling procedure is needed that captures the gist of the environmental descriptors of a distribution area without either overfitting to the training data or overestimating the species' distribution potential. We tested the transferability of the favourability function, a generalized linear model, on the distribution of the Iberian desman (Galemys pyrenaicus) in the Iberian territories of Portugal and Spain. We also tested the effects of two of the main potential constraints on model transferability: the analysed ranges of the predictor variables, and the completeness of the species distribution data. We modelled 10 km × 10 km presence/absence data from Portugal and Spain separately, extrapolated each model to the other country, and compared predictions with observations. The Spanish model, despite arguably containing more false absences, showed good predictive ability in Portugal. The Portuguese model, whose predictors ranged over only a subset of the values observed in Spain, overestimated desman distribution when transferred. We discuss possible reasons for this differential model behaviour, and highlight the importance of this kind of model for prediction and conservation applications.
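For reference, the favourability function transforms a logistic-regression output P into a prevalence-independent value, which is what makes models trained in areas with different presence/absence ratios comparable. A minimal sketch of the standard formulation follows; the sample counts are made up.

```python
import numpy as np

def favourability(P, n_presences, n_absences):
    """F = odds(P) / (n1/n0 + odds(P)): removes prevalence from model output."""
    odds = P / (1.0 - P)
    return odds / (n_presences / n_absences + odds)

P = np.array([0.2, 0.5, 0.8])        # logistic model predictions
print(np.round(favourability(P, n_presences=120, n_absences=880), 3))
```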
Abstract:
We propose a model for the quark-antiquark interaction in Minkowski space using the Covariant Spectator Theory. We show that, with an equal-weighted scalar-pseudoscalar structure for the confining part of our interaction kernel, the axial-vector Ward-Takahashi identity is preserved, and our model complies with the Adler-zero constraint for ππ scattering imposed by chiral symmetry.
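In our notation (a sketch of the structure named above, not necessarily the paper's exact conventions), the equal-weighted confining kernel reads

```latex
V_{\mathrm{conf}} \;\propto\; \tfrac{1}{2}\left(\mathbf{1}\otimes\mathbf{1} \;+\; \gamma^{5}\otimes\gamma^{5}\right) V(p,k),
```

where $V(p,k)$ is the momentum-space confining function; it is the equal weighting of the scalar and pseudoscalar Dirac structures that leaves the axial-vector Ward-Takahashi identity intact.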
Abstract:
Following up on earlier work on the $q\bar{q}$ bound-state problem using a covariant, chiral-symmetric formalism based upon the Covariant Spectator Theory, we study the heavy–light case for both pseudoscalar and vector mesons. Derived directly in Minkowski space, our approach approximates the full Bethe–Salpeter equation, effectively taking into account the contributions of both ladder and crossed-ladder diagrams in the kernel. Results for several mass spectra, obtained using a relativistic covariant generalization of a Cornell-plus-constant potential to model the interquark interaction, are given and discussed.
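For orientation, the nonrelativistic form underlying the interquark interaction named above is the Cornell potential plus a constant,

```latex
V(r) \;=\; -\,\frac{\alpha}{r} \;+\; \sigma r \;+\; C,
```

with Coulomb-like strength $\alpha$, string tension $\sigma$, and constant shift $C$; the model above replaces this with a relativistic covariant Minkowski-space generalization.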