Abstract:
Wide field-of-view (FOV) microscopy is of high importance to biological research and clinical diagnosis, where high-throughput screening of samples is needed. This thesis presents the development of several novel wide-FOV imaging technologies and demonstrates their capabilities in longitudinal imaging of living organisms, at scales ranging from viral plaques to live cells and tissues.
The ePetri Dish is a wide-FOV on-chip bright-field microscope. Here we apply the ePetri platform to plaque analysis of murine norovirus 1 (MNV-1). The ePetri can dynamically track plaques at the level of individual cell-death events over a wide FOV of 6 mm × 4 mm at 30-min intervals. A density-based clustering algorithm is used to analyze the spatio-temporal distribution of cell-death events and identify plaques at their earliest stages. We also demonstrate the capabilities of the ePetri in viral titer counting and in dynamically monitoring plaque formation, growth, and the influence of antiviral drugs.
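A minimal sketch of the plaque-identification step, assuming DBSCAN as the density-based clustering algorithm (the abstract does not name the specific algorithm); the event coordinates and the spatial/temporal weighting are illustrative only:

```python
# Cluster cell-death events into nascent plaques with DBSCAN (assumed
# algorithm; the thesis does not name it here). Coordinates are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: (x_um, y_um, t_min) of one detected cell-death event.
events = np.array([
    [120.0, 340.0,  30.0],
    [125.0, 338.0,  60.0],
    [131.0, 345.0,  90.0],
    [980.0, 110.0,  30.0],
])

# Scale time so that a 30-min interval is comparable to ~10 um of spread;
# this weighting is illustrative, not taken from the thesis.
X = events * np.array([1.0, 1.0, 10.0 / 30.0])

labels = DBSCAN(eps=25.0, min_samples=2).fit_predict(X)
for plaque_id in set(labels) - {-1}:  # -1 marks unclustered (noise) events
    print(f"plaque {plaque_id}: {np.sum(labels == plaque_id)} cell-death events")
```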
We developed another wide-FOV imaging technique, the Talbot microscope, for the fluorescence imaging of live cells. The Talbot microscope takes advantage of the Talbot effect and can generate a focal-spot array to scan fluorescent samples directly on-chip. It has a resolution of 1.2 μm and a FOV of ~13 mm². We further upgraded the Talbot microscope for long-term time-lapse fluorescence imaging of live cell cultures and analyzed the cells' dynamic response to an anticancer drug.
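As background, the self-imaging distance underlying the focal-spot generation is the Talbot length; for a periodic pattern of period d illuminated at wavelength λ (standard paraxial result; the thesis's actual design parameters are not given here):

```latex
% Talbot length: distance at which a periodic field of period d self-images
% under illumination at wavelength \lambda (paraxial approximation).
z_T = \frac{2 d^{2}}{\lambda}
```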
We present two wide-FOV endoscopes for tissue imaging, named the AnCam and the PanCam. The AnCam is based on contact image sensor (CIS) technology and can scan the whole anal canal within 10 seconds with a resolution of 89 μm, a maximum FOV of 100 mm × 120 mm, and a depth of field (DOF) of 0.65 mm. We also demonstrate the performance of the AnCam in whole anal canal imaging in both animal models and real patients. In addition, the PanCam is based on a smartphone platform integrated with a panoramic annular lens (PAL) and can capture a FOV of 18 mm × 120 mm in a single shot with a resolution of 100–140 μm. In this work we demonstrate the PanCam's performance in imaging a stained tissue sample.
Abstract:
Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers on certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets. Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high-gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
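The abstract names proper orthogonal decomposition (POD) as one of the two decomposition techniques. A minimal sketch of snapshot POD via the SVD, with hypothetical data shapes standing in for the LES ensembles:

```python
# Snapshot POD via the SVD: extract the most energetic modes from an
# ensemble of flow snapshots. Shapes and random data are placeholders,
# not the thesis's LES dataset.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 64                   # spatial points x ensemble
Q = rng.standard_normal((n_points, n_snapshots))  # stand-in for LES data

Q -= Q.mean(axis=1, keepdims=True)        # subtract the ensemble mean
U, s, _ = np.linalg.svd(Q, full_matrices=False)

energy = s**2 / np.sum(s**2)              # fraction of variance per mode
modes = U[:, :10]                         # ten most energetic POD modes
print("leading-mode energy fractions:", np.round(energy[:5], 3))
```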
Abstract:
We propose the analog-digital quantum simulation of the quantum Rabi and Dicke models using circuit quantum electrodynamics (QED). We find that all physical regimes, in particular those which are impossible to realize in typical cavity QED setups, can be simulated via unitary decomposition into digital steps. Furthermore, we show the emergence of the Dirac equation dynamics from the quantum Rabi model when the mode frequency vanishes. Finally, we analyze the feasibility of this proposal under realistic superconducting circuit scenarios.
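For reference, the quantum Rabi Hamiltonian whose coupling regimes are simulated takes the standard form below; the notation (mode frequency ω, qubit frequency ω_q, coupling g) is standard and not quoted from the paper. The Dirac-equation limit mentioned above corresponds to ω → 0.

```latex
% Quantum Rabi Hamiltonian: a qubit (frequency \omega_q) coupled with
% strength g to a single bosonic mode (frequency \omega). Standard form;
% the Dirac-equation dynamics emerges as \omega \to 0.
H = \hbar\,\omega\, a^{\dagger}a
  + \frac{\hbar\,\omega_{q}}{2}\,\sigma_{z}
  + \hbar g\,\sigma_{x}\left(a + a^{\dagger}\right)
```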
Abstract:
Research on assessment and monitoring methods has primarily focused on fisheries with long multivariate data sets. Less research exists on methods applicable to data-poor fisheries with univariate data sets of small sample size. In this study, we examine the capabilities of seasonal autoregressive integrated moving average (SARIMA) models to fit, forecast, and monitor the landings of such data-poor fisheries. We use a European fishery on meagre (Sciaenidae: Argyrosomus regius), for which only a short time series of landings was available to model (n = 60 months), as our case study. We show that despite the limited sample size, a SARIMA model could be found that adequately fitted and forecasted the time series of meagre landings (12-month forecasts; mean error: 3.5 tons (t); annual absolute percentage error: 15.4%). We derive model-based prediction intervals and show how they can be used to detect problematic situations in the fishery. Our results indicate that over the course of one year the meagre landings remained within the prediction limits of the model and therefore indicated no need for urgent management intervention. We discuss the information that SARIMA model structure conveys about the meagre lifecycle and fishery, the methodological requirements of SARIMA forecasting of data-poor fisheries landings, and the capabilities of SARIMA models within current efforts to monitor the world's most data-poor resources.
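A minimal sketch of the SARIMA workflow described, assuming statsmodels; the model order and the synthetic landings series are illustrative, not the fitted model from the study:

```python
# Fit a SARIMA model to a short monthly landings series (n = 60) and derive
# 12-month forecasts with prediction intervals. Order (1,1,1)x(1,1,1,12)
# and the synthetic data are assumptions, not the study's fitted model.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
idx = pd.date_range("2005-01", periods=60, freq="MS")   # n = 60 months
landings = pd.Series(
    20 + 5 * np.sin(2 * np.pi * np.arange(60) / 12) + rng.normal(0, 2, 60),
    index=idx,
)

model = SARIMAX(landings, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)

forecast = fit.get_forecast(steps=12)
intervals = forecast.conf_int(alpha=0.05)   # 95% prediction intervals
# A new month's landings falling outside its interval would flag a
# potentially problematic situation of the kind the abstract describes.
print(intervals.head())
```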
Abstract:
In this work we calibrate two different analytic models of semilocal strings by constraining the values of their free parameters. In order to do so, we use data obtained from the largest and most accurate field theory simulations of semilocal strings to date, and compare several key properties with the predictions of the models. As this is still work in progress, we present some preliminary results together with descriptions of the methodology we are using in the characterisation of semilocal string networks.
Abstract:
The age and growth dynamics of the spinner shark (Carcharhinus brevipinna) in the northwest Atlantic Ocean off the southeast United States and in the Gulf of Mexico were examined, and four growth models were used to examine variation in the ability to fit size-at-age data. The von Bertalanffy growth model, an alternative equation of the von Bertalanffy growth model with a size-at-birth intercept, the Gompertz growth model, and a logistic model were fitted to sex-specific observed size-at-age data. Based on the statistical criteria set for this study (e.g., lowest mean square error [MSE], highest coefficient of determination, and greatest level of significance), the logistic model provided the best overall fit to the size-at-age data, whereas the von Bertalanffy growth model gave the worst. In terms of biological validity, the von Bertalanffy model for female sharks provided estimates similar to those reported in other studies. However, the von Bertalanffy model was deemed inappropriate for describing the growth of male spinner sharks because estimates of theoretical maximum size (L∞) indicated a size much larger than that observed in the field. In contrast, the growth coefficient (k = 0.14/yr) from the Gompertz model provided an estimate most similar to those reported for other large coastal species. The analysis of growth for the spinner shark in the present study demonstrates the importance of fitting alternative models when standard models fit the data poorly or when growth estimates do not appear realistic.
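A minimal sketch of fitting three of the four growth models by least squares, assuming scipy; the size-at-age data and starting values are hypothetical, not the study's observations:

```python
# Fit von Bertalanffy, Gompertz, and logistic growth curves to size-at-age
# data by nonlinear least squares; sample data and p0 are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, L_inf, k, t0):
    return L_inf * (1 - np.exp(-k * (t - t0)))

def gompertz(t, L_inf, k, t0):
    return L_inf * np.exp(-np.exp(-k * (t - t0)))

def logistic(t, L_inf, k, t0):
    return L_inf / (1 + np.exp(-k * (t - t0)))

age = np.array([1, 2, 3, 4, 6, 8, 10, 12, 15], dtype=float)        # years
length = np.array([85, 110, 130, 145, 165, 180, 190, 196, 200.0])  # cm

for name, f in [("VBGF", von_bertalanffy), ("Gompertz", gompertz),
                ("logistic", logistic)]:
    params, _ = curve_fit(f, age, length, p0=[210.0, 0.2, 0.0])
    mse = np.mean((length - f(age, *params)) ** 2)   # one selection criterion
    print(f"{name}: L_inf={params[0]:.1f} cm, k={params[1]:.2f}/yr, MSE={mse:.1f}")
```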
Abstract:
Numerous psychophysical studies suggest that the sensorimotor system chooses actions that optimize the average cost associated with a movement. Recently, however, violations of this hypothesis have been reported, in line with economic theories of decision-making that consider not only the mean payoff but are also sensitive to risk, that is, the variability of the payoff. Here, we examine the hypothesis that risk-sensitivity in sensorimotor control arises as a mean-variance trade-off in movement costs. We designed a motor task in which participants could choose between a sure motor action that resulted in a fixed amount of effort and a risky motor action that resulted in a variable amount of effort that could be either lower or higher than the fixed effort. By changing the mean effort of the risky action while experimentally fixing its variance, we determined indifference points at which participants chose equiprobably between the sure, fixed-effort option and the risky, variable-effort option. Depending on whether participants accepted a variable effort with a mean that was higher than, lower than, or equal to the fixed effort, they could be classified as risk-seeking, risk-averse, or risk-neutral. Most subjects were risk-sensitive in our task, consistent with a mean-variance trade-off in effort, thereby underlining the importance of risk-sensitivity in computational models of sensorimotor control.
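One minimal way to formalize the classification, under a mean-variance cost assumption (notation ours, not the paper's): assign the risky option the subjective cost below and read the sign of the risk parameter off the indifference point.

```latex
% Mean-variance cost of the risky action with mean effort \mu and variance
% \sigma^2; \theta is the risk-sensitivity parameter (notation assumed).
C_{\text{risky}} = \mu + \theta\,\sigma^{2}, \qquad
C_{\text{sure}} = E_{\text{fixed}}
% At the indifference point C_{\text{risky}} = C_{\text{sure}}, hence
\theta = \frac{E_{\text{fixed}} - \mu^{\ast}}{\sigma^{2}}
```

Here θ > 0 (indifference mean μ* below the fixed effort) corresponds to risk aversion, θ < 0 to risk seeking, and θ = 0 to risk neutrality.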
Abstract:
Bycatch, or the incidental catch of nontarget organisms during fishing operations, is a major issue in U.S. shrimp trawl fisheries. Because bycatch is typically discarded at sea, total bycatch is usually estimated by extrapolating from an observed bycatch sample to the entire fleet with either mean-per-unit or ratio estimators. Using both field observations of commercial shrimp trawlers and computer simulations, I compared five methods for generating bycatch estimates that were used in past studies, a mean-per-unit estimator and four forms of the ratio estimator: 1) the mean fish catch per unit of effort, where unit effort served as a proxy for sample size; 2) the mean of the individual fish-to-shrimp ratios; 3) the ratio of mean fish catch to mean shrimp catch; 4) the mean of the ratios of fish catch per time fished (a variable measure of effort); and 5) the ratio of mean fish catch to mean time fished. For field data, the different methods used to estimate bycatch of Atlantic croaker, spot, and weakfish yielded extremely different results, with no discernible pattern in the estimates by method, geographic region, or species. Simulated fishing fleets were used to compare bycatch estimated by the five methods with "actual" (simulated) bycatch. Simulations were conducted by using both normal and delta-lognormal distributions of fish and shrimp and employed a range of values for several parameters, including mean catches of fish and shrimp, variability in the catches of fish and shrimp, variability in fishing effort, number of observations, and correlations between fish and shrimp catches. Results indicated that only the mean-per-unit estimator provided statistically unbiased estimates, while all other methods overestimated bycatch. The mean of the individual fish-to-shrimp ratios, the method used in the South Atlantic Bight before the 1990s, gave the most biased estimates. Because of the statistically significant two- and three-way interactions among parameters, it is unlikely that estimates generated by one method can be converted or corrected to estimates made by another method; therefore bycatch estimates obtained with different methods should not be compared directly.
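A minimal sketch of the five estimators as they might be computed from per-tow observer data; the sample numbers are hypothetical:

```python
# The five per-tow bycatch estimators compared in the abstract; the data
# below are made up for illustration.
import numpy as np

fish   = np.array([12.0,  5.0, 30.0,  8.0, 15.0])   # fish bycatch per tow
shrimp = np.array([40.0, 22.0, 55.0, 18.0, 35.0])   # shrimp catch per tow
hours  = np.array([ 3.0,  2.0,  4.0,  2.5,  3.5])   # time fished per tow

# 1) mean fish catch per unit of effort (each tow = one unit of effort)
est1 = fish.mean()
# 2) mean of the individual fish-to-shrimp ratios
est2 = np.mean(fish / shrimp)
# 3) ratio of mean fish catch to mean shrimp catch
est3 = fish.mean() / shrimp.mean()
# 4) mean of the per-tow fish-per-hour ratios
est4 = np.mean(fish / hours)
# 5) ratio of mean fish catch to mean time fished
est5 = fish.mean() / hours.mean()

# Ratio estimators (2-5) are scaled up by total fleet shrimp catch or total
# fleet effort; the mean-per-unit estimator (1) by the total number of tows.
print(est1, est2, est3, est4, est5)
```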
Abstract:
The yield behaviour of two aluminum alloy foams (Alporas and Duocel) has been investigated for a range of axisymmetric compressive stress states. The initial yield surface has been measured, and the evolution of the yield surface has been explored for uniaxial and hydrostatic stress paths. It is found that the hydrostatic yield strength is of similar magnitude to the uniaxial yield strength. The yield surfaces are of quadratic shape in the stress space of mean stress versus effective stress, and evolve without corner formation. Two phenomenological isotropic constitutive models for the plastic behaviour are proposed. The first is based on a geometrically self-similar yield surface while the second is more complex and allows for a change in shape of the yield surface due to differential hardening along the hydrostatic and deviatoric axes. Good agreement is observed between the experimentally measured stress versus strain responses and the predictions of the models.
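One common quadratic form for an isotropic foam yield surface in the (mean stress, effective stress) plane, consistent with the description above; the parameterization is an assumption, not quoted from the paper:

```latex
% Quadratic yield surface: yield when the equivalent stress \hat{\sigma}
% reaches the current yield strength Y; \alpha sets the ellipse aspect
% ratio in (\sigma_m, \sigma_e) space. Parameterization assumed.
\hat{\sigma}^{2} \equiv \frac{1}{1+(\alpha/3)^{2}}
\left[\sigma_{e}^{2} + \alpha^{2}\sigma_{m}^{2}\right], \qquad
\hat{\sigma} \le Y
```

In the geometrically self-similar model, the shape parameter α stays fixed and only Y hardens; letting α evolve produces the differential hardening along the hydrostatic and deviatoric axes described above.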
Abstract:
Observation shows that the watershed-scale models in common use in the United States (US) differ from those used in the European Union (EU). The question arises whether the difference in model use is due to familiarity or necessity. Do conditions on each continent require the use of unique watershed-scale models, or are models sufficiently customizable that independent development of models serving the same purpose (e.g., continuous/event-based, lumped/distributed, field-/watershed-scale) is unnecessary? This paper explores this question through the application of two continuous, semi-distributed, watershed-scale models (HSPF and HBV-INCA) to a rural catchment in southern England. The Hydrological Simulation Program-Fortran (HSPF) model is in wide use in the United States. The Integrated Catchments (INCA) model has been used extensively in Europe, and particularly in England. The results of simulation from both models are presented herein. Both models performed adequately according to the criteria set for them. This suggests that there was no necessity to have alternative, yet similar, models. This partially supports a general conclusion that resources should be devoted to training in the use of existing models rather than development of new models that serve a similar purpose to existing models. A further comparison of water quality predictions from both models may alter this conclusion.
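The abstract does not state the performance criteria used; the Nash-Sutcliffe efficiency (NSE), a standard goodness-of-fit measure for watershed models, is sketched here as one plausible choice:

```python
# Nash-Sutcliffe efficiency: NSE = 1 is a perfect fit; NSE <= 0 means the
# model predicts no better than the observed mean. (The abstract does not
# name its criteria; NSE is an assumption.)
import numpy as np

def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical daily streamflow (m^3/s), observed vs. one model's output:
q_obs = np.array([1.2, 1.5, 2.8, 2.1, 1.7, 1.4])
q_sim = np.array([1.1, 1.6, 2.5, 2.2, 1.8, 1.3])
print(f"NSE: {nse(q_obs, q_sim):.2f}")
```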
Abstract:
This paper presents the application of advanced compact models of the IGBT and PIN diode to the full electrothermal system simulation of a hybrid electric vehicle converter using a look-up table of device losses. The Fourier-based solution model is used, which accounts for features such as local lifetime control and field-stop technology. Device and circuit parameters are extracted from experimental waveforms and device structural data. Matching of the switching waveforms and the resulting generation of the look-up table are presented. An example of the use of the look-up tables in simulating inverter device temperatures is also given, for a hypothetical electric vehicle subjected to an urban driving cycle.
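A minimal sketch of how a device-loss look-up table might be used in the system simulation, assuming scipy interpolation; the grid values are hypothetical, not the extracted device data:

```python
# Electrothermal loss look-up table: losses pre-computed on a
# (current, junction-temperature) grid by the compact device model, then
# interpolated during system simulation. Grid values are made up.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

current_A = np.array([50.0, 100.0, 200.0, 300.0])   # load current grid
temp_C    = np.array([25.0, 75.0, 125.0])           # junction temp grid
# Energy lost per switching event (mJ), indexed [current, temperature]:
e_sw_mJ = np.array([[1.0, 1.2, 1.5],
                    [2.1, 2.5, 3.0],
                    [4.4, 5.2, 6.3],
                    [6.9, 8.1, 9.8]])

loss_table = RegularGridInterpolator((current_A, temp_C), e_sw_mJ)
print(loss_table([[150.0, 100.0]]))   # interpolated loss at 150 A, 100 degC
```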
Abstract:
Accurate and efficient computation of the nearest wall distance d (or level set) is important for many areas of computational science and engineering. Differential-equation-based distance/level-set algorithms, such as the hyperbolic Eikonal equation, have demonstrated valuable computational efficiency. Here, treated as an 'auxiliary' equation alongside the main flow equations, the Eikonal equation is solved efficiently with two different finite-volume approaches (cell-vertex and cell-centered). Application of the distance solution is studied for various geometries. Moreover, a procedure using the differential field to obtain the medial axis transform (MAT) for different geometries is presented. The latter provides a skeleton representation of geometric models that has many useful analysis properties. As an alternative to purely geometric methods (e.g., the Voronoi approach), the current d-MAT procedure bypasses many difficulties usually encountered by such methods, especially in three-dimensional space. It is also shown that the d-MAT approach provides the potential to sculpt/control the MAT form for specialized solution purposes.
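For reference, the Eikonal equation being solved, in its standard form:

```latex
% Eikonal equation for the nearest wall distance d: unit-magnitude gradient
% away from walls, with d = 0 on the wall boundary \Gamma_w.
|\nabla d| = 1 \quad \text{in } \Omega, \qquad d = 0 \quad \text{on } \Gamma_w
```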
Abstract:
In the field of motor control, two hypotheses have been controversial: whether the brain acquires internal models that generate accurate motor commands, or whether the brain avoids this by using the viscoelasticity of the musculoskeletal system. Recent observations of relatively low stiffness during trained movements support the existence of internal models. However, no study has revealed the decrease in viscoelasticity associated with learning that would imply improvement of internal models as well as synergy between the two hypothetical mechanisms. Previously observed decreases in electromyogram (EMG) activity might have other explanations, such as trajectory modifications that reduce joint torques. To circumvent such complications, we required strict trajectory control and examined only successful trials having identical trajectory and torque profiles. Subjects were asked to perform a hand movement in unison with a target moving along a specified and unusual trajectory, with the shoulder and elbow in the horizontal plane at shoulder level. To evaluate joint viscoelasticity during the learning of this movement, we proposed an index of muscle co-contraction around the joint (IMCJ). The IMCJ was defined as the summation of the absolute values of antagonistic muscle torques around the joint and was computed from the linear relation between surface EMG and joint torque. The IMCJ during isometric contraction, as well as during movements, was confirmed to correlate well with joint stiffness estimated using the conventional method, i.e., applying mechanical perturbations. Accordingly, the IMCJ during the learning of the movement was computed for each joint in each trial using the estimated EMG-torque relationship. At the same time, the performance error for each trial was specified as the root mean square of the distance between the target and the hand at each time step over the entire trajectory. The time-series data of IMCJ and performance error were decomposed into long-term components, which showed decreases in IMCJ in accordance with learning with little change in the trajectory, and short-term interactions between the IMCJ and performance error. A cross-correlation analysis and impulse responses both suggested that higher IMCJs follow poor performances, and lower IMCJs follow good performances, within a few successive trials. Our results support the hypothesis that viscoelasticity contributes more when internal models are inaccurate, while internal models contribute more after the completion of learning. It is demonstrated that the CNS regulates viscoelasticity on a short- and long-term basis depending on performance error, and finally acquires smooth and accurate movements while maintaining stability throughout the entire learning process.
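In symbols (notation ours, consistent with the description above):

```latex
% IMCJ for joint j: sum of absolute antagonistic muscle torques \tau_{ij}
% estimated from surface EMG. Performance error e: RMS distance between
% target and hand over the N time steps of a trial.
\mathrm{IMCJ}_j = \sum_{i} \left| \tau_{ij} \right|, \qquad
e = \sqrt{\frac{1}{N} \sum_{t=1}^{N}
\left\| \mathbf{x}_{\mathrm{target}}(t) - \mathbf{x}_{\mathrm{hand}}(t) \right\|^{2}}
```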
Abstract:
In this study, various scalar dissipation rates and their modelling in the context of partially premixed flames are investigated. A DNS dataset of the near field of a turbulent hydrogen lifted jet flame is processed to analyse the mixture-fraction and progress-variable dissipation rates and their cross-dissipation rate at several axial positions. It is found that the classical model for the passive-scalar dissipation rate $\tilde{\varepsilon}_{ZZ}$ gives good agreement with the DNS, while models developed based on premixed flames for the reactive-scalar dissipation rate $\tilde{\varepsilon}_{cc}$ only qualitatively capture the correct trend. The cross-dissipation rate $\tilde{\varepsilon}_{cZ}$ is mostly negative and can be reasonably approximated at downstream positions once $\tilde{\varepsilon}_{ZZ}$ and $\tilde{\varepsilon}_{cc}$ are known, although its sign cannot be determined. This approach gives better results than one employing a constant ratio of the turbulent timescale and the scalar covariance $\widetilde{c'Z'}$. The statistics of the scalar gradients are further examined, and lognormal distributions are shown to be very good approximations for the passive scalar and acceptable for the reactive scalar. The correlation between the two gradients increases downstream as the partially premixed flame in the near field evolves toward a diffusion flame in the far field. A bivariate lognormal distribution is tested and found to be a reasonable approximation for the joint PDF of the two scalar gradients.
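The classical closure referred to for the passive scalar is typically the linear-relaxation model, in which the scalar variance is dissipated on the turbulence timescale; standard form, with the model constant $C_Z$ not taken from the paper:

```latex
% Classical closure for the passive-scalar dissipation rate: the scalar
% variance \widetilde{Z''^2} relaxes on the turbulence timescale k/\varepsilon,
% with model constant C_Z (standard form; constant not from the paper).
\tilde{\varepsilon}_{ZZ} = C_Z \,\frac{\tilde{\varepsilon}}{\tilde{k}}\,\widetilde{Z''^{2}}
```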