968 results for Parameter space


Relevance: 60.00%

Abstract:

Double-pulse tests are commonly used to assess the switching performance of power semiconductor switches in a clamped inductive switching application. Data generated from these tests typically take the form of sampled waveform data captured with an oscilloscope. Where it is of interest to explore a multi-dimensional parameter space and the corresponding result space, it is necessary to reduce the data to key performance metrics via feature extraction. This paper presents techniques for the extraction of switching performance metrics from sampled double-pulse waveform data. The reported techniques are applied to experimental data from the characterisation of a cascode gate drive circuit applied to power MOSFETs.
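
As an illustration of this kind of feature extraction, the sketch below computes two common switching metrics (a 10-90% rise time and a switching energy) from sampled voltage and current arrays. The synthetic waveforms, thresholds and variable names are assumptions for the example, not the metrics or data of the paper.

```python
import numpy as np

def rise_time(t, v, lo=0.1, hi=0.9):
    """Time for v to rise from lo to hi of its final value (10-90% by default)."""
    v_final = v[-1]
    t_lo = t[np.argmax(v >= lo * v_final)]   # first sample above the lower threshold
    t_hi = t[np.argmax(v >= hi * v_final)]   # first sample above the upper threshold
    return t_hi - t_lo

def switching_energy(t, v, i):
    """Energy dissipated in the device over the captured transient: integral of v*i dt."""
    return np.trapz(v * i, t)

# Hypothetical sampled double-pulse waveforms: time [s], drain-source voltage [V], drain current [A]
t = np.linspace(0.0, 200e-9, 2001)
v_ds = 400.0 / (1.0 + np.exp(-(t - 100e-9) / 5e-9))   # synthetic rising voltage edge at turn-off
i_d = 20.0 / (1.0 + np.exp((t - 110e-9) / 5e-9))      # synthetic falling current
print(rise_time(t, v_ds), switching_energy(t, v_ds, i_d))
```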

Relevance: 60.00%

Abstract:

We prove the existence of novel, shock-fronted travelling wave solutions to a model of wound healing angiogenesis studied in Pettet et al (2000 IMA J. Math. App. Med. 17 395–413) assuming two conjectures hold. In the previous work, the authors showed that for certain parameter values, a heteroclinic orbit in the phase plane representing a smooth travelling wave solution exists. However, upon varying one of the parameters, the heteroclinic orbit was destroyed, or rather cut-off, by a wall of singularities in the phase plane. As a result, they concluded that under this parameter regime no travelling wave solutions existed. Using techniques from geometric singular perturbation theory and canard theory, we show that a travelling wave solution actually still exists for this parameter regime. We construct a heteroclinic orbit passing through the wall of singularities via a folded saddle canard point onto a repelling slow manifold. The orbit leaves this manifold via the fast dynamics and lands on the attracting slow manifold, finally connecting to its end state. This new travelling wave is no longer smooth but exhibits a sharp front or shock. Finally, we identify regions in parameter space where we expect that similar solutions exist. Moreover, we discuss the possibility of more exotic solutions.
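
The phase-plane construction referred to here comes from the standard travelling wave reduction. A generic sketch of that reduction is given below; it is not the specific wound-healing model of Pettet et al., whose variables and fluxes are not reproduced in this abstract.

```latex
% Travelling wave ansatz: seek u(x,t) = U(z) with wave coordinate z = x - ct.
% For a generic reaction--transport law u_t + (f(u))_x = g(u) this gives the reduced ODE
\[
  \bigl(f'(U) - c\bigr)\,U'(z) \;=\; g(U),
\]
% whose heteroclinic orbits between the equilibria U(-\infty) and U(+\infty) are the
% travelling waves; a "wall of singularities" arises where the coefficient f'(U) - c
% vanishes, so the reduced system ceases to be a regular ODE there.
```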

Relevance: 60.00%

Abstract:

In vitro cell biology assays play a crucial role in informing our understanding of the migratory, proliferative and invasive properties of many cell types in different biological contexts. While mono-culture assays involve the study of a population of cells composed of a single cell type, co-culture assays study a population of cells composed of multiple cell types (or subpopulations of cells). Such co-culture assays can provide more realistic insights into many biological processes including tissue repair, tissue regeneration and malignant spreading. Typically, system parameters, such as motility and proliferation rates, are estimated by calibrating a mathematical or computational model to the observed experimental data. However, parameter estimates can be highly sensitive to the choice of model and modelling framework. This observation motivates us to consider the fundamental question of how we can best choose a model to facilitate accurate parameter estimation for a particular assay. In this work we describe three mathematical models of mono-culture and co-culture assays that include different levels of spatial detail. We study various spatial summary statistics to explore whether they can be used to distinguish between the suitability of each model over a range of parameter space. Our results for mono-culture experiments are promising, in that we suggest two spatial statistics that can be used to direct model choice. However, co-culture experiments are far more challenging: we show that these same spatial statistics, which provide useful insight into mono-culture systems, are insufficient for co-culture systems. Therefore, we conclude that great care ought to be exercised when estimating the parameters of co-culture assays.
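
One concrete example of a spatial summary statistic of the kind discussed here is a simple pair-correlation-type index: the number of cell pairs closer than a distance r, normalised by the count expected for a spatially random population of the same density. The sketch below uses made-up positions, ignores edge corrections, and is not necessarily one of the statistics examined in the paper.

```python
import numpy as np

def pair_correlation_index(positions, r, area):
    """Number of distinct cell pairs closer than r, normalised by the count expected
    for a spatially random (Poisson) pattern of the same density in a domain of the
    given area. Values > 1 suggest clustering, < 1 segregation; edge effects ignored."""
    n = len(positions)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    pairs = (np.count_nonzero(d < r) - n) / 2            # drop self-distances, count each pair once
    expected = n * (n - 1) / 2 * np.pi * r**2 / area     # expectation under complete spatial randomness
    return pairs / expected

rng = np.random.default_rng(0)
cells = rng.uniform(0.0, 100.0, size=(200, 2))           # hypothetical mono-culture snapshot, 100 x 100 domain
print(pair_correlation_index(cells, r=10.0, area=100.0**2))   # close to 1 for a random pattern
```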

Relevance: 60.00%

Abstract:

E. coli performs chemotaxis via a biased random walk composed of alternating periods of swimming (runs) and reorientations (tumbles). Tumbles are typically modelled as complete directional randomisations, but it is known that in wild-type E. coli successive run directions are actually weakly correlated, with a mean directional difference of ∼63°. We recently presented a model of the evolution of chemotactic swimming strategies in bacteria which is able to quantitatively reproduce the emergence of this correlation. The agreement between model and experiments suggests that directional persistence may serve some function, a hypothesis supported by the results of an earlier model. Here we investigate the effect of persistence on chemotactic efficiency, using a spatial Monte Carlo model of bacterial swimming in a gradient, combined with simulations of natural selection based on chemotactic efficiency. A direct search of the parameter space reveals two attractant gradient regimes: (a) a low-gradient regime, in which efficiency is unaffected by directional persistence, and (b) a high-gradient regime, in which persistence can improve chemotactic efficiency. The value of the persistence parameter that maximises this effect corresponds very closely with the value observed experimentally. This result is matched by independent simulations of the evolution of directional memory in a population of model bacteria, which also predict the emergence of persistence in high-gradient conditions. The relationship between optimality and persistence in different environments may reflect a universal property of random-walk foraging algorithms, which must strike a compromise between two competing aims: exploration and exploitation. We also present a new graphical way to illustrate, in general, the evolution of a particular trait in a population, in terms of variations in an evolvable parameter.
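
A minimal Monte Carlo sketch of a run-and-tumble walker with a directional-persistence parameter is shown below. The tumble rule, gradient bias and parameter values are illustrative assumptions, not the model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_and_tumble(steps=5000, speed=20.0, dt=0.1, persistence=0.3, bias=0.05):
    """2D run-and-tumble walk: the cell runs at constant speed, tumbles with a probability
    that is reduced when moving up the gradient (+x direction), and draws each new heading
    relative to the old one, so persistence=0 gives a uniform tumble angle and larger
    values give increasingly correlated run directions."""
    pos = np.zeros(2)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    for _ in range(steps):
        p_tumble = 0.1 * (1.0 - bias * np.cos(theta))    # crude chemotactic modulation of the tumble rate
        if rng.random() < p_tumble:
            theta += rng.uniform(-np.pi, np.pi) * (1.0 - persistence)
        pos += speed * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos[0]                                        # net displacement up the gradient

print(np.mean([run_and_tumble() for _ in range(20)]))    # crude chemotactic efficiency estimate
```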

Relevance: 60.00%

Abstract:

Functional MRI studies commonly refer to activation patterns as being localized in specific Brodmann areas, i.e. Brodmann's divisions of the human cortex based on cytoarchitectonic boundaries [3]. Typically, the Brodmann areas that match regions in the group-averaged functional maps are estimated by eye, leading to inaccurate parcellations and significant error. To avoid this limitation, we developed a method using high-dimensional nonlinear registration to project the Brodmann areas onto individual 3D co-registered structural and functional MRI datasets, using an elastic deformation vector field in the cortical parameter space. Based on a sulcal pattern matching approach [11], an N=27-scan single-subject atlas (the Colin Holmes atlas [15]) with associated Brodmann areas labeled on its surface was deformed to match 3D cortical surface models generated from individual subjects' structural MRIs (sMRIs). The deformed Brodmann areas were used to quantify and localize functional MRI (fMRI) BOLD activation during the performance of the Tower of London task [7].
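
The label-projection step can be pictured, in highly simplified form, as warping the atlas surface by a deformation field and then assigning each subject vertex the Brodmann label of its nearest warped atlas vertex. The sketch below uses hypothetical arrays and a nearest-neighbour lookup; the actual pipeline relies on elastic registration in the cortical parameter space rather than this crude transfer.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_labels(atlas_vertices, atlas_labels, displacement, subject_vertices):
    """Warp atlas surface vertices by a precomputed displacement field, then assign each
    subject vertex the Brodmann label of its nearest warped atlas vertex."""
    warped = atlas_vertices + displacement             # (N, 3) vertices + (N, 3) displacements
    _, nearest = cKDTree(warped).query(subject_vertices)
    return atlas_labels[nearest]

# Hypothetical inputs: 1000 atlas surface vertices with integer Brodmann labels,
# a zero displacement field, and 1200 subject surface vertices.
rng = np.random.default_rng(2)
atlas_v = rng.normal(size=(1000, 3))
labels = rng.integers(1, 48, size=1000)
subject_v = rng.normal(size=(1200, 3))
print(transfer_labels(atlas_v, labels, np.zeros((1000, 3)), subject_v)[:10])
```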

Relevance: 60.00%

Abstract:

Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. ©2006 Society for Conservation Biology.
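
For reference, the classical (cost-negligent) sensitivities and elasticities that the paper argues against using in isolation are computed from the dominant eigenvalue and eigenvectors of the projection matrix. The sketch below shows that standard calculation with an illustrative two-stage matrix; the cost-aware optimisation derived in the paper is not reproduced here.

```python
import numpy as np

def sensitivity_elasticity(A):
    """Sensitivities s_ij = v_i * w_j / <v, w> and elasticities e_ij = (a_ij / lam) * s_ij
    of the dominant eigenvalue lam of a projection matrix A, with w the right eigenvector
    (stable stage distribution) and v the left eigenvector (reproductive values)."""
    vals, W = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam = vals[k].real
    w = np.abs(W[:, k].real)
    vals_T, V = np.linalg.eig(A.T)                 # left eigenvectors of A = right eigenvectors of A.T
    v = np.abs(V[:, np.argmax(vals_T.real)].real)
    S = np.outer(v, w) / (v @ w)
    E = A / lam * S
    return lam, S, E

# Illustrative two-stage matrix: fecundities in the top row, survival/transition rates below.
A = np.array([[0.5, 1.2],
              [0.4, 0.8]])
lam, S, E = sensitivity_elasticity(A)
print(lam)
print(E)                                           # elasticities sum to 1 for a primitive matrix
```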

Relevance: 60.00%

Abstract:

Acid hydrolysis is a popular pretreatment for removing hemicellulose from lignocelluloses in order to produce a digestible substrate for enzymatic saccharification. In this work, a novel model for the dilute acid hydrolysis of hemicellulose within sugarcane bagasse is presented and calibrated against experimental oligomer profiles. The efficacy of mathematical models as hydrolysis yield predictors and as vehicles for investigating the mechanisms of acid hydrolysis is also examined. Experimental xylose, oligomer (degree of polymerisation 2 to 6) and furfural yield profiles were obtained for bagasse under dilute acid hydrolysis conditions at temperatures ranging from 110 °C to 170 °C. Population balance kinetics, diffusion and porosity evolution were incorporated into a mathematical model of the acid hydrolysis of sugarcane bagasse. This model was able to produce a good fit to experimental xylose yield data with only three unknown kinetic parameters, $k_a$, $k_b$ and $k_d$. However, fitting this same model to an expanded data set of oligomeric and furfural yield profiles did not successfully reproduce the experimental results. It was found that a "hard-to-hydrolyse" parameter, $\alpha$, was required in the model to ensure reproducibility of the experimental oligomer profiles at 110 °C, 125 °C and 140 °C. The parameters obtained through the fitting exercises at lower temperatures could be used to predict the oligomer profiles at 155 °C and 170 °C with promising results. The interpretation of kinetic parameters obtained by fitting a model to only a single set of data may be ambiguous. Although these parameters may correctly reproduce the data, they may not be indicative of the actual rate parameters, unless some care has been taken to ensure that the model describes the true mechanisms of acid hydrolysis. It is possible to challenge the robustness of the model by expanding the experimental data set and hence limiting the parameter space for the fitting parameters. The novel combination of "hard-to-hydrolyse" and population balance dynamics in the model presented here appears to stand up to such rigorous fitting constraints.
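
A much-simplified lumped version of such a kinetic model is sketched below: an easily hydrolysed and a "hard-to-hydrolyse" xylan pool (fraction $\alpha$) release xylose, which degrades to furfural. It is meant only to show how parameters like $k_a$, $k_b$, $k_d$ and $\alpha$ enter a fitted yield curve; the paper's model additionally includes population balance kinetics, diffusion and porosity evolution, and its rate constants may be defined differently.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hydrolysis_rhs(t, y, ka, kb, kd):
    """Lumped first-order chain: an 'easy' and a 'hard-to-hydrolyse' xylan pool release
    xylose (rate constants ka and kb), and xylose degrades to furfural (rate constant kd)."""
    xylan_easy, xylan_hard, xylose, furfural = y
    r_easy, r_hard, r_deg = ka * xylan_easy, kb * xylan_hard, kd * xylose
    return [-r_easy, -r_hard, r_easy + r_hard - r_deg, r_deg]

alpha = 0.2                                        # hard-to-hydrolyse fraction (illustrative value)
y0 = [1.0 - alpha, alpha, 0.0, 0.0]                # normalised initial xylan split into the two pools
sol = solve_ivp(hydrolysis_rhs, (0.0, 120.0), y0, args=(0.05, 0.005, 0.01),
                t_eval=np.linspace(0.0, 120.0, 25))
print(sol.y[2])                                    # xylose yield profile over time (arbitrary minutes)
```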

Relevance: 60.00%

Abstract:

Background: Nicotiana benthamiana is an allo-tetraploid plant, which can be challenging for de novo transcriptome assemblies due to homeologous and duplicated gene copies. Transcripts generated from such genes can be distinct yet highly similar in sequence, with markedly differing expression levels. This can lead to unassembled, partially assembled or mis-assembled contigs. Due to the different properties of de novo assemblers, no one assembler with any one given parameter space can re-assemble all possible transcripts from a transcriptome. Results: In an effort to maximise the diversity and completeness of de novo assembled transcripts, we utilised four de novo transcriptome assemblers, TransAbyss, Trinity, SOAPdenovo-Trans, and Oases, using a range of k-mer sizes and different input RNA-seq read counts. We complemented the parameter space biologically by using RNA from 10 plant tissues. We then combined the output of all assemblies into a large super-set of sequences. Using a method from the EvidentialGene pipeline, the combined assembly was reduced from 9.9 million de novo assembled transcripts to about 235,000, of which about 50,000 were classified as primary. Metrics such as average bit-scores, feature response curves and the ability to distinguish paralogous or homeologous transcripts indicated that the EvidentialGene-processed assembly was of high quality. Of 35 RNA silencing gene transcripts, 34 were identified as assembled to full length, whereas in a previous assembly using only one assembler, 9 of these were partially assembled. Conclusions: To achieve a high-quality transcriptome, it is advantageous to implement and combine the output from as many different de novo assemblers as possible. We have in essence taken the ‘best’ output from each assembler while minimising sequence redundancy. We have also shown that simultaneous assessment of a variety of metrics, not just those focused on contig length, is necessary to gauge the quality of assemblies.
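
A minimal sketch of the pooling step (combining the contigs from several assembler runs into one super-set, with provenance kept in the headers and exact duplicate sequences dropped) is given below. The file names are placeholders, and the subsequent redundancy reduction performed with the EvidentialGene pipeline is not shown.

```python
from pathlib import Path

def merge_assemblies(fasta_paths, out_path):
    """Pool contigs from several assembler outputs into one super-set FASTA, prefixing each
    header with its source file name and skipping exact duplicate sequences."""
    seen = set()

    def write(out, source, header, seq):
        if header is not None and seq not in seen:
            seen.add(seq)
            out.write(f">{source}|{header}\n{seq}\n")

    with open(out_path, "w") as out:
        for path in fasta_paths:
            source, header, chunks = Path(path).stem, None, []
            with open(path) as fh:
                for line in fh:
                    if line.startswith(">"):
                        write(out, source, header, "".join(chunks))
                        header, chunks = line[1:].strip(), []
                    else:
                        chunks.append(line.strip())
            write(out, source, header, "".join(chunks))

# Placeholder file names, e.g. one output per assembler and k-mer setting.
merge_assemblies(["trinity.fa", "oases_k31.fa", "soap_k25.fa"], "superset.fa")
```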

Relevance: 60.00%

Abstract:

Objective: To discuss generalized estimating equations as an extension of generalized linear models, by commenting on the paper of Ziegler and Vens, "Generalized Estimating Equations. Notes on the Choice of the Working Correlation Matrix". Methods: Inviting an international group of experts to comment on this paper. Results: Several perspectives have been taken by the discussants. Econometricians have established parallels to the generalized method of moments (GMM). Statisticians discussed model assumptions and the aspect of missing data. Applied statisticians commented on practical aspects in data analysis. Conclusions: In general, careful modeling of correlation is encouraged when considering estimation efficiency and other implications, and a comparison of choosing instruments in GMM and in generalized estimating equations (GEE) would be worthwhile. Some theoretical drawbacks of GEE need to be further addressed and require careful analysis of data. This particularly applies to the situation when data are missing at random.
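
As a minimal illustration of the method under discussion, the sketch below fits a GEE with an exchangeable working correlation to simulated longitudinal data using statsmodels; the data and variable names are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical longitudinal data: 50 subjects, 4 repeated measurements each.
rng = np.random.default_rng(0)
n_subj, n_rep = 50, 4
groups = np.repeat(np.arange(n_subj), n_rep)
x = rng.normal(size=n_subj * n_rep)
subject_effect = np.repeat(rng.normal(scale=0.5, size=n_subj), n_rep)
y = 1.0 + 2.0 * x + subject_effect + rng.normal(size=n_subj * n_rep)

X = sm.add_constant(pd.DataFrame({"x": x}))
# Exchangeable working correlation; GEE reports robust (sandwich) standard errors by default.
model = sm.GEE(y, X, groups=groups, family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```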

Relevance: 60.00%

Abstract:

Between-subject and within-subject variability is ubiquitous in biology and physiology, and understanding and dealing with it is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within a high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation via the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC, and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
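
The LHS route to a population of models can be sketched as follows: draw parameter sets from a Latin hypercube over the parameter box, simulate each, and keep those whose outputs fall inside the calibration ranges. The toy model and ranges below are placeholders, not the Beeler-Reuter setup or the SMC algorithm developed in the paper.

```python
import numpy as np
from scipy.stats import qmc

def build_pom(simulate, lower, upper, target_ranges, n_samples=2000, seed=0):
    """Latin hypercube sample over the parameter box [lower, upper]; keep the parameter
    sets whose simulated biomarkers all fall inside the calibration ranges."""
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    params = qmc.scale(sampler.random(n_samples), lower, upper)
    accepted = [p for p in params
                if all(lo <= b <= hi
                       for b, (lo, hi) in zip(simulate(p), target_ranges))]
    return np.array(accepted)

# Placeholder "model": two biomarkers that are simple functions of three parameters.
toy_model = lambda p: (p[0] + p[1], p[1] * p[2])
pom = build_pom(toy_model, lower=[0, 0, 0], upper=[1, 1, 1],
                target_ranges=[(0.8, 1.2), (0.1, 0.4)])
print(len(pom), "accepted parameter sets")
```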

Relevance: 60.00%

Abstract:

Aims: We combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, supernovae Ia from CFHT SNLS and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular, the dark-energy equation-of-state parameter w. We assess the influence of systematics in the data on the results and look for possible correlations with cosmological parameters. Methods: We implemented an MCMC algorithm to sample the parameter space of a flat CDM model with a dark-energy component of constant w. Systematics in the data are parametrised and included in the analysis. We determine the influence of photometric calibration of SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations. The weak lensing data set is tested for anomalous field-to-field variations and a systematic shape measurement bias for high-redshift galaxies. Results: Ignoring photometric uncertainties for SNLS biases cosmological parameters by at most 20% of the statistical errors, using supernovae alone; the parameter uncertainties are underestimated by 10%. The weak-lensing field-to-field variance between 1 deg² MegaCam pointings is 5-15% higher than predicted from N-body simulations. We find no bias in the lensing signal at high redshift, within the framework of a simple model, and marginalising over cosmological parameters. Assuming a systematic underestimation of the lensing signal, the normalisation increases by up to 8%. Combining all three probes we obtain -0.10 < 1 + w < 0.06 at 68% confidence (-0.18 < 1 + w < 0.12 at 95%), including systematic errors. Our results are therefore consistent with the cosmological constant. Systematics in the data increase the error bars by up to 35%; the best-fit values change by less than 0.15.
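
The MCMC step itself can be illustrated with a bare-bones Metropolis sampler over (Omega_m, w), where a placeholder Gaussian log-posterior stands in for the combined lensing + SNIa + CMB likelihood; this is purely schematic and not the analysis pipeline of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    """Placeholder log-posterior over (Omega_m, w): a Gaussian centred on (0.27, -1.0)
    stands in for the combined lensing + SNIa + CMB likelihood, with flat priors."""
    omega_m, w = theta
    if not (0.0 < omega_m < 1.0 and -2.0 < w < 0.0):
        return -np.inf
    return -0.5 * (((omega_m - 0.27) / 0.03) ** 2 + ((w + 1.0) / 0.08) ** 2)

def metropolis(n_steps=20000, step=np.array([0.01, 0.03])):
    theta = np.array([0.3, -0.9])
    chain, lp = [], log_post(theta)
    for _ in range(n_steps):
        proposal = theta + step * rng.normal(size=2)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis()
print(np.percentile(chain[5000:, 1], [16, 50, 84]))      # toy marginal constraint on w
```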

Relevance: 60.00%

Abstract:

The properties of the generalized survival probability, that is, the probability of not crossing an arbitrary location R during relaxation, have been investigated experimentally (via scanning tunneling microscope observations) and numerically. The results confirm that the generalized survival probability decays exponentially with a time constant $\tau_s(R)$. The distance dependence of the time constant is shown to be $\tau_s(R) = \tau_{s0}\exp[-R/w(T)]$, where $w^2(T)$ is the material-dependent mean-squared width of the step fluctuations. The result reveals the dependence on the physical parameters of the system inherent in the prior prediction of the time constant scaling with $R/L^{\alpha}$, with $L$ the system size and $\alpha$ the roughness exponent. The survival behavior is also analyzed using a contrasting concept, the generalized inside survival $S_{\mathrm{in}}(t,R)$, which involves fluctuations to an arbitrary location R further from the average. Numerical simulations of the inside survival probability also show an exponential time dependence, and the extracted time constant empirically shows $(R/w)^{\lambda}$ behavior, with $\lambda$ varying over 0.6 to 0.8 as the sampling conditions are changed. The experimental data show similar behavior, and can be well fit with $\lambda = 1.0$ for T = 300 K, and 0.5 within the parameter space of the experimental observations.

Relevance: 60.00%

Abstract:

We examine the 2D plane-strain deformation of initially round, matrix-bonded, deformable single inclusions in isothermal simple shear using a recently introduced hyperelastoviscoplastic rheology. The broad parameter space spanned by the wide range of effective viscosities, yield stresses, relaxation times, and strain rates encountered in the ductile lithosphere is explored systematically for weak and strong inclusions, the effective viscosity of which varies with respect to the matrix. Most inclusion studies to date focused on elastic or purely viscous rheologies. Comparing our results with linear-viscous inclusions in a linear-viscous matrix, we observe significantly different shape evolution of weak and strong inclusions over most of the relevant parameter space. The evolution of inclusion inclination relative to the shear plane is more strongly affected by elastic and plastic contributions to rheology in the case of strong inclusions. In addition, we found that strong inclusions deform in the transient viscoelastic stress regime at high Weissenberg numbers (≥0.01) up to bulk shear strains larger than 3. Studies using the shapes of deformed objects for finite-strain analysis or viscosity-ratio estimation should establish carefully which rheology and loading conditions reflect material and deformation properties. We suggest that relatively strong, deformable clasts in shear zones retain stored energy up to fairly high shear strains. Hence, purely viscous models of clast deformation may overlook an important contribution to the energy budget, which may drive dissipation processes within and around natural inclusions.
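
For the Weissenberg-number threshold quoted above (Wi ≥ 0.01), the standard definition is assumed here, since the paper's exact non-dimensionalisation is not given in this abstract:

```latex
\[
  \mathrm{Wi} \;=\; \lambda_{\mathrm{relax}}\,\dot{\gamma}
  \;=\; \frac{\eta_{\mathrm{eff}}}{G}\,\dot{\gamma},
\]
% i.e. the Maxwell relaxation time (effective viscosity over elastic shear modulus)
% multiplied by the imposed shear strain rate.
```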

Relevance: 60.00%

Abstract:

This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of the hadronic tau lepton decays, i.e. tau-jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, the tau-jet identification is the single most important event selection criterion to separate the tiny Higgs boson signal from a large number of background events. The tau-jet identification is studied with methods based on a signature of a low charged track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons, and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from the W->taunu decays. Since many of these identification methods rely on the reconstruction of charged particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a recent and more detailed event simulation than previously in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons. The Higgs boson(s), whose existence has not yet been experimentally verified, are a part of the standard model and its most popular extensions. They are a manifestation of a mechanism which breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe for finding out properties of the microcosm of particles and their interactions in the energy scales beyond the standard model of particle physics.

Relevance: 60.00%

Abstract:

A preliminary study of self-interrupted regenerative turning is performed in this paper. To facilitate the analysis, a new approach is proposed to model the regenerative effect in metal cutting. This model automatically incorporates the multiple-regenerative effects accompanying self-interrupted cutting. Some lower-dimensional ODE approximations are obtained for this model using Galerkin projections. Using these ODE approximations, a bifurcation diagram of the regenerative turning process is obtained. It is found that the unstable branch resulting from the subcritical Hopf bifurcation meets the stable branch resulting from the self-interrupted dynamics in a turning-point bifurcation. Using a rough analytical estimate of the turning-point tool displacement, we can identify regions in the cutting parameter space where loss of stability leads to self-interrupted motions of much greater amplitude than in some other regions.
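
The regenerative effect being remodelled here is usually written, in its classical single-degree-of-freedom form, as a delay-differential equation in which the cutting force depends on the difference between the current and previous-pass tool displacements. The sketch below integrates that textbook form with a simple ring buffer for the delayed term; the parameter values are illustrative, and it is not the new model or the Galerkin-reduced ODEs of the paper.

```python
import numpy as np

def regenerative_turning(n_steps=40000, dt=1e-5, wn=2*np.pi*600.0, zeta=0.02,
                         k=2.0e7, cut_stiffness=1.0e6, tau=0.01):
    """Classical 1-DOF regenerative model: x'' + 2*zeta*wn*x' + wn^2 * x =
    -(cut_stiffness/m) * (x(t) - x(t - tau)), with m = k / wn^2 and tau the spindle
    rotation period. Integrated with semi-implicit Euler and a ring buffer holding
    the tool displacement one period earlier."""
    m = k / wn**2
    delay_steps = int(round(tau / dt))
    history = np.zeros(delay_steps)                  # x(t - tau); zero before cutting starts
    x, v = 1e-6, 0.0                                 # small initial perturbation of the tool
    out = np.empty(n_steps)
    for i in range(n_steps):
        x_delayed = history[i % delay_steps]
        a = -2.0*zeta*wn*v - wn**2 * x - (cut_stiffness / m) * (x - x_delayed)
        history[i % delay_steps] = x                 # store current x for reuse one period later
        v += a * dt
        x += v * dt
        out[i] = x
    return out

x = regenerative_turning()
print("growing" if np.abs(x[-2000:]).max() > np.abs(x[:2000]).max() else "decaying")
```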