986 results for Equivalent-circuit model
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provide a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlap between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods within the capture–recapture methodology, relying on repeated entries in a single list, have been suggested for these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indicating heterogeneity as well as a new understanding of the Zelterman and Chao lower-bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates, which in the case studied here are holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appear to inform the model significantly.
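Both estimators named above depend only on the lowest notification counts: the number of holdings notified exactly once (f1) and exactly twice (f2). Below is a minimal sketch of the two classical forms with hypothetical counts; the covariate-extended, logistic-regression version described in the abstract is not reproduced here.

```python
import math

def chao_lower_bound(n, f1, f2):
    """Chao's lower-bound estimate of the total population size.

    n  : number of distinct holdings observed at least once
    f1 : holdings notified exactly once (singletons)
    f2 : holdings notified exactly twice (doubletons)
    """
    return n + f1 ** 2 / (2 * f2)

def zelterman(n, f1, f2):
    """Zelterman's estimator: a Poisson rate built from f1 and f2 only,
    which makes it robust to heterogeneity in the higher counts."""
    lam = 2 * f2 / f1                   # truncated-Poisson rate estimate
    return n / (1 - math.exp(-lam))     # Horvitz-Thompson style correction

# Hypothetical frequency counts, for illustration only
n, f1, f2 = 120, 80, 25
print(chao_lower_bound(n, f1, f2))      # ~248
print(zelterman(n, f1, f2))             # ~258
```

The binomial reformulation that underpins the extension conditions on holdings with one or two notifications and models the probability of the second notification, which is what opens the door to logistic regression on covariates.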
Abstract:
We present a kinetic model for transformations between different self-assembled lipid structures. The model shows how data on the rates of phase transitions between mesophases of different geometries can be used to provide information on the mechanisms of the transformations and the transition states involved. This can be used, for example, to gain an insight into intermediate structures in cell membrane fission or fusion. In cases where the monolayer curvature changes on going from the initial to the final mesophase, we consider the phase transition to be driven primarily by the change in the relaxed curvature with pressure or temperature, which alters the relative curvature elastic energies of the two mesophase structures. Using this model, we have analyzed previously published kinetic data on the inter-conversion of inverse bicontinuous cubic phases in the 1-monoolein-30 wt% water system. The data are for a transition between QII(G) and QII(D) phases, and our analysis indicates that the transition state more closely resembles the QII(D) than the QII(G) phase. Using estimated values for the monolayer mean curvatures of the QII(G) and QII(D) phases of -0.123 nm⁻¹ and -0.133 nm⁻¹, respectively, gives values for the monolayer mean curvature of the transition state of between -0.131 nm⁻¹ and -0.132 nm⁻¹. Furthermore, we estimate that several thousand molecules undergo the phase transition cooperatively within one "cooperative unit", equivalent to 1-2 unit cells of QII(G) or 4-10 unit cells of QII(D).
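Reading the transition-state curvature as a fractional position between the two phases quantifies "more closely resembles the QII(D)". This Leffler-style index is an illustrative reading added here, not the paper's own analysis; the curvature values are those quoted in the abstract.

```python
H_G, H_D = -0.123, -0.133          # monolayer mean curvatures, nm^-1 (from the abstract)
for H_TS in (-0.131, -0.132):      # reported transition-state range
    alpha = (H_TS - H_G) / (H_D - H_G)   # 0 = fully QII(G)-like, 1 = fully QII(D)-like
    print(f"H_TS = {H_TS} nm^-1 -> alpha = {alpha:.2f}")
# alpha comes out at 0.8-0.9, i.e. the transition state sits much closer to QII(D)
```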
Abstract:
The feature model of immediate memory (Nairne, 1990) is applied to an experiment testing individual differences in phonological confusions amongst a group (N=100) of participants performing a verbal memory test. By simulating the performance of an equivalent number of “pseudo-participants”, the model fits both the mean performance and the variability within the group. Experimental data show that high-performing individuals are significantly more likely to demonstrate phonological confusions than low-performing individuals, and this is also true of the model, despite the model’s lack of either an explicit phonological store or a performance-linked strategy shift away from phonological storage. It is concluded that a dedicated phonological store is not necessary to explain the basic phonological confusion effect, and that the reduction in such an effect can also be explained without requiring a change in encoding or rehearsal strategy or the deployment of a different storage buffer.
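The pseudo-participant logic, though not Nairne's feature model itself, can be sketched generically: each simulated participant receives an individually sampled parameter, and group-level means and variability fall out of the ensemble. The error mechanism below (partially encoded items fail toward phonological neighbours) is a hypothetical stand-in chosen only to reproduce the direction of the reported effect.

```python
import random

random.seed(1)

def simulate_participant(q, n_trials=200):
    """Toy stand-in for one model run. q is the sampled encoding quality;
    errors from better encoders are more often phonological (a hypothetical
    mechanism for illustration, not the feature model's machinery)."""
    correct = errors = phon_errors = 0
    for _ in range(n_trials):
        if random.random() < q:
            correct += 1
        else:
            errors += 1
            if random.random() < q:    # partial phonological encoding
                phon_errors += 1       # drives a similarity-based error
    return correct / n_trials, phon_errors / max(errors, 1)

# 100 pseudo-participants, one individually sampled parameter each
group = [simulate_participant(random.uniform(0.3, 0.9)) for _ in range(100)]
high = [conf for acc, conf in group if acc >= 0.6]
low  = [conf for acc, conf in group if acc < 0.6]
print(sum(high) / len(high))   # phonological share of errors, high performers
print(sum(low) / len(low))     # the same share for low performers (smaller)
```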
Abstract:
A connection is established between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and/or other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equivalent in their effectiveness in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy for some difficult data-based modelling problems.
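A generic mixture-of-experts sketch (not the authors' neuro-fuzzy construction algorithms) showing the property the abstract emphasises: because each expert is fitted independently on its local region, the expert-fitting loop is trivially parallelisable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a piecewise-linear target over a 1-D regressor
x = rng.uniform(-1, 1, (400, 1))
y = np.where(x[:, 0] < 0, 2 * x[:, 0] + 1, -x[:, 0] + 1) + 0.05 * rng.standard_normal(400)

centres = np.array([-0.5, 0.5])      # one expert per local region

def responsibilities(x):
    """Soft gating: normalised Gaussian kernels around the expert centres."""
    g = np.exp(-((x - centres) ** 2) / 0.1)
    return g / g.sum(axis=1, keepdims=True)

R = responsibilities(x)
X = np.hstack([x, np.ones_like(x)])  # linear expert: slope + intercept

# Each expert is fitted independently by weighted least squares, so this
# loop could run in parallel (the property highlighted in the abstract).
experts = []
for k in range(len(centres)):
    w = R[:, k]
    coeff, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    experts.append(coeff)

def predict(x_new):
    X_new = np.hstack([x_new, np.ones_like(x_new)])
    preds = np.stack([X_new @ c for c in experts], axis=1)
    return (responsibilities(x_new) * preds).sum(axis=1)

print(predict(np.array([[-0.5], [0.5]])))   # roughly [0.0, 0.5]
```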
Abstract:
The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage of our template-based modelling pipeline. Thus, the IntFOLD-TS method first generates numerous alternative models, using in-house versions of several different sequence-structure alignment methods, which are then ranked in terms of global quality using our top-performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are also provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high-quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This important information may help to make the 3D models that are produced by the IntFOLD-TS method more useful for guiding future experimental work.
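The rank-then-annotate structure of the pipeline can be shown schematically. The file names, scores, and per-residue values below are hypothetical placeholders, not ModFOLDclust2 output.

```python
# Schematic only: hypothetical candidate models with global quality scores
candidate_models = {
    "model_alignA.pdb": 0.61,
    "model_alignB.pdb": 0.74,
    "model_alignC.pdb": 0.55,
}

# Rank the alternative models by predicted global quality, best first
ranking = sorted(candidate_models.items(), key=lambda kv: kv[1], reverse=True)

# Per-residue predicted deviations (angstroms) would then be written into
# the coordinate file alongside the model, one value per residue
per_residue_error = {1: 0.8, 2: 1.1, 3: 3.9}   # hypothetical values

best_model, best_score = ranking[0]
print(best_model, best_score, per_residue_error)
```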
Abstract:
The initial condition effect on climate prediction skill over a 2-year hindcast time-scale has been assessed from ensemble HadCM3 climate model runs using anomaly initialization over the period 1990–2001, with comparisons made against runs without initialization (equivalent to climatological conditions) and against anomaly persistence. It is shown that the assimilation improves the prediction skill in the first year globally, and in a number of limited areas out into the second year. Skill in hindcasting surface air temperature anomalies is most marked over ocean areas, and is coincident with areas of high sea surface temperature and ocean heat content skill. Skill improvement over land areas is much more limited but is still detectable in some cases. We found little difference in the skill of hindcasts using three different sets of ocean initial conditions, and we obtained the best results by combining these to form a grand ensemble hindcast set. Results are also compared with the idealized predictability studies of Collins (Clim. Dynam. 2002; 19: 671–692), which used the same model. The maximum lead time for which initialization gives enhanced skill over runs without initialization varies between regions but is very similar to the lead times found in the idealized studies, strongly supporting the process representation in the model as well as its use for operational predictions. The limited 12-year period of the study, however, means that the regional details of model skill should probably be further assessed under a wider range of observational conditions.
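A minimal sketch of the three-way skill comparison the abstract describes, using centred anomaly correlation on hypothetical series; none of the numbers are from the study.

```python
import numpy as np

def anomaly_correlation(forecast, observed):
    """Centred anomaly correlation, a standard hindcast skill measure."""
    f = forecast - forecast.mean()
    o = observed - observed.mean()
    return (f * o).sum() / np.sqrt((f ** 2).sum() * (o ** 2).sum())

# Hypothetical annual-mean temperature anomaly series over 12 start dates
rng = np.random.default_rng(2)
obs           = rng.standard_normal(12)
initialized   = obs + 0.5 * rng.standard_normal(12)  # hindcast with assimilation
uninitialized = rng.standard_normal(12)              # climatological, no initialization
persistence   = np.roll(obs, 1)                      # previous year's anomaly

for name, f in [("initialized", initialized),
                ("uninitialized", uninitialized),
                ("persistence", persistence)]:
    print(name, round(float(anomaly_correlation(f, obs)), 2))
```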
Abstract:
The structure of turbulence in the ocean surface layer is investigated using a simplified semi-analytical model based on rapid-distortion theory. In this model, which is linear with respect to the turbulence, the flow comprises a mean Eulerian shear current, the Stokes drift of an irrotational surface wave, which accounts for the irreversible effect of the waves on the turbulence, and the turbulence itself, whose time evolution is calculated. By analysing the equations of motion used in the model, which are linearised versions of the Craik–Leibovich equations containing a ‘vortex force’, it is found that a flow including mean shear and a Stokes drift is formally equivalent to a flow including mean shear and rotation. In particular, Craik and Leibovich’s condition for the linear instability of the first kind of flow is equivalent to Bradshaw’s condition for the linear instability of the second. However, the present study goes beyond linear stability analyses by considering flow disturbances of finite amplitude, which allows calculating turbulence statistics and addressing cases where the linear stability is neutral. Results from the model show that the turbulence displays a structure with a continuous variation of the anisotropy and elongation, ranging from streaky structures, for distortion by shear only, to streamwise vortices resembling Langmuir circulations, for distortion by Stokes drift only. The turbulent kinetic energy (TKE) grows faster for distortion by a shear and a Stokes drift gradient with the same sign (a situation relevant to wind waves), but the turbulence is more isotropic in that case (which is linearly unstable to Langmuir circulations).
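The formal equivalence noted in the abstract is easiest to state by comparing the two momentum equations. The sketch below sets the standard Craik–Leibovich 'vortex force' form against the rotating-frame equation, in textbook notation rather than the paper's own.

```latex
% Craik--Leibovich momentum equation: the Stokes drift u_s enters through a
% vortex force u_s x omega, with omega = curl(u) and pi a modified pressure
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\nabla \pi + \mathbf{u}_s \times \boldsymbol{\omega} + \nu \nabla^2 \mathbf{u}

% Momentum equation in a frame rotating at Omega, where the Coriolis force
% -2 Omega x u plays the analogous skew-symmetric role
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\nabla p - 2\,\boldsymbol{\Omega} \times \mathbf{u} + \nu \nabla^2 \mathbf{u}
```

In both cases the mean shear interacts with a skew-symmetric force (vortex force or Coriolis force), which is why the Craik–Leibovich and Bradshaw instability conditions map onto each other.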
Abstract:
Climate models predict a large range of possible future temperatures for a particular scenario of future emissions of greenhouse gases and other anthropogenic forcings of climate. Given that further warming in coming decades could threaten increasing risks of climatic disruption, it is important to determine whether model projections are consistent with temperature changes already observed. This can be achieved by quantifying the extent to which increases in well-mixed greenhouse gases and changes in other anthropogenic and natural forcings have already altered temperature patterns around the globe. Here, for the first time, we combine multiple climate models into a single synthesized estimate of future warming rates consistent with past temperature changes. We show that the observed evolution of near-surface temperatures appears to indicate lower ranges (5–95%) for warming (0.35–0.82 K and 0.45–0.93 K by the 2020s (2020–2029) relative to 1986–2005 under the RCP4.5 and 8.5 scenarios respectively) than the equivalent ranges projected by the CMIP5 climate models (0.48–1.00 K and 0.51–1.16 K respectively). Our results indicate that for each RCP the upper end of the range of CMIP5 climate model projections is inconsistent with past warming.
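The underlying logic, regressing the observed change onto the model-simulated response and carrying the scaling factor over to the projection, can be sketched generically. All series and numbers below are hypothetical, not the study's data or results.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical series: model-simulated forced response versus observations
model_response = np.linspace(0.0, 0.8, 40)                       # K, simulated warming
observations   = 0.85 * model_response + 0.1 * rng.standard_normal(40)

# Least-squares scaling factor beta: how much the simulated response must
# be scaled to best match the warming already observed
beta = (model_response @ observations) / (model_response @ model_response)

projected_warming = 0.9                                          # K, hypothetical raw projection
print(f"beta = {beta:.2f}, constrained projection = {beta * projected_warming:.2f} K")
```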
Abstract:
The surface mass balance (SMB) for Greenland and Antarctica has been calculated using model data from an AMIP-type experiment for the period 1979–2001 using the ECHAM5 spectral transform model at different triangular truncations. There is a significant reduction in the calculated ablation for the highest model resolution, T319, with an equivalent grid distance of ca. 40 km. As a consequence, the T319 model has a positive surface mass balance for both ice sheets during the period. For Greenland, the models at lower resolution, T106 and T63, on the other hand, have a much stronger ablation leading to a negative surface mass balance. Calculations have also been undertaken for a climate change experiment using the IPCC scenario A1B, with a T213 resolution (corresponding to a grid distance of some 60 km), comparing two 30-year periods from the end of the twentieth century and the end of the twenty-first century, respectively. For Greenland there is a change of 495 km³/year, going from a positive to a negative surface mass balance, corresponding to a sea level rise of 1.4 mm/year. For Antarctica there is an increase in the positive surface mass balance of 285 km³/year, corresponding to a sea level fall of 0.8 mm/year. The surface mass balance changes of the two ice sheets lead to a sea level rise of 7 cm at the end of this century compared to the end of the twentieth century. Other possible mass losses, such as those due to changes in the calving of icebergs, are not considered. It appears that such changes must increase significantly, and several times more than the surface mass balance changes, if the ice sheets are to make a major contribution to sea level rise this century. The model calculations indicate large inter-annual variations in all relevant parameters, making it impossible to identify robust trends from the examined periods at the end of the twentieth century. The calculated inter-annual variations are similar in magnitude to observations. The 30-year trend in SMB at the end of the twenty-first century is significant. The increase in precipitation on the ice sheets closely follows the Clausius-Clapeyron relation and is the main reason for the increase in the surface mass balance of Antarctica. On Greenland, precipitation in the form of snow is gradually starting to decrease and cannot compensate for the increase in ablation. Another factor is the proportionally higher temperature increase on Greenland, leading to a larger ablation. It follows that a modest increase in temperature will not be sufficient to compensate for the increase in accumulation, but this will change when temperature increases go beyond a critical limit. Calculations show that such a limit for Greenland might well be passed during this century. For Antarctica this will take much longer, probably well into the following centuries.
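The volume-to-sea-level conversions quoted above can be checked directly by dividing by the global ocean area (about 3.61 × 10⁸ km²) and treating the quoted volumes as water equivalent:

```python
OCEAN_AREA_KM2 = 3.61e8          # global ocean surface area, ~3.61e8 km^2

def ice_volume_to_sea_level_mm(volume_km3_per_year):
    """Convert an ice-sheet mass-balance change (km^3/year, water
    equivalent) into mm/year of global-mean sea-level change."""
    km_per_year = volume_km3_per_year / OCEAN_AREA_KM2
    return km_per_year * 1e6     # 1 km = 1e6 mm

print(ice_volume_to_sea_level_mm(495))   # Greenland:  ~1.4 mm/year of rise
print(ice_volume_to_sea_level_mm(285))   # Antarctica: ~0.8 mm/year of fall
```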
Abstract:
Geophysical fluid models often support both fast and slow motions. As the dynamics are often dominated by the slow motions, it is desirable to filter out the fast motions by constructing balance models. An example is the quasi-geostrophic (QG) model, which is used widely in meteorology and oceanography for theoretical studies, in addition to practical applications such as model initialization and data assimilation. Although the QG model works quite well in the mid-latitudes, its usefulness diminishes as one approaches the equator. Thus far, attempts to derive similar balance models for the tropics have not been entirely successful as the models generally filter out Kelvin waves, which contribute significantly to tropical low-frequency variability. There is much theoretical interest in the dynamics of planetary-scale Kelvin waves, especially for atmospheric and oceanic data assimilation where observations are generally only of the mass field and thus do not constrain the wind field without some kind of diagnostic balance relation. As a result, estimates of Kelvin wave amplitudes can be poor. Our goal is to find a balance model that includes Kelvin waves for planetary-scale motions. Using asymptotic methods, we derive a balance model for the weakly nonlinear equatorial shallow-water equations. Specifically we adopt the ‘slaving’ method proposed by Warn et al. (Q. J. R. Meteorol. Soc., vol. 121, 1995, pp. 723–739), which avoids secular terms in the expansion and thus can in principle be carried out to any order. Unlike previous approaches, our expansion is based on a long-wave scaling and the slow dynamics is described using the height field instead of potential vorticity. The leading-order model is equivalent to the truncated long-wave model considered previously (e.g. Heckley & Gill, Q. J. R. Meteorol. Soc., vol. 110, 1984, pp. 203–217), which retains Kelvin waves in addition to equatorial Rossby waves. Our method allows for the derivation of higher-order models which significantly improve the representation of Rossby waves in the isotropic limit. In addition, the ‘slaving’ method is applicable even when the weakly nonlinear assumption is relaxed, and the resulting nonlinear model encompasses the weakly nonlinear model. We also demonstrate that the method can be applied to more realistic stratified models, such as the Boussinesq model.
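For reference, the planetary-scale Kelvin wave that such a balance model must retain has the standard equatorial beta-plane form (a textbook solution, not a result of the paper):

```latex
% Equatorial Kelvin wave on the beta-plane: no meridional flow, an
% eastward, nondispersive signal with gravity-wave speed c = sqrt(gH),
% trapped within the equatorial deformation radius sqrt(c / beta)
v = 0, \qquad
u = \sqrt{\frac{g}{H}}\, h, \qquad
h(x, y, t) = G(x - ct)\,\exp\!\left(-\frac{\beta y^2}{2c}\right)
```

Because the wave is carried entirely by the height field with no meridional wind, a height-based slow variable of the kind used in the paper can represent it, whereas a potential-vorticity-based balance filters it out.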
Abstract:
Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subjected to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of the changes in regional cerebral blood flow (CBF). We show that this model can provide an excellent fit to the CBF responses for stimulus durations of up to 16 s. The structure of the model consisted of two coupled components representing vascular dilation and constriction. The complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s duration stimulation condition generalised to provide good predictions for the data from the shorter-duration stimulation conditions. Furthermore, by optimising three out of the total of nine model parameters, the variability in the data can be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic system analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
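The linearity claim can be illustrated with a generic two-component impulse response: a fast dilating kernel minus a slower, delayed constricting one, convolved with the neural input. The kernel shapes and parameters below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

dt = 0.1                                  # s, time step
t = np.arange(0, 25, dt)

def gamma_kernel(t, delay, tau):
    """Simple gamma-shaped kernel, normalised to unit area."""
    s = np.clip(t - delay, 0, None)
    k = (s / tau) ** 2 * np.exp(-s / tau)
    return k / (k.sum() * dt)

# Illustrative impulse response: fast dilation minus a slower, delayed
# constriction (a generic two-component form, not the paper's fit)
h = 1.0 * gamma_kernel(t, 0.5, 1.0) - 0.4 * gamma_kernel(t, 2.0, 3.0)

# Neural input: a boxcar drive standing in for the layer IV CSD time
# series during a 16-s stimulus
neural = np.where((t >= 1.0) & (t <= 17.0), 1.0, 0.0)

# Linearity: the predicted CBF change is just the input convolved with h,
# so the same h predicts responses to shorter stimuli as well
cbf = np.convolve(neural, h)[: len(t)] * dt
print(float(cbf.max()), float(cbf[-1]))
```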
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications, the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large for the filter to be usable. However, particle filters can be formulated using proposal densities, which give greater freedom in how particles are sampled and allow for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This gives rise to the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available every time step, both schemes will be degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
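A scalar toy sketch of the proposal-density mechanics: samples are drawn near the observation rather than from the model transition density, and the weights are corrected by the ratio that appears in the importance-sampling identity. All densities are Gaussian here purely for brevity; this is not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# One assimilation step for a scalar state with N particles
N = 100
x_prev = rng.standard_normal(N)            # particles at the previous time
y_obs = 2.5                                # the new observation
sig_model, sig_obs, sig_q = 1.0, 0.5, 0.6  # model, observation, proposal spreads

# Proposal density q(x | x_prev, y): draw samples around the observation so
# particles land in the high-probability region of the posterior
x = y_obs + sig_q * rng.standard_normal(N)

def log_gauss(z, mean, sig):
    return -0.5 * ((z - mean) / sig) ** 2 - np.log(sig)

# Importance weights: w = p(y|x) * p(x|x_prev) / q(x|x_prev, y)
log_w = (log_gauss(y_obs, x, sig_obs)        # likelihood
         + log_gauss(x, x_prev, sig_model)   # model transition density
         - log_gauss(x, y_obs, sig_q))       # proposal correction
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Effective ensemble size 1/sum(w^2): values near N mean little degeneracy
print(float(1.0 / (w ** 2).sum()))
```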
Abstract:
This paper presents an in-depth critical discussion and derivation of a detailed small-signal analysis of the Phase-Shifted Full-Bridge (PSFB) converter. Circuit parasitics, resonant inductance and transformer turns ratio have all been taken into account in the evaluation of this topology’s open-loop control-to-output, line-to-output and load-to-output transfer functions. Accordingly, the significant impact of losses and resonant inductance on the converter’s transfer functions is highlighted. The enhanced dynamic model proposed in this paper enables the correct design of the converter compensator, including the effect of parasitics on the dynamic behavior of the PSFB converter. Detailed experimental results for a real-life 36V-to-14V/10A PSFB industrial application show excellent agreement with the predictions from the model proposed herein.
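One reason the resonant inductance matters for these transfer functions is the duty-cycle loss it introduces while the primary current slews. The first-order relations below are standard textbook results for the PSFB, and all component values are hypothetical, not those of the paper's 36V-to-14V/10A prototype.

```python
# First-order PSFB relations (standard textbook results; the component
# values below are hypothetical, chosen only to land near 14 V)
V_in = 36.0       # V, input voltage
n    = 2.0        # transformer turns ratio Np/Ns (hypothetical)
L_r  = 0.65e-6    # H, resonant (leakage + external) inductance (hypothetical)
f_s  = 200e3      # Hz, switching frequency (hypothetical)
I_o  = 10.0       # A, load current

D = 0.85                                     # commanded phase-shift duty ratio
delta_D = 4 * L_r * f_s * I_o / (n * V_in)   # duty lost while the current in L_r reverses
D_eff = D - delta_D                          # effective duty ratio at the secondary
V_out = V_in * D_eff / n                     # buck-derived output relation

print(f"duty loss = {delta_D:.3f}, D_eff = {D_eff:.3f}, V_out = {V_out:.1f} V")
```

Because delta_D grows with load current, the resonant inductance couples the load into the control-to-output behaviour, which is why neglecting it distorts the small-signal model.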
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a high-dimensional (65 500-dimensional) simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values and the effect of using different model error parameters in the truth compared with the ensemble model runs.
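A scalar toy sketch of the first (relaxation) proposal density only; the second, equivalent-weights step, which solves for particle positions that achieve a common target weight, is omitted. The gains and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Between observation times, every particle is nudged towards the next
# observation so that the whole ensemble arrives in its neighbourhood
N, n_steps = 32, 10
x = rng.standard_normal(N)            # particle states (scalar toy model)
y_next = 3.0                          # the next observation

for k in range(1, n_steps + 1):
    tau = k / n_steps                 # relaxation strength grows towards obs time
    x = 0.95 * x                      # stand-in for the deterministic model step
    x += tau * 0.5 * (y_next - x)     # relax towards the observation
    x += 0.1 * rng.standard_normal(N) # model-error perturbation

print(float(x.mean()), float(x.std()))  # ensemble ends clustered near the observation
```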