987 results for STIFFLY-STABLE METHODS


Relevance:

30.00%

Publisher:

Abstract:

The following properties of the core of a one-to-one matching problem are well-known: (i) the core is non-empty; (ii) the core is a lattice; and (iii) the set of unmatched agents is identical for any two matchings belonging to the core. The literature on two-sided matching focuses almost exclusively on the core and studies its properties extensively. Our main result is the following characterization of (von Neumann-Morgenstern) stable sets in one-to-one matching problems: a set of matchings is a stable set of a one-to-one matching problem only if it is a maximal set satisfying the following properties: (a) the core is a subset of the set; (b) the set is a lattice; (c) the set of unmatched agents is identical for any two matchings belonging to the set. Furthermore, a set is a stable set if it is the unique maximal set satisfying properties (a), (b) and (c). We also show that our main result does not extend from one-to-one matching problems to many-to-one matching problems.
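
As a concrete illustration of the objects discussed in this abstract, the sketch below computes a stable matching (an element of the core) of a one-to-one marriage problem with the classical Gale-Shapley deferred-acceptance algorithm. This is standard background rather than the paper's own construction, and the preference lists are hypothetical.

```python
# Minimal sketch: Gale-Shapley deferred acceptance for a one-to-one (marriage)
# matching problem.  Its output is a stable matching, i.e. an element of the
# core discussed in the abstract.  The preference lists are hypothetical.

def deferred_acceptance(men_prefs, women_prefs):
    """Return a stable matching as a dict woman -> man (men propose)."""
    # rank[w][m] = position of m in w's preference list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free_men = list(men_prefs)               # men not yet matched
    next_choice = {m: 0 for m in men_prefs}  # index of next woman to propose to
    match = {}                               # woman -> man

    while free_men:
        m = free_men.pop(0)
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in match:
            match[w] = m                     # w was unmatched: accept provisionally
        elif rank[w][m] < rank[w][match[w]]:
            free_men.append(match[w])        # w prefers m: old partner becomes free
            match[w] = m
        else:
            free_men.append(m)               # w rejects m
    return match

if __name__ == "__main__":
    men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
    women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
    print(deferred_acceptance(men, women))   # {'w1': 'm2', 'w2': 'm1'}
```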

Relevance:

30.00%

Publisher:

Abstract:

Two simple and sensitive spectrophotometric methods (A and B) in the visible region have been developed for the determination of cefotaxime sodium (CFTS) in bulk and in dosage forms. Method A is based on the reaction of CFTS with nitrous acid under alkaline conditions to form a stable violet-colored chromogen with an absorption maximum at 560 nm, and method B is based on the reaction of CFTS with 1,10-phenanthroline and ferric chloride to form a red-colored chromogen with an absorption maximum at 520 nm. The color obeyed Beer's law in the concentration ranges of 100-500 µg/mL for method A and 1.6-16 µg/mL for method B. When pharmaceutical preparations containing CFTS were analysed, the results obtained by the proposed methods were in good agreement with the labeled amounts and comparable with the results obtained using a UV spectrophotometric method.
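
Such assays rely on the linearity stated by Beer's law, A = εlc. The sketch below, using hypothetical absorbance readings rather than the paper's data, shows the usual calibration step: fit a straight line of absorbance against concentration over the working range and read an unknown off the curve.

```python
# Minimal sketch of a Beer's-law calibration of the kind both methods rely on:
# absorbance A = epsilon * l * c is linear in concentration, so a straight-line
# fit of A against c gives a working curve for assaying unknowns.
# Concentrations and absorbances below are hypothetical, not the paper's data.
import numpy as np

conc = np.array([100, 200, 300, 400, 500])              # µg/mL (method A range)
absorbance = np.array([0.12, 0.25, 0.36, 0.49, 0.61])   # hypothetical readings

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"A = {slope:.5f}*c + {intercept:.4f}, r = {r:.4f}")

# Assay an unknown sample from its measured absorbance:
a_unknown = 0.30
c_unknown = (a_unknown - intercept) / slope
print(f"estimated concentration: {c_unknown:.1f} µg/mL")
```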

Relevance:

30.00%

Publisher:

Abstract:

The Routh-stability method is employed to reduce the order of discrete-time system transfer functions. It is shown that the Routh approximant is well suited to reducing both the denominator and the numerator polynomials, although alternative methods, such as the Padé-Markov approximation, are also used to fit the reduced-model numerator coefficients.
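
For readers unfamiliar with the numerator-fitting step, the sketch below illustrates Markov-parameter (impulse-response) matching for a reduced discrete-time model, assuming a reduced denominator has already been obtained (e.g. from a Routh-type procedure). The transfer function and reduced denominator are invented for illustration; the Routh reduction itself is not reproduced here.

```python
# Sketch of numerator fitting by Markov-parameter (impulse-response) matching
# for a reduced-order discrete-time model.  The reduced denominator is assumed
# to be given already (e.g. from a Routh-type reduction); the example transfer
# function and reduced denominator below are hypothetical.
import numpy as np
from scipy import signal

# Full-order model, coefficients in descending powers of z (scipy's convention).
b_full = [0.5, 0.3, 0.1, 0.02]
a_full = [1.0, -1.2, 0.47, -0.06]          # roots 0.5, 0.4, 0.3 (stable)

# First few Markov parameters = samples of the impulse response.
n_keep = 3                                  # reduced order + 1 parameters
_, (h,) = signal.dimpulse((b_full, a_full, 1.0), n=20)
h = h.ravel()

# Assumed reduced (2nd-order) denominator, e.g. from a Routh-type procedure.
a_red = np.array([1.0, -0.9, 0.2])          # roots 0.5, 0.4

# b_k = sum_i a_red[i] * h[k-i] makes the reduced model reproduce the first
# n_keep impulse-response samples of the full model exactly.
b_red = np.convolve(a_red, h)[:n_keep]

_, (h_red,) = signal.dimpulse((b_red, a_red, 1.0), n=20)
print("full   :", np.round(h[:5], 4))
print("reduced:", np.round(h_red.ravel()[:5], 4))
```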

Relevance:

30.00%

Publisher:

Abstract:

M-type barium hexaferrite (BaM) is a hard ferrite, crystallizing in space group P63/mmc and possessing a hexagonal magnetoplumbite structure, which consists of alternating hexagonal and spinel blocks. The structure of BaM is thus related to those of garnet and spinel ferrite. However, the material has proved difficult to synthesize. By taking into account the presence of the spinel block in barium hexagonal ferrite, highly efficient new synthetic methods were devised, with routes significantly different from existing ones. These successful variations in synthetic method were derived from a detailed investigation of the structural features of barium hexagonal ferrite and from the least-change principle, whereby configuration changes are kept to a minimum. Considering the relevant mechanisms has thus helped to improve the synthesis efficiency of both hydrothermal and co-precipitation methods, by choosing conditions that promote the formation of the cubic block or the less stable Fe3O4. The role played by BaFe2O4 in the synthesis is also discussed. The distribution of iron from reactants or intermediates among different sites is also successfully explained. The proposed mechanisms are based on the principle that the cubic block must self-assemble to form the final product. It is therefore believed that these mechanisms should be helpful in designing experiments to obtain a deeper understanding of the synthesis process and to investigate the substitution of magnetic ions with doping ions.

Relevance:

30.00%

Publisher:

Abstract:

We investigate the error dynamics for cycled data assimilation systems, in which the inverse problem of state determination is solved at times t_k, k = 1, 2, 3, ..., with a first guess given by the state propagated via a dynamical system model from time t_{k-1} to time t_k. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ||e_k|| := ||x_k^(a) − x_k^(t)|| between the estimated state x^(a) and the true state x^(t) over time. Clearly, an observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ||e_k||, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H, and the Lipschitz constants K^(1) and K^(2) of the lower and higher modes that control the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c||R_α||δ for some constant c. Since ||R_α|| → ∞ as α → 0, this constant might be large. Numerical examples of this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system.
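
The sketch below is a toy version of the cycled scheme analysed here: at each time t_k the analysis is propagated with the Lorenz '63 model to form the first guess and then updated with noisy observations through a Tikhonov-regularised reconstruction R_α = (H^T H + α I)^(-1) H^T, while the error ||e_k|| is tracked. The observation operator, noise level and α are assumptions for illustration, not the paper's configuration.

```python
# Toy illustration of a cycled data assimilation loop of the kind analysed in
# the abstract: at each cycle the analysis is propagated through the Lorenz '63
# model and then updated with noisy observations via a Tikhonov-regularised
# reconstruction R_alpha = (H^T H + alpha I)^(-1) H^T.  All parameter values,
# the observation operator and the noise level are assumptions for this sketch.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

def propagate(x, dt=0.1):
    sol = solve_ivp(lorenz63, (0.0, dt), x, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

rng = np.random.default_rng(0)
H = np.eye(3)[:2]            # observe the first two components only (assumed)
delta = 0.5                  # observation error level (assumed)
alpha = 0.1                  # Tikhonov regularisation parameter (assumed)
R_alpha = np.linalg.solve(H.T @ H + alpha * np.eye(3), H.T)

x_true = np.array([1.0, 1.0, 1.0])
x_analysis = x_true + rng.normal(0, 1.0, 3)   # erroneous initial estimate

for k in range(1, 51):
    x_true = propagate(x_true)
    x_background = propagate(x_analysis)                # first guess
    y = H @ x_true + rng.normal(0, delta, H.shape[0])   # noisy observation
    x_analysis = x_background + R_alpha @ (y - H @ x_background)
    if k % 10 == 0:
        print(f"cycle {k:2d}: ||e_k|| = {np.linalg.norm(x_analysis - x_true):.3f}")
```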

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Integrin-linked kinase (ILK) and its associated complex of proteins are involved in many cellular activation processes, including cell adhesion and integrin signaling. We have previously demonstrated that mice with induced platelet ILK deficiency show reduced platelet activation and aggregation, but only a minor bleeding defect. Here, we explore this apparent disparity between the cellular and hemostatic phenotypes. METHODS: The impact of ILK inhibition on integrin αIIbβ3 activation and degranulation was assessed with the ILK-specific inhibitor QLT0267, and a conditional ILK-deficient mouse model was used to assess the impact of ILK deficiency on in vivo platelet aggregation and thrombus formation. RESULTS: Inhibition of ILK reduced the rate of both fibrinogen binding and α-granule secretion, but was accompanied by only a moderate reduction in the maximum extent of platelet activation or aggregation in vitro. The reduction in the rate of fibrinogen binding occurred prior to degranulation or translocation of αIIbβ3 to the platelet surface. The change in the rate of platelet activation in the absence of functional ILK led to a reduction in platelet aggregation in vivo, but did not change the size of thrombi formed following laser injury of the cremaster arteriole wall in ILK-deficient mice. It did, however, result in a marked decrease in the stability of thrombi formed in ILK-deficient mice. CONCLUSION: Taken together, the findings of this study indicate that, although ILK is not essential for platelet activation, it plays a critical role in facilitating rapid platelet activation, which is essential for stable thrombus formation.

Relevance:

30.00%

Publisher:

Abstract:

As part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column that is coupled to a reference state defined by profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We perform a systematic comparison both of the behavior of different models under a consistent implementation of the WTG and DGW methods, and of the WTG and DGW methods themselves across models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation that results in precipitation either substantially lower or higher than in the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and the corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivity to the initial moisture conditions occurs in those WTG simulations with multiple stable equilibria, which reach either a dry equilibrium state when initialized dry or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to some of the parameters in the WTG calculation.
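
For orientation, the sketch below evaluates the commonly used form of the WTG closure, in which a large-scale vertical velocity is diagnosed from the departure of the simulated potential-temperature profile from the reference profile, w_WTG = (θ − θ_ref)/(τ ∂θ_ref/∂z). The profiles and relaxation timescale are invented, and the individual models in the intercomparison may implement the method differently.

```python
# Sketch of the commonly used form of the weak temperature gradient (WTG)
# closure: a large-scale vertical velocity is diagnosed so that it relaxes the
# simulated potential-temperature profile back towards the reference profile
# over a timescale tau,
#     w_wtg(z) = (theta(z) - theta_ref(z)) / (tau * d(theta_ref)/dz).
# The profiles and tau are invented for illustration; participating models'
# actual implementations may differ in detail.
import numpy as np

z = np.linspace(1000.0, 15000.0, 57)          # height above boundary layer (m)
theta_ref = 300.0 + 4.0e-3 * z                # reference profile (K), assumed
theta = theta_ref + 0.5 * np.sin(np.pi * z / z[-1])  # perturbed column (K)
tau = 3 * 3600.0                              # relaxation timescale (s), assumed

dtheta_ref_dz = np.gradient(theta_ref, z)     # static stability of reference
w_wtg = (theta - theta_ref) / (tau * dtheta_ref_dz)

print("max |w_wtg| = %.4f m/s at z = %.0f m"
      % (np.max(np.abs(w_wtg)), z[np.argmax(np.abs(w_wtg))]))
```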

Relevance:

30.00%

Publisher:

Abstract:

1. Understanding the behaviour and ecology of large carnivores is becoming increasingly important as the list of endangered species grows, with felids such as Panthera leo in some locations heading dangerously close to extinction in the wild. In order to have more reliable and effective tools to understand animal behaviour, movement and diet, we need to develop novel, integrated approaches and effective techniques to capture a detailed profile of animal foraging and movement patterns. 2. Ecological studies have shown considerable interest in using stable isotope methods, both to investigate the nature of animal feeding habits and to map their geographical location. However, recent work has suggested that stable isotope analyses of felid fur and bone are very complex and do not correlate directly with the isotopic composition of precipitation (and hence geographical location). 3. We present new data suggesting that these previous findings may be atypical, and demonstrate that isotope analyses of Felidae are suitable both for evaluating dietary inputs and for establishing geo-location, as they have strong environmental referents to both food and water. These data provide new evidence of an important methodology that can be applied to the family Felidae for future research in ecology, conservation, wildlife forensics and archaeological science.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents results on the degradation of the pesticide carbaryl comparing two methods: electrochemical (EC) and photo-assisted electrochemical (PAEC). The experimental variables of applied current density, electrolyte flow rate and initial carbaryl concentration were investigated. The results demonstrate that the electrochemical degradation of carbaryl was greatly enhanced when UV light was applied simultaneously. The greatest difference between the PAEC and EC methods was apparent at lower applied current densities. The extent of COD removal was much greater for the combined method, independent of the applied current density. It should be noted that the complete removal of carbaryl was achieved without the need to add NaCl to the reaction mixture, avoiding the risk of forming chlorinated organic species. © 2009 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

There usually exist diverse variations in face images taken under uncontrolled conditions. Most previous work on face recognition focuses on particular variations and usually assumes the absence of others; such work is called controlled face recognition. Instead of the ‘divide and conquer’ strategy adopted by controlled face recognition, this paper presents one of the first attempts aimed directly at uncontrolled face recognition. The solution is based on the Individual Stable Neural Network (ISNN) proposed in this paper. The ISNN maps a face image into the so-called Individual Stable Space (ISS), a feature space that expresses only personal characteristics, which are the only information useful for recognition. There are no restrictions on the face images fed into the ISNN. Moreover, unlike many other robust face recognition methods, the ISNN does not require any extra information (such as view angle) beyond the personal identities during training. These advantages make the ISNN a very practical approach for uncontrolled face recognition. In the experiments, the ISNN is tested on two large face databases with extensive variations and achieves the best performance among several popular face recognition techniques compared.

Relevance:

30.00%

Publisher:

Abstract:

This study examined the accuracy of currently recommended guidelines for prescribing exercise intensity using the methods of percentage of heart rate reserve (%HRR), percentage of VO2 peak (%VO2peak) and percentage of VO2 reserve (%VO2R) in a clinical population of chronic heart failure (CHF) patients. The precision of exercise intensity prescription was investigated in 45 patients with stable CHF (39:6 M:F, 65±9 yrs (mean±SD)). VO2peak testing is relatively common among patients with cardiac disease, but the assessment of VO2rest is not common practice, and the accepted standard value of 3.5 mL/kg/min is usually assumed when applying %VO2R (%VO2R3.5). In this study, VO2rest was recorded for 3 min prior to the start of a symptom-limited exercise test on a cycle ergometer. Target exercise intensities were calculated using the VO2 corresponding to 50% or 80% of HRR, VO2peak and VO2R. The VO2 values were then converted into prescribed treadmill speeds in km/h at a 1% grade using the ACSM metabolic equation for walking. Target intensities and prescribed treadmill speeds were also calculated with the %VO2R method using the participants' mean VO2rest value of 3.9 mL/kg/min (%VO2R3.9). These were then compared with the exercise intensities and prescribed treadmill speeds obtained using each patient's measured VO2rest. Prescription error was assessed as the difference between the %VO2R3.5 and %VO2R3.9 targets and those obtained from %VO2R with the measured VO2rest. Prescription of exercise intensity through the %HRR method is imprecise for patients on medications that blunt the HR response to exercise. The %VO2R method offers a significant improvement in exercise prescription compared to %VO2peak. However, a disparity of 10% still exists when the standard value of 3.5 mL/kg/min is used for VO2rest in the %VO2R equation. The mean measured VO2rest in the 45 CHF patients was 11% higher (3.9±0.8 mL/kg/min) than the standard value provided by the ACSM. Applying the mean measured VO2rest value of 3.9 mL/kg/min rather than the standard assumed value of 3.5 mL/kg/min yielded prescriptions closer to those determined from the actual measured resting VO2. These results suggest that the %HRR method should not be used to prescribe exercise intensity for CHF patients. Instead, exercise intensity should be prescribed in terms of VO2 and expressed as %VO2R with measured values of VO2rest and VO2peak.
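
The two calculations at the heart of this comparison can be written compactly: the %VO2R target is VO2rest + fraction × (VO2peak − VO2rest), and the ACSM walking equation VO2 = 3.5 + 0.1·speed + 1.8·speed·grade (speed in m/min) converts that target into a treadmill speed. The sketch below works through both for a hypothetical patient, contrasting the assumed 3.5 mL/kg/min with a measured 3.9 mL/kg/min; the numbers are illustrative, not study data.

```python
# Worked sketch of the two calculations described above: (1) a target VO2 from
# the %VO2R method, VO2_target = VO2rest + fraction * (VO2peak - VO2rest), and
# (2) conversion of that VO2 to a treadmill walking speed at 1% grade with the
# ACSM walking equation VO2 = 3.5 + 0.1*speed + 1.8*speed*grade (speed in
# m/min).  The patient values below are hypothetical, not study data.

def vo2r_target(vo2_rest, vo2_peak, fraction):
    """Target VO2 (mL/kg/min) at a given fraction of VO2 reserve."""
    return vo2_rest + fraction * (vo2_peak - vo2_rest)

def acsm_walking_speed(vo2_target, grade=0.01):
    """Treadmill speed (km/h) giving vo2_target via the ACSM walking equation."""
    speed_m_min = (vo2_target - 3.5) / (0.1 + 1.8 * grade)
    return speed_m_min * 60.0 / 1000.0

vo2_peak = 17.0        # mL/kg/min, hypothetical CHF patient
for vo2_rest, label in [(3.5, "assumed 3.5"), (3.9, "measured 3.9")]:
    vo2_t = vo2r_target(vo2_rest, vo2_peak, 0.50)        # 50 %VO2R
    print(f"VO2rest {label}: target = {vo2_t:.2f} mL/kg/min, "
          f"speed = {acsm_walking_speed(vo2_t):.2f} km/h at 1% grade")
```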

Relevance:

30.00%

Publisher:

Abstract:

Methods: Subjects were N = 580 patients with rheumatism, asthma, orthopedic conditions or inflammatory bowel disease, who completed the heiQ™ at the beginning of, at the end of, and 3 months after a disease-specific inpatient rehabilitation program in Germany. Structural equation modeling techniques were used to estimate latent trait-change models and to test for measurement invariance in each heiQ™ scale. Coefficients of consistency, occasion specificity and reliability were computed.

Relevance:

30.00%

Publisher:

Abstract:

Modern healthcare is being reshaped by the growth of Electronic Medical Records (EMR). Recently, these records have been shown to be of great value for building clinical prediction models. In EMR data, patients' diseases and hospital interventions are captured through a set of diagnosis and procedure codes. These codes are usually represented in a tree form (e.g. the ICD-10 tree), and the codes within a tree branch may be highly correlated. These codes can be used as features to build a prediction model, and an appropriate feature selection can inform a clinician about important risk factors for a disease. Traditional feature selection methods (e.g. Information Gain, T-test, etc.) consider each variable independently and usually end up producing a long feature list. Recently, Lasso and related l1-penalty-based feature selection methods have become popular due to their joint feature selection property. However, Lasso is known to select only one of a group of correlated features, essentially at random. This hinders clinicians from arriving at a stable feature set, which is crucial for the clinical decision-making process. In this paper, we address this problem by using the recently proposed Tree-Lasso model. Since the stability behavior of Tree-Lasso is not well understood, we study it and compare it with that of other feature selection methods. Using a synthetic and two real-world datasets (Cancer and Acute Myocardial Infarction), we show that Tree-Lasso-based feature selection is significantly more stable than Lasso and comparable to other methods such as Information Gain, ReliefF and the T-test. We further show that, using different types of classifiers such as logistic regression, naive Bayes, support vector machines, decision trees and Random Forest, the classification performance of Tree-Lasso is comparable to Lasso and better than that of the other methods. Our results have implications for identifying stable risk factors for many healthcare problems and can therefore potentially assist clinical decision making for accurate medical prognosis.
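
Tree-Lasso itself is not available in standard libraries, but the instability that motivates it is easy to demonstrate. The sketch below builds synthetic data with blocks of highly correlated features, refits plain Lasso on bootstrap resamples, and summarises selection stability as the mean pairwise Jaccard similarity of the selected feature sets; the metric and data are illustrative choices, not necessarily those of the paper.

```python
# Illustration of the instability described above: on synthetic data with
# groups of highly correlated features, plain Lasso tends to pick one feature
# from a group essentially at random, so the selected set changes across
# bootstrap resamples.  Stability is summarised here as the mean pairwise
# Jaccard similarity of selected feature sets -- one common choice of metric,
# not necessarily the one used in the paper.  Tree-Lasso itself is not sketched.
from itertools import combinations
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, n_groups, group_size = 200, 5, 4
base = rng.normal(size=(n, n_groups))
# Each group = one latent signal plus small noise -> strongly correlated block.
X = np.repeat(base, group_size, axis=1) + 0.05 * rng.normal(size=(n, n_groups * group_size))
y = base @ np.array([2.0, -1.5, 1.0, 0.0, 0.0]) + rng.normal(size=n)

selected_sets = []
for _ in range(30):                        # bootstrap resamples
    idx = rng.integers(0, n, n)
    model = Lasso(alpha=0.1).fit(X[idx], y[idx])
    selected_sets.append(frozenset(np.flatnonzero(model.coef_ != 0)))

jaccard = [len(a & b) / len(a | b) if a | b else 1.0
           for a, b in combinations(selected_sets, 2)]
print(f"mean pairwise Jaccard stability of Lasso selections: {np.mean(jaccard):.2f}")
```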

Relevance:

30.00%

Publisher:

Abstract:

Estimating the degree of individual specialisation is likely to be sensitive to the methods used, as different methods record individuals' resource use over different time-periods. We combined animal-borne video cameras, GPS/TDR loggers and stable isotope values of plasma, red cells and sub-sampled whiskers to investigate individual foraging specialisation in female Australian fur seals (Arctocephalus pusillus doriferus) over various timescales. Combining these methods enabled us to (1) provide quantitative information on individuals' diet, allowing the identification of prey, (2) infer the temporal consistency of individual specialisation, and (3) assess how different methods and timescales affect our estimation of the degree of specialisation. Short-term inter-individual variation in diet was observed in the video data (mean pairwise overlap = 0.60), with the sampled population composed of both generalist and specialist individuals (nested network). However, the brevity of the temporal window is likely to artificially increase the apparent level of specialisation, because it does not record the seals' entire diet. Indeed, the correlation in isotopic values was tighter between red cells and whiskers (mid- to long-term foraging ecology) than between plasma and red cells (short- to mid-term) (R² = 0.93-0.73 vs. 0.55-0.41). δ13C and δ15N values of whiskers confirmed the temporal consistency of individual specialisation. Variation in isotopic niche was consistent across seasons and years, indicating long-term habitat (WIC/TNW = 0.28) and dietary (WIC/TNW = 0.39) specialisation. The results also highlight time-averaging issues (under-estimation of the degree of specialisation) when individual specialisation indices are calculated over long time-periods, so that no single timescale may provide a complete and accurate picture, emphasising the benefits of using complementary methods.
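
The WIC/TNW index quoted above is the standard Roughgarden-style decomposition of niche width applied to isotope values: the total niche width (total variance) is split into a within-individual component and a between-individual component, and a small WIC/TNW indicates individual specialisation. The sketch below computes a simple version of this ratio on invented δ15N data; it is illustrative and not the authors' exact estimator.

```python
# Sketch of the WIC/TNW specialisation index used above: the total niche width
# (TNW, total variance of isotope values) is split into a within-individual
# component (WIC, mean variance of repeated samples from the same individual)
# and a between-individual component.  WIC/TNW near 1 means generalist
# individuals; near 0 means strong individual specialisation.  The delta-15N
# values are invented, and this simple variance decomposition is illustrative,
# not the authors' exact estimator.
import numpy as np

# Repeated whisker-segment measurements (per mil) for each of four seals.
d15n = {
    "seal_A": [16.1, 16.3, 15.9, 16.2],
    "seal_B": [18.4, 18.7, 18.5, 18.6],
    "seal_C": [17.0, 17.2, 16.8, 17.1],
    "seal_D": [19.1, 18.9, 19.2, 19.0],
}

all_values = np.concatenate([np.asarray(v) for v in d15n.values()])
tnw = all_values.var()                                       # total niche width
wic = np.mean([np.asarray(v).var() for v in d15n.values()])  # within-individual
print(f"WIC/TNW = {wic / tnw:.2f}  (low value -> individual specialisation)")
```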

Relevance:

30.00%

Publisher:

Abstract:

The support vector machine (SVM) is a popular method for classification, well known for finding the maximum-margin hyperplane. Combining the SVM with an l1-norm penalty further enables it to perform feature selection and margin maximization simultaneously within a single framework. However, the l1-norm SVM is unstable in its feature selection when correlated features are present. We propose a new method to increase the stability of the l1-norm SVM by encouraging similarity between the weights of correlated features, with the correlations captured via a feature covariance matrix. Our proposed method can capture both positive and negative correlations between features. We formulate the model as a convex optimization problem and propose a solution based on alternating minimization. Using both synthetic and real-world datasets, we show that our model achieves better stability and classification accuracy than several state-of-the-art regularized classification methods.
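
The paper's exact penalty is not reproduced here, but one plausible convex formulation in the same spirit is sketched below: hinge loss plus an l1 penalty plus a term that pulls the weights of correlated features towards each other (sign-adjusted for negative correlations), with a feature correlation matrix standing in for the covariance matrix mentioned above. It is solved directly with CVXPY rather than by the authors' alternating-minimisation algorithm.

```python
# Sketch of an l1-norm SVM augmented with a correlation-based smoothness
# penalty in the spirit of the abstract: hinge loss + lam1*||w||_1 plus a term
# that pulls the weights of correlated features towards each other
# (sign-adjusted for negative correlations).  This is one plausible convex
# formulation solved directly with CVXPY, not the authors' exact model or
# their alternating-minimisation algorithm.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 6
X = rng.normal(size=(n, d))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)      # features 0 and 1 correlated
y = np.sign(X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=n))

C = np.corrcoef(X, rowvar=False)                   # feature correlation matrix
w, b = cp.Variable(d), cp.Variable()
lam1, lam2 = 0.05, 0.5

hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ w + b))) / n
corr_penalty = sum(
    np.abs(C[i, j]) * cp.square(w[i] - np.sign(C[i, j]) * w[j])
    for i in range(d) for j in range(i + 1, d)
)
problem = cp.Problem(cp.Minimize(hinge + lam1 * cp.norm1(w) + lam2 * corr_penalty))
problem.solve()
print("weights:", np.round(w.value, 3))  # correlated features receive similar weights
```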