992 results for Multistage stochastic linear programs
Abstract:
1. Cluster analysis of reference sites with similar biota is the initial step in creating River Invertebrate Prediction and Classification System (RIVPACS) and similar river bioassessment models such as the Australian River Assessment System (AUSRIVAS). This paper describes and tests an alternative prediction method, Assessment by Nearest Neighbour Analysis (ANNA), based on the same philosophy as RIVPACS and AUSRIVAS but without the grouping step that some people view as artificial. 2. The steps in creating ANNA models are: (i) weighting the predictor variables using a multivariate approach analogous to principal axis correlations, (ii) calculating the weighted Euclidean distance from a test site to the reference sites based on the environmental predictors, (iii) predicting the faunal composition based on the nearest reference sites and (iv) calculating an observed/expected (O/E) ratio analogous to RIVPACS/AUSRIVAS. 3. The paper compares AUSRIVAS and ANNA models on 17 datasets representing a variety of habitats and seasons. First, it examines each model's regressions of Observed versus Expected number of taxa, including the r², intercept and slope. Second, the two models' assessments of 79 test sites in New Zealand are compared. Third, the models are compared on test and presumed reference sites along a known trace metal gradient. Fourth, ANNA models are evaluated for Western Australia, a geographically distinct region of Australia. The comparisons demonstrate that ANNA and AUSRIVAS are generally equivalent in performance, although ANNA appears potentially more robust in the O versus E regressions and potentially more accurate at the trace metal gradient sites. 4. The ANNA method is recommended for use in the bioassessment of rivers, at least for corroborating the results of the well-established AUSRIVAS- and RIVPACS-type models, if not to replace them.
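The nearest-neighbour prediction in steps (ii)-(iv) is simple enough to sketch in code. The fragment below is a minimal illustration, assuming the predictor weights from step (i) are already available and using a hypothetical k-nearest-site rule with the common p ≥ 0.5 probability-of-capture convention; it is not the authors' implementation.

```python
import numpy as np

def anna_expected(test_env, ref_env, ref_taxa, weights, k=10):
    """Sketch of ANNA steps (ii)-(iv): weighted Euclidean distances to the
    reference sites, expected taxa from the k nearest references, and the
    expected richness E used in the O/E ratio.

    test_env : (p,) environmental predictors for the test site
    ref_env  : (n, p) predictors for the reference sites
    ref_taxa : (n, t) presence/absence matrix for the reference sites
    weights  : (p,) predictor weights from step (i), assumed precomputed
    """
    # Step (ii): weighted Euclidean distance from the test site to each reference site
    diff = ref_env - test_env
    dist = np.sqrt(((weights * diff) ** 2).sum(axis=1))

    # Step (iii): probability of capture for each taxon, averaged over the
    # k nearest reference sites (k is an illustrative choice)
    nearest = np.argsort(dist)[:k]
    p_capture = ref_taxa[nearest].mean(axis=0)

    # Step (iv): expected richness E; the >= 0.5 cut-off is the usual
    # RIVPACS/AUSRIVAS convention, assumed here for illustration
    expected = p_capture[p_capture >= 0.5].sum()
    return p_capture, expected

# O/E for a test site would then be the number of p >= 0.5 taxa actually
# observed there, divided by the expected value returned above.
```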
Abstract:
We present a scheme which offers a significant reduction in the resources required to implement linear optics quantum computing. The scheme is a variation of the proposal of Knill, Laflamme and Milburn, and makes use of an incremental approach to the error encoding to boost the probability of success.
Abstract:
Modeling volatile organic compound (VOC) adsorption onto cup-stacked carbon nanotubes (CSCNT) using the linear driving force model. Volatile organic compounds (VOCs) are an important category of air pollutants, and adsorption has been employed in the treatment (or simply the concentration) of these compounds. The current study used an ordinary analytical methodology to evaluate the properties of a cup-stacked carbon nanotube (CSCNT), a stacking morphology of truncated conical graphene with large amounts of open edges on the outer surface and empty central channels. This work used a Carbotrap bearing a cup-stacked structure (composite); for comparison, Carbotrap was used as a reference (without the nanotube). The retention and saturation capacities of both adsorbents at each concentration used (1, 5, 20 and 35 ppm of toluene and phenol) were evaluated. The composite's performance was greater than that of Carbotrap; on average, the saturation capacity of the composite was 67% higher than that of Carbotrap. The Langmuir isotherm model was used to fit the equilibrium data for both adsorbents, and a linear driving force (LDF) model was used to quantify intraparticle adsorption kinetics. The LDF model was suitable for describing the curves.
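As a concrete illustration of the two models named in the abstract, the sketch below fits a Langmuir isotherm and integrates the linear driving force equation dq/dt = k_LDF (q_e - q). The numerical data and the rate constant are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import solve_ivp

# Langmuir isotherm: q_e = q_max * K * C / (1 + K * C)
def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

# Illustrative (hypothetical) equilibrium data: concentration in ppm, uptake in mg/g
C_data = np.array([1.0, 5.0, 20.0, 35.0])
q_data = np.array([12.0, 38.0, 71.0, 82.0])
(q_max, K), _ = curve_fit(langmuir, C_data, q_data, p0=(100.0, 0.1))

# Linear driving force (LDF) model: dq/dt = k_ldf * (q_e - q)
k_ldf = 0.05                       # assumed mass-transfer coefficient, 1/s
q_e = langmuir(20.0, q_max, K)     # equilibrium uptake at 20 ppm
sol = solve_ivp(lambda t, q: k_ldf * (q_e - q), (0.0, 300.0), [0.0], dense_output=True)

print(f"fitted q_max = {q_max:.1f} mg/g, K = {K:.3f} 1/ppm")
print(f"uptake after 60 s: {sol.sol(60.0)[0]:.1f} mg/g")
```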
Abstract:
The goal of this paper is to study the global existence of small data solutions to the Cauchy problem for the nonlinear wave equation $u_{tt} - a(t)^2 \Delta u = u_t^2 - a(t)^2 |\nabla u|^2$. In particular we are interested in statements for the 1D case. We will explain how the interplay between the increasing and oscillating behavior of the coefficient will influence global existence of small data solutions. Copyright © 2011 John Wiley & Sons, Ltd.
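Written out in display form, the Cauchy problem reads as below; the small-data initial conditions shown are the generic form such statements assume and are added here only for orientation, not taken verbatim from the paper.

```latex
\[
\begin{cases}
u_{tt} - a(t)^2 \Delta u = u_t^2 - a(t)^2 |\nabla u|^2, & (t,x) \in (0,\infty) \times \mathbb{R}^n,\\[2pt]
u(0,x) = \varepsilon u_0(x), \qquad u_t(0,x) = \varepsilon u_1(x),
\end{cases}
\]
```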
Abstract:
New differential linear coherent scattering coefficient (μ_CS) data for four biological tissue types (fat pork, tendon chicken, and adipose and fibroglandular human breast tissues), covering a large momentum transfer interval (0.07 ≤ q ≤ 70.5 nm⁻¹) obtained by combining WAXS and SAXS data, are presented in order to emphasize the need to update the default database by including molecular interference and large-scale arrangement effects. The results showed that, in the low momentum transfer region, the differential linear coherent scattering coefficient reflects the influence of large-scale arrangement, mainly due to collagen fibrils for the tendon chicken and fibroglandular breast samples and to triacylglycerides for the fat pork and adipose breast samples. At high momentum transfer, μ_CS reflects molecular interference effects related to water for the tendon chicken and fibroglandular samples and to fatty acids for the fat pork and adipose samples. © 2009 Elsevier B.V. All rights reserved.
Abstract:
The refinement calculus provides a framework for the stepwise development of imperative programs from specifications. In this paper we study a refinement calculus for deriving logic programs. Dealing with logic programs rather than imperative programs has the dual advantages that, due to the expressive power of logic programs, the final program is closer to the original specification, and each refinement step can achieve more. Together these reduce the overall number of derivation steps. We present a logic programming language extended with specification constructs (including general predicates, assertions, and types and invariants) to form a wide-spectrum language. General predicates allow non-executable properties to be included in specifications. Assertions, types and invariants make assumptions about the intended inputs of a procedure explicit, and can be used during refinement to optimize the constructed logic program. We provide a semantics for the extended logic programming language and derive a set of refinement laws. Finally we apply these to an example derivation.
Abstract:
We identify a test of quantum mechanics versus macroscopic local realism in the form of stochastic electrodynamics. The test uses the steady-state triple quadrature correlations of a parametric oscillator below threshold.
Abstract:
The classification rules of linear discriminant analysis are defined by the true mean vectors and the common covariance matrix of the populations from which the data come. Because these true parameters are generally unknown, they are commonly estimated by the sample mean vector and covariance matrix of the data in a training sample randomly drawn from each population. However, these sample statistics are notoriously susceptible to contamination by outliers, a problem compounded by the fact that the outliers may be invisible to conventional diagnostics. High-breakdown estimation is a procedure designed to remove this cause for concern by producing estimates that are immune to serious distortion by a minority of outliers, regardless of their severity. In this article we motivate and develop a high-breakdown criterion for linear discriminant analysis and give an algorithm for its implementation. The procedure is intended to supplement rather than replace the usual sample-moment methodology of discriminant analysis either by providing indications that the dataset is not seriously affected by outliers (supporting the usual analysis) or by identifying apparently aberrant points and giving resistant estimators that are not affected by them.
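To illustrate the idea of plugging high-breakdown estimates into the linear discriminant rule, the sketch below substitutes the Minimum Covariance Determinant (MCD) estimator for the sample mean vectors and covariance matrix. MCD is used here purely as a stand-in robust estimator; it is not the specific high-breakdown criterion or algorithm developed in the article.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_fit(X, y):
    """Fit a robust LDA rule: per-class MCD location/scatter estimates
    pooled into a common covariance, replacing the usual sample moments."""
    classes = np.unique(y)
    means, covs, priors = {}, [], {}
    for c in classes:
        Xc = X[y == c]
        mcd = MinCovDet(random_state=0).fit(Xc)      # robust location and scatter
        means[c] = mcd.location_
        covs.append((len(Xc) - 1) * mcd.covariance_)
        priors[c] = len(Xc) / len(X)
    pooled_prec = np.linalg.inv(sum(covs) / (len(X) - len(classes)))  # pooled precision
    return classes, means, pooled_prec, priors

def robust_lda_predict(X, classes, means, pooled_prec, priors):
    # Linear discriminant score: x' S^-1 mu_k - 0.5 mu_k' S^-1 mu_k + log pi_k
    scores = np.column_stack([
        X @ pooled_prec @ means[c]
        - 0.5 * means[c] @ pooled_prec @ means[c]
        + np.log(priors[c])
        for c in classes
    ])
    return classes[np.argmax(scores, axis=1)]
```

Comparing these assignments with those from the ordinary sample-moment rule gives exactly the kind of diagnostic the article describes: agreement supports the usual analysis, while disagreement flags possible outlier contamination.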
Abstract:
A number of theoretical and experimental investigations have been made into the nature of purlin-sheeting systems over the past 30 years. These systems commonly consist of cold-formed zed or channel section purlins connected to corrugated sheeting. They have proven difficult to model due to the complexity of both the purlin deformation and the restraint provided to the purlin by the sheeting. Part 1 of this paper presented a non-linear elasto-plastic finite element model which, by incorporating both the purlin and the sheeting in the analysis, allowed the interaction between the two components of the system to be modelled. This paper presents a simplified version of the first model which has considerably decreased requirements in terms of computer memory, running time and data preparation. The Simplified Model includes only the purlin but allows for the sheeting's shear and rotational restraints by modelling these effects as springs located at the purlin-sheeting connections. Two accompanying programs determine the stiffness of these springs numerically. As in the Full Model, the Simplified Model is able to account for the cross-sectional distortion of the purlin, the shear and rotational restraining effects of the sheeting, and failure of the purlin by local buckling or yielding. The model requires no experimental or empirical input and its validity is shown by its good correlation with experimental results. © 1997 Elsevier Science Ltd.
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in χ^(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. © 1997 Academic Press.
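To make the semi-implicit idea concrete, here is a minimal sketch of a midpoint (semi-implicit) central-difference step for a single 1D stochastic parabolic PDE with multiplicative noise. The model equation, parameters and fixed number of midpoint iterations are illustrative assumptions and deliberately simpler than the coupled field equations treated in the paper.

```python
import numpy as np

# Model equation (assumed):  d(psi)/dt = D * d2(psi)/dx2 + f(psi) + g(psi) * xi,
# with xi a Gaussian white noise in space and time and periodic boundaries.

def semi_implicit_step(psi, dt, dx, D, f, g, rng, iters=3):
    """Advance psi by one step with a midpoint (semi-implicit) rule."""
    noise = rng.normal(0.0, np.sqrt(dt / dx), size=psi.shape)  # discretized Wiener increment
    mid = psi.copy()
    for _ in range(iters):  # fixed-point iteration for the midpoint field
        lap = (np.roll(mid, -1) - 2.0 * mid + np.roll(mid, 1)) / dx**2  # central difference
        drift = D * lap + f(mid)
        mid = psi + 0.5 * (dt * drift + g(mid) * noise)
    # full step uses drift and noise amplitude evaluated at the midpoint
    lap = (np.roll(mid, -1) - 2.0 * mid + np.roll(mid, 1)) / dx**2
    return psi + dt * (D * lap + f(mid)) + g(mid) * noise

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 128, endpoint=False)
psi = np.exp(-100.0 * (x - 0.5) ** 2)
for _ in range(1000):
    psi = semi_implicit_step(psi, 1e-5, x[1] - x[0], 0.1,
                             lambda p: -p**3, lambda p: 0.01 * p, rng)
```

Correlation functions would then be estimated by averaging products of the field over many independent noise trajectories.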
Abstract:
When linear equality constraints are invariant through time, they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
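The augmentation idea can be sketched in a few lines: a time-varying constraint R_t β_t = r_t is appended to the observation equation as an extra observation with zero measurement variance before the Kalman filter is run. The random-walk state equation and all names below are illustrative assumptions, not the paper's specific model.

```python
import numpy as np

def kalman_step_augmented(beta, P, y_t, x_t, R_t, r_t, sigma2):
    """One Kalman update for a time-varying-parameter regression
    y_t = x_t' beta_t + e_t, with the constraint R_t beta_t = r_t imposed
    by augmenting the observation equation with a noiseless 'observation'."""
    # State equation (assumed): beta_t = beta_{t-1} + v_t, with no state noise here
    beta_pred, P_pred = beta, P

    # Augmented observation equation: [y_t; r_t] = [x_t'; R_t] beta_t + [e_t; 0]
    Z = np.vstack([x_t.reshape(1, -1), R_t.reshape(1, -1)])
    obs = np.array([y_t, r_t])
    H = np.diag([sigma2, 0.0])            # zero variance enforces the constraint

    F = Z @ P_pred @ Z.T + H              # innovation covariance
    K = P_pred @ Z.T @ np.linalg.inv(F)   # Kalman gain
    innov = obs - Z @ beta_pred
    beta_new = beta_pred + K @ innov
    P_new = P_pred - K @ Z @ P_pred
    return beta_new, P_new
```

In practice the measurement variance and any state-noise variances would be estimated by maximising the prediction-error likelihood with a numerical optimiser, as the abstract indicates.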
Abstract:
This paper considers a stochastic frontier production function which has an additive, heteroscedastic error structure. The model allows for negative or positive marginal production risks of inputs, as originally proposed by Just and Pope (1978). The technical efficiencies of individual firms in the sample are a function of the levels of the input variables in the stochastic frontier, in addition to the technical inefficiency effects. These are two features of the model which are not exhibited by the commonly used stochastic frontiers with multiplicative error structures. An empirical application is presented using cross-sectional data on Ethiopian peasant farmers. The null hypothesis of no technical inefficiencies of production among these farmers is accepted. Further, the flexible risk models do not fit the data on peasant farmers as well as the traditional stochastic frontier model with multiplicative error structure.
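For orientation, a Just and Pope (1978) style additive, heteroscedastic frontier can be written as below, with f the mean (frontier) function, g the risk function whose input derivatives may be positive or negative, v a symmetric noise term and u a one-sided technical inefficiency term. This is the generic form of the specification class, not necessarily the exact parameterisation estimated in the paper.

```latex
\[
y_i = f(\mathbf{x}_i;\boldsymbol{\beta}) + g(\mathbf{x}_i;\boldsymbol{\theta})\,v_i - u_i,
\qquad v_i \sim N(0,\sigma_v^2), \quad u_i \ge 0 .
\]
```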
Abstract:
Linear IgA disease is a rare autoimmune subepidermal bullous disorder characterized by linear IgA deposits at the epidermal basement membrane zone. According to the literature, in patients who have linear IgA disease and become pregnant, the disease tends to improve. We report a case of linear IgA disease induced by pregnancy, successfully treated with dapsone and prednisone with no adverse effects observed in the patient and her newborns.
Abstract:
Leptospirosis is a zoonotic infection associated with severe diseases such as leptospirosis pulmonary haemorrhage syndrome (LPHS). The cause of pulmonary haemorrhage is unclear. Understanding which mechanisms and processes are involved in LPHS will be important in treatment regimens under development for this life-threatening syndrome. In the present study, we evaluated 30 lung specimens from LPHS patients and seven controls using histology and immunohistochemistry (detection of IgM, IgG, IgA and C3) in order to describe the pathological features associated with this syndrome. Immunoglobulin deposits were detected on the alveolar surface in 18/30 LPHS patients. Three staining patterns were observed for the immunoglobulins and C3 in the lung tissues of LPHS patients: AS, delicate linear staining adjacent to the alveolar surface, which was indicative of a membrane covering the luminal surface of type I and II pneumocyte cells; S, heterogeneous staining which was sporadically distributed along the alveolar septum; and IA, weak, focal intra-alveolar granular staining. Human LPHS is associated with individual and unique histological patterns that differ from those of other causes of pulmonary haemorrhage. In the present study, it was found that the linear deposition of immunoglobulins (IgA, IgG and IgM) and complement on the alveolar surface may play a role in the pathogenesis of pulmonary haemorrhage in human leptospirosis.