936 results for special linear system
Abstract:
This paper addresses the numerical solution of random crack propagation problems using the coupling of the boundary element method (BEM) and reliability algorithms. The crack propagation phenomenon is efficiently modelled using BEM, due to its mesh reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point. Different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of both coupling methods is compared in application to some crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the problem nonlinearity. The computational cost of direct coupling has been shown to be a fraction of the cost of response surface solutions, regardless of the experiment design or adaptive scheme considered. (C) 2012 Elsevier Ltd. All rights reserved.
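One common way to realize the direct coupling idea described above is a first-order reliability (HL-RF) iteration in which the gradients of the implicit limit state function are taken by finite differences of the numerical response. The sketch below, in Python, uses a made-up algebraic limit state g as a stand-in for a call to the BEM crack-propagation model; it illustrates the coupling scheme, not the paper's implementation.

import numpy as np

def direct_coupling_form(g, u0, tol=1e-6, max_iter=50, h=1e-6):
    # HL-RF iteration in standard normal space; g is the implicit limit
    # state function (here a stand-in for the BEM mechanical model) and
    # its gradient is taken by finite differences of the numerical response.
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g0 = g(u)
        grad = np.array([(g(u + h * e) - g0) / h for e in np.eye(len(u))])
        u_new = (grad @ u - g0) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u   # reliability index beta and design point

# Toy limit state in standard normal variables (hypothetical placeholder).
g = lambda u: 3.0 - u[0] + 0.5 * u[1] ** 2
beta, u_star = direct_coupling_form(g, np.zeros(2))
print(f"beta = {beta:.3f}, design point = {u_star}")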
Abstract:
Background: There are no available statistical data about sudden cardiac death in Brazil. Therefore, this study was conducted to evaluate the incidence of sudden cardiac death in our population and its implications. Methods: The research methodology was based on Thurstone's Law of Comparative Judgment, whose premise is that the more a stimulus A differs from a stimulus B, the greater the number of people who will perceive this difference. This technique allows an estimation of actual occurrences from subjective perceptions, when compared to official statistics. Data were collected through telephone interviews conducted with Primary and Secondary Care physicians of the Public Health Service in the Metropolitan Area of Sao Paulo (MASP). Results: In the period from October 19, 2009, to October 28, 2009, 196 interviews were conducted. The incidence of 21,270 cases of sudden cardiac death per year was estimated by linear regression analysis of the physicians' responses and data from the Mortality Information System of the Brazilian Ministry of Health, with the following correlation and determination coefficients: r = 0.98 and r² = 0.95 (95% confidence interval 0.8-1.0, P < 0.05). The lack of a waiting list for specialized care and socioadministrative problems were considered the main barriers to tertiary care access. Conclusions: The incidence of sudden cardiac death in the MASP is high, and it was estimated as being higher than that of all other causes of death; the extrapolation technique based on the physicians' perceptions was validated; and the most important bureaucratic barriers to patient referral to tertiary care have been identified. (PACE 2012; 35:1326-1331)
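The extrapolation step described above can be pictured as a simple calibration regression: physicians' perceived counts for causes of death with known official incidence define a line, which is then used to estimate the unknown incidence. The sketch below uses invented numbers purely for illustration; it is not the study's data.

import numpy as np

# Hypothetical illustration: perceived event counts (from interviews) for
# causes of death whose incidence is known from the Mortality Information
# System, plus the perceived count for sudden cardiac death. Numbers invented.
perceived_known = np.array([12.0, 35.0, 60.0, 90.0])             # interview scores
official_known = np.array([4000.0, 11000.0, 21000.0, 30000.0])   # deaths per year

# Calibration line relating perception to official statistics.
slope, intercept = np.polyfit(perceived_known, official_known, 1)
r = np.corrcoef(perceived_known, official_known)[0, 1]
print(f"r = {r:.2f}, r^2 = {r ** 2:.2f}")

# Extrapolate the incidence of sudden cardiac death from its perceived score.
perceived_scd = 70.0
estimated_scd = slope * perceived_scd + intercept
print(f"estimated sudden cardiac deaths per year: {estimated_scd:.0f}")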
Abstract:
Among the modifications that occur during the neonatal period, pulmonary development is the most critical. The neonate's lungs must be able to perform adequate gas exchange, which was previously accomplished by the placenta. Neonatal respiratory distress syndrome is defined as insufficient surfactant production or pulmonary structural immaturity and is specifically relevant to preterm newborns. Prenatal maternal betamethasone treatment of bitches at 55 days of gestation leads to structural changes in the neonatal lung parenchyma and consequently an improvement in the preterm neonatal respiratory condition, but not to an increase in pulmonary surfactant production. Parturition represents an important challenge to neonatal adaptation, as the uterine and abdominal contractions during labour provoke intermittent hypoxia. Immediately after birth, puppies present venous mixed acidosis (low blood pH and high carbon dioxide saturation) and low but satisfactory Apgar scores. Thus, the combination of physiological hypoxia during birth and the initial effort of filling the pulmonary alveoli with oxygen results in anaerobiosis. As a neonatal adaptation follow-up, the Apgar analysis indicates a tachypnoea response after 1 h of life, which leads to a shift in the blood acid-base status to metabolic acidosis. One hour is sufficient for canine neonates to achieve an ideal Apgar score; however, a haemogasometric imbalance persists. Dystocia promotes a long-lasting bradycardia effect, slows down Apgar score progression and aggravates metabolic acidosis and stress. The latest data reinforce the need to intervene accurately during canine parturition and to offer adequate medical treatment to puppies that underwent a pathological labour.
Abstract:
The innate and adaptive immune responses in neonates are usually functionally impaired when compared with their adult counterparts. The qualitative and quantitative differences in the neonatal immune response put neonates at risk for the development of bacterial and viral infections, resulting in increased mortality. Newborns often exhibit decreased production of Th1-polarizing cytokines and are biased toward Th2-type responses. Studies aimed at understanding the plasticity of the immune response in the neonatal and early infant periods, or that seek to improve neonatal innate immune function with adjuvants or special formulations, are crucial for preventing the infectious disease burden in this susceptible group. A considerable number of studies focused on identifying potential immunomodulatory therapies have been performed in murine models. This article highlights the strategies used in the emerging field of immunomodulation against bacterial and viral pathogens, focusing on preclinical studies carried out in animal models, with particular emphasis on neonatal-specific immune deficits.
Abstract:
This work proposes a computational tool to assist power system engineers in the field tuning of power system stabilizers (PSSs) and Automatic Voltage Regulators (AVRs). The outcome of this tool is a range of gain values for these controllers within which there is a theoretical guarantee of stability for the closed-loop system. This range is given as a set of limit values for the static gains of the controllers of interest, in such a way that the engineer responsible for the field tuning of PSSs and/or AVRs can be confident with respect to system stability when adjusting the corresponding static gains within this range. This feature of the proposed tool is highly desirable from a practical viewpoint, since the PSS and AVR commissioning stage always involves some readjustment of the controller gains to account for the differences between the nominal model and the actual behavior of the system. By capturing these differences as uncertainties in the model, this computational tool is able to guarantee stability for the whole uncertain model using an approach based on linear matrix inequalities. It is also important to remark that the tool proposed in this paper can be applied to other types of parameters of either PSSs or Power Oscillation Dampers, as well as to other types of controllers (such as speed governors, for example). To show its effectiveness, applications of the proposed tool to two benchmarks for small-signal stability studies are presented at the end of this paper.
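A rough sketch of the kind of certificate such a tool produces: for each candidate static gain, closed-loop stability is checked by the feasibility of a Lyapunov LMI, and the certified gain range is read off the sweep. The plant, feedback structure and numbers below are assumptions for illustration, and the nominal-model check shown here is a simplification of the uncertain (polytopic LMI) conditions used by the tool.

import numpy as np
import cvxpy as cp

def is_quadratically_stable(A_cl, eps=1e-6):
    # Feasibility of the Lyapunov LMI  A_cl' P + P A_cl < 0,  P > 0.
    n = A_cl.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> eps * np.eye(n),
            A_cl.T @ P + P @ A_cl << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Hypothetical second-order plant with a lightly damped mode and a static
# output-feedback gain k (a placeholder for the linearized power system with
# the PSS/AVR gain under tuning).
A = np.array([[0.0, 1.0],
              [-1.0, 0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])

def closed_loop(k):
    return A - k * (B @ C)

# Sweep the static gain and report the range certified by the LMI test.
gains = np.linspace(0.0, 5.0, 26)
stable = [k for k in gains if is_quadratically_stable(closed_loop(k))]
if stable:
    print(f"gains certified stable: {min(stable):.1f} to {max(stable):.1f}")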
Abstract:
A systematic approach to modeling nonlinear systems using norm-bounded linear differential inclusions (NLDIs) is proposed in this paper. The resulting NLDI model is suitable for the application of linear control design techniques, and it is therefore possible to fulfill certain specifications for the underlying nonlinear system, within an operating region of interest in the state space, using a linear controller designed for this NLDI model. Hence, a procedure to design a dynamic output feedback controller for the NLDI model is also proposed in this paper. One of the main contributions of the proposed modeling and control approach is the use of the mean-value theorem to represent the nonlinear system by a linear parameter-varying model, which is then mapped into a polytopic linear differential inclusion (PLDI) within the region of interest. To avoid the combinatorial problem that is inherent to polytopic models for medium- and large-sized systems, the PLDI is transformed into an NLDI, and the whole process is carried out ensuring that all trajectories of the underlying nonlinear system are also trajectories of the resulting NLDI within the operating region of interest. Furthermore, it is also possible to choose a particular structure for the NLDI parameters to reduce the conservatism in the representation of the nonlinear system by the NLDI model, and this feature is another important contribution of this paper. Once the NLDI representation of the nonlinear system is obtained, the paper proposes the application of a linear control design method to this representation. The design is based on quadratic Lyapunov functions and formulated as a search problem over a set of bilinear matrix inequalities (BMIs), which is solved using a two-step separation procedure that maps the BMIs into a set of corresponding linear matrix inequalities. Two numerical examples are given to demonstrate the effectiveness of the proposed approach.
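For a norm-bounded LDI of the form x_dot = A x + B p, q = C x, p = Delta(t) q with ||Delta|| <= 1, quadratic stability can be checked with the standard S-procedure LMI, as sketched below with illustrative matrices (not taken from the paper); the paper's output-feedback design step via BMIs is not reproduced here.

import numpy as np
import cvxpy as cp

# Illustrative NLDI data (placeholders):
#   x_dot = A x + B p,  q = C x,  p = Delta(t) q,  ||Delta|| <= 1.
A = np.array([[-2.0, 1.0],
              [0.0, -1.5]])
B = np.array([[0.5],
              [1.0]])
C = np.array([[0.3, 0.0]])

n, m = A.shape[0], B.shape[1]
P = cp.Variable((n, n), symmetric=True)
lam = cp.Variable(nonneg=True)

# Standard S-procedure LMI for quadratic stability of the norm-bounded LDI:
# [ A'P + PA + lam*C'C   PB     ]
# [ B'P                 -lam*I  ]  < 0,   with P > 0.
M = cp.bmat([[A.T @ P + P @ A + lam * (C.T @ C), P @ B],
             [B.T @ P, -lam * np.eye(m)]])
cons = [P >> 1e-6 * np.eye(n), M << -1e-9 * np.eye(n + m)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("quadratically stable:", prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE))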
Abstract:
Hermite interpolation is increasingly proving to be a powerful numerical solution tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two- and three-dimensional space. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov–Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the PG approach, and for the Galerkin formulation too, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared to velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
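The elements above carry the Hermite idea, interpolating both values and derivatives, over to 2D/3D velocity fields. As a minimal one-dimensional illustration of that idea only (not of the Stokes discretization itself), the sketch below builds a cubic Hermite interpolant with SciPy.

import numpy as np
from scipy.interpolate import CubicHermiteSpline

# 1D Hermite interpolation: the interpolant matches both nodal values and
# nodal derivatives, the property the Zienkiewicz-type elements above carry
# over to velocity fields in 2D/3D.
x = np.linspace(0.0, np.pi, 5)   # nodes
y = np.sin(x)                    # nodal values
dydx = np.cos(x)                 # nodal derivatives

spline = CubicHermiteSpline(x, y, dydx)

xx = np.linspace(0.0, np.pi, 200)
err = np.max(np.abs(spline(xx) - np.sin(xx)))
print(f"max interpolation error with 5 Hermite nodes: {err:.2e}")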
Abstract:
The Josephson junction model is applied to the experimental implementation of a classical bifurcation in a quadrupolar nuclear magnetic resonance system. There are two regimes, one linear and one nonlinear, which are implemented by the radio-frequency and the quadrupolar terms of the Hamiltonian of a spin system, respectively. These terms provide an explanation of the symmetry breaking due to bifurcation. Bifurcation depends on the coexistence of both regimes in different proportions. The experiment is performed on a lyotropic liquid crystal sample of an ordered ensemble of 133Cs nuclei with spin I = 7/2 at room temperature. Our experimental results confirm that bifurcation happens independently of the spin value and of the physical system. With this experimental spin scenario, we confirm that a system of quadrupolar nuclei can be described analogously to a symmetric two-mode Bose-Einstein condensate.
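For context, the symmetric two-mode (bosonic Josephson junction) mean-field model has a well-known pitchfork bifurcation on its phi = pi branch, which the sketch below evaluates as the nonlinear-to-linear ratio Lambda is varied. Reading Lambda as the ratio of the quadrupolar to the radio-frequency term is only a schematic analogy here, not the paper's actual Hamiltonian parameters.

import numpy as np

# Two-mode (bosonic Josephson junction) mean-field equations:
#   z_dot   = -sqrt(1 - z**2) * sin(phi)
#   phi_dot = Lam * z + z / sqrt(1 - z**2) * cos(phi)
# On the phi = pi branch the stationary imbalance bifurcates at Lam = 1:
#   z* = 0 for Lam <= 1,   z* = ±sqrt(1 - 1/Lam**2) for Lam > 1.
# Here Lam stands for the nonlinear-to-linear ratio (schematic analogy only).

def stationary_imbalance(Lam):
    return 0.0 if Lam <= 1.0 else np.sqrt(1.0 - 1.0 / Lam ** 2)

for Lam in [0.5, 0.9, 1.0, 1.5, 2.0, 5.0]:
    z = stationary_imbalance(Lam)
    print(f"Lambda = {Lam:4.1f}  ->  stationary branch z* = ±{z:.3f}")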
Abstract:
Reinforced concrete columns might fail because of buckling of the longitudinal reinforcing bars when exposed to earthquake motions. Depending on the hoop stiffness and the length-over-diameter ratio, the instability can be local (in between two subsequent hoops) or global (the buckling length comprises several hoop spacings). To gain insight into the topic, an extensive literature review of 19 existing models has been carried out, covering different approaches and assumptions which yield different results. A finite element fiber analysis was carried out to study the local buckling behavior with varying length-over-diameter and initial imperfection-over-diameter ratios. The comparison of the analytical results with some experimental results shows good agreement before the post-buckling behavior undergoes large deformation. Furthermore, different global buckling analysis cases were run considering the influence of different parameters; for certain hoop stiffnesses and length-over-diameter ratios, local buckling was encountered. A parametric study yields a dimensionless critical stress as a function of a stiffness ratio characterized by the reinforcement configuration.
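As a back-of-the-envelope companion to the local buckling discussion above, the sketch below evaluates the elastic Euler critical stress of a circular bar buckling over the free length between two hoops. The effective-length factor, bar size and material data are assumptions for illustration, and real bars at low slenderness typically buckle inelastically, so this is only an upper-bound estimate.

import math

def euler_critical_stress(E, d, L, k=1.0):
    # Elastic Euler critical buckling stress of a circular bar of diameter d
    # over a free length L between two hoops; k is the effective-length
    # factor (1.0 = pinned-pinned, an assumption standing in for the actual
    # restraint provided by the hoop stiffness).
    I = math.pi * d ** 4 / 64.0   # second moment of area
    A = math.pi * d ** 2 / 4.0    # cross-sectional area
    return math.pi ** 2 * E * I / (A * (k * L) ** 2)

# Hypothetical example: 16 mm bar, hoops every 100 mm, E = 200 GPa.
E, d, L = 200e3, 16.0, 100.0      # MPa, mm, mm
sigma_cr = euler_critical_stress(E, d, L)
print(f"critical stress: {sigma_cr:.0f} MPa  (L/d = {L / d:.1f})")
print(f"dimensionless critical stress sigma_cr/E = {sigma_cr / E:.4f}")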
Abstract:
This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a fairly predominant practice in most real markets, which are mostly characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades due to the technological advances that have allowed companies to gather more and more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination. The theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing by starting from its foundations, mechanism design under common agency. The different approaches to modelling non-linear pricing are then reviewed. In particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and the recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes. Information about consumers' reservation prices is incomplete and the production technology is characterized by size economies. The model provides insights into the sizes of the products that one finds in the market. Four equilibrium regions are identified depending on the relative intensity of size economies with respect to consumers' evaluation of the good: regions in which the product is supplied in a single unit, in several different sizes, or only in a very large one. Both the private and the social desirability of non-linear pricing vary across the different equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues seem to be at the core of the recent debate on whether or not to regulate the internet. One of the main questions posed is whether the telecom companies, which own the networks constituting the internet, should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices and the participation in the internet of providers and final users. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.
Abstract:
In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several reasons: (i) the performance of solvers handling non-linear constraints has largely improved; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard because they are a generalization of MILP problems, which are NP-hard themselves. However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e. those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different possible approaches that one can study to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs and the most common methods used to solve them. These techniques play a fundamental role in the resolution of general MINLP problems. We then describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, in Part 2 four different methods aimed at solving different classes of MINLP problems are presented. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design problems. The results show that each of these different methods has advantages and disadvantages. Thus, the method adopted to solve a real-world problem should typically be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software). We present the main characteristics of solvers for each special case of MINLP.
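As a tiny concrete instance of the problem class discussed above, the sketch below states a non-convex MINLP (a bilinear constraint together with an integer variable) in Pyomo. The model and data are invented, and a global MINLP solver such as Couenne is assumed to be available.

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeIntegers, NonNegativeReals,
                           SolverFactory, minimize)

# Tiny invented non-convex MINLP: the bilinear constraint x*y <= 6 makes the
# continuous relaxation non-convex, and y is required to be integer.
m = ConcreteModel()
m.x = Var(within=NonNegativeReals, bounds=(0, 10))
m.y = Var(within=NonNegativeIntegers, bounds=(0, 5))
m.obj = Objective(expr=(m.x - 4) ** 2 + (m.y - 4) ** 2, sense=minimize)
m.bilinear = Constraint(expr=m.x * m.y <= 6)

# A global MINLP solver such as Couenne or SCIP is assumed to be installed.
SolverFactory("couenne").solve(m)
print("x =", m.x.value, " y =", m.y.value)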
Abstract:
The obligate intracellular pathogen Chlamydia trachomatis is a gram-negative bacterium which infects epithelial cells of the reproductive tract. C. trachomatis is the leading cause of bacterial sexually transmitted disease worldwide, and a vaccine against this pathogen is much needed. Much evidence suggests that both antigen-specific Th1 cells and antibodies may be important to provide protection against Chlamydia infection. In a previous study we identified eight new Chlamydia antigens inducing CD4-Th1 and/or antibody responses that, when combined properly, can protect mice from Chlamydia infection. However, all selected recombinant antigens, upon immunization in mice, elicited antibodies not able to neutralize Chlamydia infectivity in vitro. With the aim of improving the quality of the immune response by inducing effective neutralizing antibodies, we used a novel delivery system based on the unique capacity of E. coli Outer Membrane Vesicles (OMV) to present membrane proteins in their natural composition and conformation. We expressed Chlamydia antigens, previously identified as vaccine candidates, in the OMV system. Among all OMV preparations, the one expressing the HtrA Chlamydia antigen (OMV-HtrA), which proved to be the best in terms of yield and quantity of expressed protein, was used to produce mouse immune sera to be tested in an in vitro neutralization assay. We observed that OMV-HtrA elicited specific antibodies able to efficiently neutralize Chlamydia infection in vitro, indicating that the presentation of the antigens in their natural conformation is crucial to induce an effective immune response. This is one of the first examples in which antibodies directed against a new Chlamydia antigen, other than MOMP (the only antigen known so far to induce neutralizing antibodies), are able to block Chlamydia infectivity in vitro. Finally, by performing an epitope mapping study, we investigated the specificity of the antibody response induced by the recombinant HtrA and by OMV-HtrA. In particular, we identified some linear epitopes exclusively recognized by antibodies raised with the OMV-HtrA system, thereby identifying the antigen regions likely responsible for the neutralizing effect.
Abstract:
This thesis deals with an investigation of Decomposition and Reformulation for solving Integer Linear Programming problems. This method is often a very successful approach computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is to develop a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of automatic decomposition and reformulation of the input problem, is applicable as a black-box solution algorithm, and works as a complement and alternative to the standard solution techniques. The idea of Decomposing and Reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one (or more) subset(s) of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP, several decompositions can be defined depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists, in which case it is added to the master, which is solved again (column generation); otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (Branch and Price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a strong speed-up in the solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
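To make the master/pricing loop described above concrete, the sketch below runs textbook column generation on a small cutting-stock instance, the classical setting for Dantzig-Wolfe reformulation. The instance data are invented; the restricted master LP is solved with SciPy and the pricing problem is the usual unbounded knapsack over the dual prices.

import numpy as np
from scipy.optimize import linprog

# Invented cutting-stock instance: roll width W, item widths and demands.
W = 100
widths = [45, 36, 31, 14]
demand = [97, 610, 395, 211]

# Trivial starting patterns: one item type per roll.
patterns = [[W // w if i == j else 0 for i, w in enumerate(widths)]
            for j in range(len(widths))]

def solve_master(patterns):
    # LP relaxation of the restricted master: min sum(x) s.t. patterns cover demand.
    A = np.array(patterns, dtype=float).T          # rows = items, cols = patterns
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-np.array(demand, dtype=float),
                  bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals                 # dual prices of the demands
    return res.fun, res.x, duals

def pricing(duals):
    # Unbounded knapsack: the pattern with maximum total dual value.
    best = np.zeros(W + 1)
    choice = [-1] * (W + 1)
    for cap in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap] = best[cap - w] + duals[i]
                choice[cap] = i
    pattern, cap = [0] * len(widths), W
    while cap > 0 and choice[cap] != -1:
        pattern[choice[cap]] += 1
        cap -= widths[choice[cap]]
    return best[W], pattern

# Column generation: solve the master, price out a new pattern, and repeat
# until no pattern with dual value greater than 1 (negative reduced cost) exists.
while True:
    obj, x, duals = solve_master(patterns)
    value, new_pattern = pricing(duals)
    if value <= 1 + 1e-9:
        break
    patterns.append(new_pattern)

print(f"LP bound: {obj:.2f} rolls with {len(patterns)} patterns in the master")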
Abstract:
For the safety assessment of nuclear waste repositories, the possible migration of the radiotoxic waste into the environment must be considered. Since plutonium is the major contributor to the radiotoxicity of spent nuclear waste, it requires special care with respect to its mobilization into the groundwater. Plutonium has one of the most complicated chemistries of all elements: it can coexist in four oxidation states in parallel in one solution. In this work it is shown that in the presence of humic substances it is reduced to Pu(III) and Pu(IV). This work focuses on the interaction of Pu(III) with naturally occurring compounds (humic substances and clay minerals, in particular kaolinite), while Pu(IV) was studied in a parallel doctoral work by Banik (in preparation). As plutonium is expected at extremely low concentrations in the environment, very sensitive methods are needed to monitor its presence and to determine its speciation. Resonance ionization mass spectrometry (RIMS) was used for determining the concentration of Pu in environmental samples, with a detection limit of 10^6-10^7 atoms. For the speciation of plutonium, CE-ICP-MS was routinely used to monitor the behaviour of Pu in the presence of humic substances. In order to reduce the detection limits of the speciation methods, the coupling of CE to RIMS was proposed. The first steps have shown that this can be a powerful tool for studies of Pu under environmental conditions. Furthermore, the first steps in coupling two detectors working in parallel (DAD and ICP-MS) to CE were taken, enabling a precise study of the complexation constants of plutonium with humic substances. The redox stabilization of Pu(III) was studied, and it was determined that NH2OH·HCl can maintain Pu(III) in the reduced form up to pH 5.5-6. The complexation constants of Pu(III) with Aldrich humic acid (AHA) were determined at pH 3 and 4. The log β = 6.2-6.8 found in these experiments was comparable with the literature. The sorption of Pu(III) onto kaolinite was studied in batch experiments, and it was determined that the pH edge was at pH ~ 5.5. The speciation of plutonium on the surface of kaolinite was studied by EXAFS/XANES, and it was determined that the sorbed species was Pu(IV). The influence of AHA on the sorption of Pu(III) onto kaolinite was also investigated. It was determined that at pH < 5 the adsorption is enhanced by the presence of AHA (25 mg/L), while at pH > 6 the adsorption is strongly impaired (depending also on the order of addition of the components), leading to a mobilization of plutonium in solution.