108 results for APPLIED PROBABILITY
Abstract:
According to some estimates, the world's population is expected to grow by about 50% over the next 50 years. Thus, one of the greatest challenges faced by engineering is to find effective options for food storage and conservation. Some researchers have investigated how to design durable buildings for storing and conserving food. Nowadays, developing concrete with adequate mechanical resistance at room temperature is a goal that can be achieved easily. On the other hand, requiring it to perform at low temperatures of approximately −35 °C leaves less room for empiricism, making a suitable dosage method and a careful selection of the constituent materials necessary. This ongoing study addresses these parameters. The concrete presented here was analyzed through non-destructive tests that examine the material properties periodically and verify its physical integrity. Concretes with and without entrained air were studied. The results demonstrated that both are resistant to freezing.
Abstract:
The structural engineering community in Brazil faces new challenges with the recent occurrence of high-intensity tornadoes. Satellite surveillance data show that the area covering the south-east of Brazil, Uruguay, and part of Argentina is one of the world's most tornado-prone areas, second only to the infamous tornado alley in the central United States. The design of structures subject to tornado winds is a typical example of decision making in the presence of uncertainty. Structural design involves finding a good balance between the competing goals of safety and economy. This paper presents a methodology to find the optimum balance between these goals in the presence of uncertainty. Reliability-based risk optimization is used to find the optimal safety coefficient that minimizes the total expected cost of a steel frame communications tower subject to extreme storm and tornado wind loads. The technique is not new, but it is applied to a practical problem of increasing interest to Brazilian structural engineers. The problem is formulated in the partial safety factor format used in current design codes, with an additional partial factor introduced to serve as the optimization variable. The expected cost of failure (or risk) is defined as the product of a limit state exceedance probability and the corresponding limit state exceedance cost. These costs include the costs of repairing, rebuilding, and paying compensation for injury and loss of life. The total expected failure cost is the sum of the individual expected costs over all failure modes. The steel frame communications tower that is the subject of this study has become very common in Brazil due to increasing mobile phone coverage. The study shows that optimum reliability is strongly dependent on the cost (or consequences) of failure. Since failure consequences depend on the actual tower location, it turns out that different optimum designs should be used in different locations. Failure consequences are also different for the different parties involved in the design, construction, and operation of the tower. Hence, it is important that risk is well understood by the parties involved, so that proper contracts can be made. The investigation shows that when non-structural items dominate design costs (e.g., in residential or office buildings) it is not too costly to over-design; this observation is in agreement with the observed practice for non-optimized structural systems. In this situation, it is much easier to lose money by under-design. When structural material cost is a significant part of the design cost (e.g., a concrete dam or bridge), one is likely to lose significant money by over-design. In this situation, a cost-risk-benefit optimization analysis is highly recommended. Finally, the study also shows that under time-varying loads like tornadoes, the optimum reliability is strongly dependent on the selected design life.
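Schematically, the optimization described above can be written as follows, where λ is the additional partial safety factor used as the design variable; this is a sketch of the cost structure stated in the abstract, not the authors' exact formulation:

```latex
C_{\text{total}}(\lambda) = C_{\text{design}}(\lambda)
  + \sum_{i=1}^{n} P_{f,i}(\lambda)\, C_{f,i},
\qquad
\lambda^{*} = \arg\min_{\lambda} C_{\text{total}}(\lambda),
```

with P_{f,i} the exceedance probability of the i-th limit state (which decreases as λ increases) and C_{f,i} the corresponding exceedance cost (repair, rebuilding, compensation).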
Abstract:
Multicomponent white cast iron is a new alloy belonging to the Fe-C-Cr-W-Mo-V system; because of its excellent wear resistance it is used in the manufacture of hot rolling mill rolls. To date, this alloy has been processed by casting, powder metallurgy, and spray forming. The high-velocity oxyfuel process is now also considered for the manufacture of components with this alloy. The effects of substrate, preheating temperature, and coating thickness on the bond strength of the coatings have been determined. Substrates of AISI 1020 steel and of cast iron, with preheating to 150 °C or at room temperature, were used to apply coatings with 200 and 400 μm nominal thickness. The bond strength of the coatings was measured with the pull-off test method, and the failure mode was determined by scanning electron microscopy. Coatings with a thickness of 200 μm applied on preheated AISI 1020 steel substrates presented a bond strength of 87 ± 4 MPa.
Abstract:
Shot peening is a cold-working mechanical process in which a shot stream is propelled against a component surface. Its purpose is to introduce compressive residual stresses on component surfaces to increase fatigue resistance. This process is widely applied to springs because of their cyclic load requirements. This paper presents a numerical model of the shot peening process using the finite element method. The results are compared with experimental measurements of the residual stresses, obtained by the X-ray diffraction technique, in leaf springs submitted to this process. Furthermore, the results are compared with empirical and numerical correlations developed by other authors.
Abstract:
This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM) presents a null determinant only at the natural frequencies. In comparison with the classical DSM, the formulation presented herein has some major advantages: local mode shapes are preserved in the formulation so that, for any positive frequency, the DSM will never be ill-conditioned; and, in the absence of poles, it is possible to employ the secant method for a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by the FEM and by the Wittrick-Williams algorithm.
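Since the improved DSM has a null determinant exactly at the natural frequencies and no poles, the secant method can be iterated directly on the determinant. The sketch below shows only the generic iteration; det_dsm is a hypothetical stand-in for assembling the (scaled) exact dynamic stiffness matrix and taking its determinant, which is where the paper's contribution lies.

```python
def secant_root(f, w0, w1, tol=1e-9, max_iter=100):
    """Secant iteration on f(w) = det(DSM(w)): converges to a natural
    frequency when f has a simple zero and no poles nearby."""
    f0, f1 = f(w0), f(w1)
    for _ in range(max_iter):
        if f1 == f0:                            # flat secant, cannot proceed
            break
        w2 = w1 - f1 * (w1 - w0) / (f1 - f0)    # secant update
        if abs(w2 - w1) < tol:
            return w2
        w0, f0 = w1, f1
        w1, f1 = w2, f(w2)
    return w1

# Hypothetical usage, starting from two initial frequency guesses:
# w_n = secant_root(det_dsm, 10.0, 11.0)
```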
Abstract:
The implementation of confidential contracts between a container liner carrier and its customers, as a result of the Ocean Shipping Reform Act (OSRA) of 1998, demands a revision of the methodology applied in the carrier's marketing and sales planning. The marketing and sales planning process should be more scientific and make better use of operational research tools, since the selection of the customers under contract, the duration of the contracts, the freight rates, and the container imbalances of these contracts are basic factors for the carrier's yield. This work aims to develop a decision support system based on a linear programming model to generate the business plan for a container liner carrier, maximizing the contribution margin of its freight.
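As a toy illustration of the kind of model described (invented numbers and names, not the authors' formulation): choose contracted volumes that maximize total contribution margin subject to slot capacity on each trade leg and to the volume bounds of each contract.

```python
import numpy as np
from scipy.optimize import linprog

margin = np.array([120.0, 95.0, 150.0])   # per-TEU margin of three contracts
legs = np.array([[1.0, 1.0, 0.0],         # slot usage on trade leg A
                 [0.0, 1.0, 1.0]])        # slot usage on trade leg B
capacity = np.array([800.0, 600.0])       # TEU capacity per leg
volume_bounds = [(0, 500), (0, 400), (0, 300)]  # contract volume limits

# linprog minimizes, so negate the margins to maximize the total margin.
res = linprog(-margin, A_ub=legs, b_ub=capacity, bounds=volume_bounds)
print("volumes:", res.x, "total margin:", -res.fun)
```

A full business-plan model would also carry container-imbalance (repositioning) costs and contract-duration effects, which the abstract lists among the basic factors of the carrier's yield.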
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Θ(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Θ(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chain generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of σ-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Θ(n)}, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
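For orientation, a Foster-Lyapunov drift criterion for a PDMP has, in its classical form, the shape below; this is the standard textbook condition, shown as a sketch rather than the paper's exact modified criterion. Here U denotes the extended generator of the PDMP:

```latex
\mathcal{U}V(x) \le -f(x) + b\,\mathbf{1}_{C}(x), \qquad f \ge 1,\; b < \infty,
```

for a measurable function V ≥ 0 and a suitable (petite) set C; a drift of this type toward C is what yields positive Harris recurrence and, with it, ergodicity.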
Abstract:
This work is part of a research effort under way since 2000, whose main objective is to measure small dynamic displacements using L1 GPS receivers. A very sensitive way to detect millimetric periodic displacements is based on the Phase Residual Method (PRM). This method relies on the frequency domain analysis of the phase residuals resulting from the L1 double difference static data processing of two satellites at almost orthogonal elevation angles. In this article, it is proposed to obtain the phase residuals directly from the raw phase observable collected on a short baseline during a limited time span, in lieu of obtaining the residual data file from regular GPS processing programs, which do not always allow the choice of the desired satellites. In order to improve the ability to detect millimetric oscillations, two filtering techniques are introduced. One is autocorrelation, which reduces the phase noise with random time behavior; the other is the running mean, which separates the low-frequency from the high-frequency phase sources. Two trials were carried out to verify the proposed method and filtering techniques: one simulates a 2.5 millimeter vertical antenna displacement, and the second uses GPS data collected during a bridge load test. The results have shown good consistency in detecting millimetric oscillations.
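A minimal sketch of the two filters named above, applied to a phase-residual series; the window length and the simulated signal are illustrative, not the paper's settings:

```python
import numpy as np

def running_mean(x, window):
    """Moving average: retains the low-frequency phase sources; subtracting
    it from the series isolates the high-frequency part."""
    return np.convolve(x, np.ones(window) / window, mode="same")

def autocorrelation(x):
    """Normalized sample autocorrelation: periodic millimetric displacements
    survive the averaging while random phase noise is attenuated."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    return acf / acf[0]

# Illustrative use on a simulated 1 Hz residual series: a 2.5 mm oscillation
# at 0.2 Hz buried in 5 mm random phase noise.
t = np.arange(600.0)
residuals = 0.0025 * np.sin(2 * np.pi * 0.2 * t) + 0.005 * np.random.randn(t.size)
high_freq = residuals - running_mean(residuals, 60)  # remove low-frequency trend
acf = autocorrelation(high_freq)                     # periodicity stands out here
```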
Abstract:
The objective of this manuscript is to discuss the existing barriers to the dissemination of medical guidelines and to present strategies that facilitate the adaptation of the recommendations into clinical practice. The literature shows that it usually takes several years until new scientific evidence is adopted in current practice, even when there is an obvious impact on patients' morbidity and mortality. There are examples where more than thirty years elapsed between the first published case reports on the use of an effective therapy and its routine utilization; that is the case of fibrinolysis for the treatment of acute myocardial infarction. Some of the main barriers to the implementation of new recommendations are: lack of knowledge of a new guideline, personal resistance to change, uncertainty about the efficacy of the proposed recommendation, fear of potential side effects, difficulty in remembering the recommendations, inexistence of institutional policies reinforcing the recommendation, and even economic restraints. In order to overcome these barriers, a strategy that involves a program with multiple tools is always best. That must include the implementation of easy-to-use algorithms, continuing medical education materials and lectures, electronic or paper alerts, tools to facilitate evaluation and prescription, and periodic audits to show results to the practitioners involved in the process. It is also fundamental that the medical societies involved with the specific medical issue support the program for its scientific and ethical soundness. The creation of multidisciplinary committees in each institution and the inclusion of opinion leaders with pro-active and lasting attitudes are the key points for the program's success. In this manuscript we use as an example the implementation of a guideline for venous thromboembolism prophylaxis, but the concepts described here can easily be applied to any other guideline. Therefore, these concepts may be very useful for institutions and services that aim at improving the quality of patient care. Changes in current medical practice recommended by guidelines may take some time. However, with broader participation of opinion leaders and the use of the several tools listed here, they surely have a greater probability of reaching the main objectives: improvement in the medical care provided and in patient safety.
Abstract:
Background: Cerebral palsy (CP) patients have motor limitations that can affect functionality and abilities for activities of daily living (ADL). Health-related quality of life and health status instruments validated for these patients do not directly approach the concepts of functionality or ADL. The Child Health Assessment Questionnaire (CHAQ) seems to be a good instrument to approach this dimension, but it had never been used for CP patients. The purpose of the study was to verify the psychometric properties of the CHAQ applied to children and adolescents with CP. Methods: Parents or guardians of children and adolescents with CP, aged 5 to 18 years, answered the CHAQ. A healthy group of 314 children and adolescents had been recruited during the validation of the Brazilian version of the CHAQ. Data quality, reliability, and validity were studied. Motor function was evaluated by the Gross Motor Function Measure (GMFM). Results: Ninety-six parents/guardians answered the questionnaire. The age of the patients ranged from 5 to 17.9 years (average: 9.3). The rate of missing data was low (< 9.3%). A floor effect was observed in two domains, being higher only in the visual analogue scales (≤ 35.5%). The ceiling effect was significant in all domains and particularly high in patients with quadriplegia (81.8 to 90.9%) and extrapyramidal CP (45.4 to 91.0%). The Cronbach alpha coefficient ranged from 0.85 to 0.95. Validity was appropriate: for discriminant validity, the correlation of the disability index with the visual analogue scales was not significant; for convergent validity, the CHAQ disability index had a strong correlation with the GMFM (0.77); for divergent validity, there was no correlation between the GMFM and the pain and overall evaluation scales; for criterion validity, the GMFM as well as the CHAQ detected differences in the scores among the clinical types of CP (p < 0.01); for construct validity, the patients' disability index score (mean: 2.16; SD: 0.72) was higher than that of the healthy group (mean: 0.12; SD: 0.23) (p < 0.01). Conclusion: CHAQ reliability and validity were adequate for this population. However, further studies are necessary to verify the influence of the ceiling effect on the responsiveness of the instrument.
Abstract:
Objective: To determine whether the magnitude of the force used to induce incisor tooth movement promotes distinct activation of cells in the central amygdala (CEA) and lateral hypothalamus (LH) of rats. The effect of morphine on Fos immunoreactivity (Fos-IR) was also investigated in these nuclei. Materials and Methods: Adult male rats were anesthetized and divided into six groups: anesthetized only (control), without orthodontic appliance (OA), OA without force, OA activated with 30 g or 70 g, and OA with 70 g in animals pretreated with morphine (2 mg/kg, intraperitoneal). Three hours after the onset of the experiment the rats were reanesthetized and perfused with 4% paraformaldehyde. The brains were removed and fixed, and sections containing the CEA and LH were processed for Fos protein immunohistochemistry. Results: In the control group, the intramuscular injection of a ketamine/xylazine mixture did not induce Fos-IR cells in the CEA or the LH. Likewise, the without-force group showed little Fos-IR. In the 70 g group, however, Fos-IR was the greatest observed (P < .05, Tukey) in the CEA and LH compared with the other groups. In the 30 g group, Fos-IR did not differ from the control, without-OA, and without-force groups. Furthermore, pretreatment with morphine in the 70 g group reduced Fos-IR in these regions. Conclusions: Tooth movement promotes Fos-IR in the CEA and LH according to the magnitude of the force applied. (Angle Orthod. 2010;80:111-115.)
Abstract:
Aims. In this work, we describe the pipeline for the fast supervised classification of light curves observed by the CoRoT exoplanet CCDs. We present the classification results obtained for the first four measured fields, which represent one year of in-orbit operation. Methods. The basis of the adopted supervised classification methodology has been described in detail in a previous paper, as has its application to the OGLE database. Here, we present the modifications of the algorithms and of the training set made to optimize the performance when applied to the CoRoT data. Results. Classification results are presented for the observed fields IRa01, SRc01, LRc01, and LRa01 of the CoRoT mission. Statistics on the number of variables and the number of objects per class are given, and typical light curves of high-probability candidates are shown. We also report on new stellar variability types discovered in the CoRoT data. The full classification results are publicly available.
Abstract:
We analyze the irreversibility and the entropy production in nonequilibrium interacting particle systems described by a Fokker-Planck equation through the use of a suitable master equation representation. The irreversible character is provided either by nonconservative forces or by contact with heat baths at distinct temperatures. The expression for the entropy production is deduced from a general definition, related to the probability of a trajectory in phase space and of its time reversal, that makes no a priori reference to the dissipated power. Our formalism is applied to calculate the heat conductance in a simple system consisting of two Brownian particles, each one in contact with a heat reservoir. We also show the connection between the definition of the entropy production rate and the Jarzynski equality.
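In a master equation representation with rates W(x'|x) for jumps from x to x', the entropy production rate obtained from the trajectory/time-reversal definition takes the standard Schnakenberg form, shown here for orientation rather than as the authors' exact expression:

```latex
\Pi = \frac{1}{2} \sum_{x,\,x'} \bigl[ W(x'|x)\,P(x) - W(x|x')\,P(x') \bigr]
      \ln \frac{W(x'|x)\,P(x)}{W(x|x')\,P(x')} \;\ge\; 0,
```

which vanishes if and only if detailed balance holds, i.e., when the dynamics is reversible.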
Abstract:
The structure of probability currents is studied for the dynamical network obtained after consecutive contractions of two-state, nonequilibrium lattice systems. This procedure allows us to investigate the transition rates between configurations on small clusters and highlights some relevant effects of lattice symmetries on the elementary transitions that are responsible for entropy production. A method is suggested to estimate the entropy production at different levels of approximation (cluster sizes), as demonstrated for the two-dimensional contact process with mutation.
Abstract:
We present a function that fits well the probability density of return times between two consecutive visits of a chaotic trajectory to finite-size regions in phase space. It deviates from the exponential statistics by a small power-law term, a term that represents the deterministic manifestation of the dynamics. We also show how one can quickly and easily estimate the Kolmogorov-Sinai entropy and the short-term correlation function by observing highly probable returns. Our analyses are performed numerically in the Hénon map and experimentally in a Chua's circuit. Finally, we discuss how our approach can be used to treat data coming from experimental complex systems and for technological applications. © 2009 American Institute of Physics. [doi: 10.1063/1.3263943]
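A minimal numerical sketch of the return-time measurement described above for the Hénon map; the box size and orbit length are illustrative, not the paper's settings:

```python
import numpy as np

def henon_orbit(n, a=1.4, b=0.3, x0=0.1, y0=0.1, transient=1000):
    """Iterate the Henon map x' = 1 - a*x**2 + y, y' = b*x."""
    x, y = x0, y0
    for _ in range(transient):          # discard the transient
        x, y = 1.0 - a * x * x + y, b * x
    orbit = np.empty((n, 2))
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        orbit[i] = x, y
    return orbit

def return_times(orbit, center, eps):
    """Times between consecutive visits of the orbit to the square box
    of half-width eps centered at `center`."""
    inside = np.all(np.abs(orbit - center) < eps, axis=1)
    return np.diff(np.flatnonzero(inside))

orbit = henon_orbit(500_000)
center = orbit[0]            # a point on the attractor, so the box is visited
taus = return_times(orbit, center, eps=0.05)
pdf, edges = np.histogram(taus, bins=50, density=True)  # empirical density p(tau)
```

The histogram pdf is what the proposed function would be fitted to; shrinking eps probes the small power-law correction to the exponential statistics.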