994 results for "Maximum loading points"
Abstract:
The modal analysis of a structural system consists of computing its vibration modes. Estimating these modes experimentally requires exciting the system with a measured or known input and then measuring the system output at different points using sensors; the inputs and outputs are then used to compute the modes of vibration. When the system is a large structure such as a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and uncontrolled disturbances still act on the system during the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, ...) or operational loads (traffic, human loading, ...). This procedure is usually called Operational Modal Analysis (OMA) and in general consists of fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise); the modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail, its application to vibration data is analysed, and it is then compared with another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys optimal statistical properties that make it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA.
In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model is used to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing each record separately and then averaging, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple sensor setups). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Last, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes appear in the identification process. The idea is to measure the response of the structure under different inputs; the parameters common to all the data are then assumed to correspond to the structure (modes of vibration), while the parameters found only in a specific test correspond to the input of that test. The problem is solved using the proposed state space model and the EM algorithm.
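In the state space setting described above, once the discrete-time state matrix has been estimated (by EM or any other method), the modes of vibration follow from its eigenvalues. A minimal sketch, assuming a Python/NumPy environment and a hypothetical estimated matrix `A` sampled every `dt` seconds; the single-mode toy system is illustrative, not taken from the thesis:

```python
import numpy as np
from scipy.linalg import expm

def modal_parameters(A, dt):
    """Natural frequencies (Hz) and damping ratios from a discrete-time state matrix."""
    lam = np.linalg.eigvals(A)
    mu = np.log(lam) / dt              # continuous-time poles
    freq = np.abs(mu) / (2.0 * np.pi)  # natural frequencies in Hz
    damping = -np.real(mu) / np.abs(mu)
    keep = np.imag(mu) > 0             # one pole per complex-conjugate pair
    return freq[keep], damping[keep]

# Toy example: one mode at 2 Hz with 1% damping, discretised at 100 Hz
dt = 0.01
wn, z = 2 * np.pi * 2.0, 0.01
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * z * wn]])  # continuous-time system
A = expm(Ac * dt)                                   # exact discretisation
f, d = modal_parameters(A, dt)
print(f, d)   # ≈ [2.] [0.01]
```

The same extraction applies unchanged to the multi-setup and multi-record models, since all of them share the state matrix parametrisation.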
Abstract:
Didanosine-loaded chitosan microspheres were developed by applying a surface-response methodology with a modified Maximum Likelihood Classification. The operational conditions were optimized with the aim of preserving the active form of didanosine (ddI), which is sensitive to acid pH, and of obtaining a modified-release, mucoadhesive formulation. The drug was loaded into the chitosan microspheres by the ionotropic gelation technique, with sodium tripolyphosphate (TPP) as cross-linking agent and magnesium hydroxide (Mg(OH)2) to ensure the stability of ddI. The optimization conditions were set using the surface-response methodology and applying the Maximum Likelihood Classification, with the initial chitosan, TPP and ddI concentrations as the independent variables. The maximum ddI loading in the microspheres (1433 mg of ddI/g chitosan) was obtained with 2% (w/v) chitosan and 10% TPP. The microspheres had an average diameter of 11.42 μm, and ddI was gradually released over 2 h in simulated enteric fluid.
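The surface-response step can be illustrated by fitting a second-order polynomial to loading measurements. The design points and responses below are invented for illustration only (they are not the study's data); only the reported optimum (2% chitosan, 10% TPP, 1433 mg/g) is taken from the abstract:

```python
import numpy as np

# Hypothetical design points: chitosan % (w/v), TPP %, and measured ddI
# loading (mg ddI / g chitosan). Illustrative values, not the paper's data.
X = np.array([[1.0, 5.0], [1.0, 10.0], [2.0, 5.0], [2.0, 10.0], [1.5, 7.5]])
y = np.array([900.0, 1100.0, 1200.0, 1433.0, 1150.0])

# Second-order response surface: y ~ b0 + b1*c + b2*t + b3*c^2 + b4*t^2 + b5*c*t
c, t = X[:, 0], X[:, 1]
D = np.column_stack([np.ones_like(c), c, t, c**2, t**2, c * t])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)  # min-norm least squares fit

def predict(chitosan, tpp):
    return np.array([1.0, chitosan, tpp, chitosan**2, tpp**2, chitosan * tpp]) @ beta

print(float(predict(2.0, 10.0)))  # recovers the observed optimum response
```

In a real study the fitted surface would then be searched (analytically or numerically) for the factor combination maximising the loading.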
Abstract:
Purpose: The objective of this study was to evaluate the stress on the cortical bone around single-body dental implants supporting a mandibular complete fixed denture with a rigid (Neopronto System, Neodent) or semirigid (Barra Distal System, Neodent) splinting system. Methods and Materials: Stress levels on several system components were analyzed through finite element analysis, focusing on stress concentration at the cortical bone around the implants after simulation of axial and oblique occlusal loading applied to the last cantilever element. Results: Semirigid implant splinting generated lower von Mises stress in the cortical bone under axial loading, whereas rigid splinting generated higher von Mises stress under oblique loading. Conclusion: The use of a semirigid system for rehabilitation of edentulous mandibles by means of an immediate implant-supported fixed complete denture is recommended, because it reduces stress concentration in the cortical bone; as a consequence, bone level is better preserved and implant survival is improved. Nevertheless, in both situations cortical bone integrity was protected, because the maximum stress levels found were lower than those reported as harmful in the literature. The maximum stress limit for cortical bone (167 MPa) represents the threshold between the elastic and plastic states: if a force is applied to an object and no permanent deformation occurs, the elastic threshold has not been surpassed and structural integrity is kept, whereas a force beyond the plastic threshold causes permanent deformation.
In cortical bone, such deformation marks the beginning of bone resorption and/or remodeling processes, which, according to our simulated loading, would not occur. (Implant Dent 2010;19:39-49)
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
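The univariate binned-data EM can be sketched compactly. The version below is a simplified illustration, not the paper's algorithm: responsibilities are computed from exact bin probabilities, but the M-step approximates within-bin moments by bin midpoints instead of evaluating the multidimensional integrals exactly, and the data are simulated:

```python
import numpy as np
from scipy.stats import norm

def em_binned_mixture(edges, counts, K=2, iters=200):
    """EM for a K-component Gaussian mixture observed only as bin counts.

    E-step uses exact bin probabilities; M-step uses a bin-midpoint
    approximation to the within-bin moments (a common simplification).
    """
    mids = 0.5 * (edges[:-1] + edges[1:])
    n = counts.sum()
    w = np.full(K, 1.0 / K)
    # spread the initial means over the empirical quantiles
    mu = np.quantile(np.repeat(mids, counts), np.linspace(1, 2 * K - 1, K) / (2 * K))
    sd = np.full(K, (edges[-1] - edges[0]) / 4.0)
    for _ in range(iters):
        # E-step: P(component k | bin j) from the probability mass in each bin
        p = np.array([wk * (norm.cdf(edges[1:], m, s) - norm.cdf(edges[:-1], m, s))
                      for wk, m, s in zip(w, mu, sd)])          # shape (K, J)
        r = p / np.maximum(p.sum(axis=0, keepdims=True), 1e-300)
        # M-step with midpoint approximation
        nk = (r * counts).sum(axis=1)
        w = nk / n
        mu = (r * counts * mids).sum(axis=1) / nk
        sd = np.sqrt((r * counts * (mids - mu[:, None]) ** 2).sum(axis=1) / nk)
    return w, mu, sd

# Two well-separated components, binned into 40 bins
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 5000), rng.normal(3, 1, 5000)])
counts, edges = np.histogram(x, bins=40)
w, mu, sd = em_binned_mixture(edges, counts)
print(np.sort(mu))   # roughly [-3, 3]
```

The multivariate method of the paper replaces the one-dimensional CDF differences with integrals of the component densities over each multidimensional bin, which is where the proposed numerical speed-ups matter.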
Abstract:
DESIGN: A randomized controlled trial. OBJECTIVE: To investigate the immediate effects on pressure pain thresholds over latent trigger points (TrPs) in the masseter and temporalis muscles, and on active mouth opening, following atlanto-occipital joint thrust manipulation or a soft tissue manual intervention targeted to the suboccipital muscles. BACKGROUND: Previous studies have described hypoalgesic effects of neck manipulative interventions over TrPs in the cervical musculature. There is a lack of studies analyzing these mechanisms over TrPs of muscles innervated by the trigeminal nerve. METHODS: One hundred twenty-two volunteers, 31 men and 91 women, between the ages of 18 and 30 years, with latent TrPs in the masseter muscle, were randomly divided into 3 groups: a manipulative group who received an atlanto-occipital joint thrust, a soft tissue group who received an inhibition technique over the suboccipital muscles, and a control group who did not receive an intervention. Pressure pain thresholds over latent TrPs in the masseter and temporalis muscles, and active mouth opening, were assessed pretreatment and 2 minutes posttreatment by a blinded assessor. Mixed-model analyses of variance (ANOVA) were used to examine the effects of the interventions on each outcome, with group as the between-subjects variable and time as the within-subjects variable. The primary analysis was the group-by-time interaction. RESULTS: The 2-by-3 mixed-model ANOVA revealed a significant group-by-time interaction for changes in pressure pain thresholds over masseter (P<.01) and temporalis (P=.003) muscle latent TrPs and also for active mouth opening (P<.001) in favor of the manipulative and soft tissue groups. Between-group effect sizes were small.
CONCLUSIONS: The application of an atlanto-occipital thrust manipulation or a soft tissue technique targeted to the suboccipital muscles led to an immediate increase in pressure pain thresholds over latent TrPs in the masseter and temporalis muscles and an increase in maximum active mouth opening. Nevertheless, the effects of both interventions were small, and future studies are required to elucidate the clinical relevance of these changes. LEVEL OF EVIDENCE: Therapy, level 1b. J Orthop Sports Phys Ther 2010;40(5):310-317. doi:10.2519/jospt.2010.3257. KEYWORDS: cervical manipulation, muscle trigger points, neck, TMJ, upper cervical.
Abstract:
In order to correctly assess biaxial fatigue material properties, one must experimentally test different load conditions and stress levels. With the rise of new in-plane biaxial fatigue testing machines that use smaller and more efficient electrical motors instead of conventional hydraulic machines, it is necessary to reduce the specimen size and to ensure that the specimen geometry is appropriate for the installed load capacity. At present there are no standard specimen geometries, and the indications in the literature on how to design an efficient test specimen are insufficient. The main goal of this paper is to present a methodology for obtaining an optimal cruciform specimen geometry, with thickness reduction in the gauge area, appropriate for fatigue crack initiation, as a function of the base material sheet thickness used to build the specimen. The geometry is optimized for maximum stress using several parameters, ensuring that in the gauge area the stress is uniform and maximum under the two limit phase-shift loading conditions (δ = 0° and δ = 180°), so that fatigue damage always initiates at the center of the specimen, avoiding failure outside this region. Using the Renard series of preferred numbers for the base material sheet thickness as a reference, the remaining geometry parameters are optimized using a derivative-free methodology, the direct multisearch (DMS) method. The final optimal geometry as a function of the base material sheet thickness is proposed as a guideline for cruciform specimen design and as a possible contribution to a future standard on in-plane biaxial fatigue tests. © 2014, Gruppo Italiano Frattura. All rights reserved.
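As a rough illustration of the derivative-free idea, the sketch below uses a simple compass (pattern) search rather than the actual DMS method, and a toy quadratic surrogate in place of the finite element stress evaluation; the parameter names and the fictitious optimum are assumptions, not values from the paper:

```python
import numpy as np

def toy_objective(p):
    """Toy surrogate for 'stress non-uniformity in the gauge area'.

    Hypothetical geometry parameters with a fictitious optimum at (5.0, 0.6);
    a real run would evaluate an FE model of the cruciform specimen here.
    """
    fillet_radius, gauge_thickness = p
    return (fillet_radius - 5.0) ** 2 + 10.0 * (gauge_thickness - 0.6) ** 2

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass search: poll +/- each coordinate, shrink the mesh on failure."""
    x = np.asarray(x0, float)
    fx = f(x)
    n = len(x)
    while step > tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5  # shrink the mesh, as in directional search methods
    return x, fx

x, fx = pattern_search(toy_objective, [1.0, 1.0])
print(np.round(x, 3))   # ≈ [5.0, 0.6]
```

Like DMS, this family of methods needs only objective values, which is why it suits optimization driven by black-box finite element simulations.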
Abstract:
1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. 
To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
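The error treatment described in point 2 can be reproduced in a few lines. A sketch, assuming occurrence coordinates in projected kilometre units (with geographic lon/lat data the 5 km offset would first need conversion to degrees):

```python
import numpy as np

rng = np.random.default_rng(42)

def degrade(coords_km, sd_km=5.0):
    """Spatially degrade an (n, 2) coordinate array: add Gaussian noise
    with mean 0 and standard deviation 5 km to each coordinate."""
    return coords_km + rng.normal(0.0, sd_km, size=coords_km.shape)

# Three illustrative occurrence records (projected coordinates in km)
occurrences = np.array([[100.0, 250.0], [102.5, 248.0], [98.0, 252.0]])
noisy = degrade(occurrences)
shift = np.linalg.norm(noisy - occurrences, axis=1)
print(shift.round(1))   # per-point displacement, typically a few km
```

Models calibrated on `occurrences` (control) and on `noisy` (error treatment) can then be compared to quantify the robustness of each modelling technique.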
Abstract:
BACKGROUND: The main objective of this study was to explore the effect of acute creatine (Cr) ingestion on the secretion of human growth hormone (GH). METHODS: In a comparative cross-sectional study, 6 healthy male subjects ingested, at rest, a single dose of 20 g creatine (Cr-test) vs a control (c-test). Serum Cr, creatinine and GH concentrations were measured for 6 hours after Cr ingestion (Cr-test). RESULTS: During the Cr-test, all subjects showed a significant stimulation of GH (p<0.05), but with large interindividual variability in the GH response: the difference between Cr-test and c-test averaged 83% (SD 45%). For the majority of subjects, the maximum GH concentration occurred between 2 and 6 hours after acute Cr ingestion. CONCLUSIONS: At rest and at high dosages, Cr enhances GH secretion, mimicking the response to intense exercise, which also stimulates GH secretion. The acute body weight gain and strength increases observed after Cr supplementation should be interpreted in the light of this indirect anabolic property of Cr.
Abstract:
Purpose: To load embolization particles (DC Beads, Biocompatibles, UK) with an anti-angiogenic agent (sunitinib) and to characterize the in vitro properties of the bead-drug association. Materials: DC Beads of 100-300 µm were loaded using a specially designed 10 mg/ml sunitinib solution. The loading profile was studied by spectrophotometry of the supernatant solution at 430 nm at different time points. The release experiment was performed using USP method 4 (flow-through cell), with spectrophotometric determination at 430 nm used to measure the drug concentration in the eluting solution. Results: More than 98% of the drug was loaded into the DC Beads in 2 hours. The maximum concentration was 20 mg sunitinib/ml of DC Beads. The loaded beads gradually released 59% of the loaded drug into the eluting solution, by an ionic exchange mechanism, over 6 hours. Conclusions: DC Beads can be loaded with the multi-tyrosine kinase inhibitor sunitinib using a specially designed solution, achieving a high drug payload. The loaded beads released the drug into an ionic eluting solution with an interesting release profile.
Abstract:
It is a common macroscopic observation that knotted ropes or fishing lines under tension easily break at the knot. However, precisely localizing the breakage point in knotted macroscopic strings is a difficult task. In the present work, the tightening of knots was simulated numerically, the strength of different knots was compared experimentally, and a high-speed camera was used to localize precisely the site where knotted macroscopic strings break. In the case of knotted spaghetti, the breakage occurs at the position of high curvature at the entry to the knot. This localization results from the joint contributions of loading, bending and friction forces to the complex process of knot breakage. The present simulations and experiments are in agreement with recent molecular dynamics simulations of a knotted polymer chain and with experiments performed on actin and DNA filaments. The strength of the string is greatly reduced (down to 50%) by the presence of a knot, thereby reducing the resistance to tension of all materials containing chains of any sort. The present work with macroscopic strings reveals some important aspects that are not accessible in experiments with microscopic chains.
Abstract:
Summary: Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that if the data follow the model, then asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as that of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WML based on each of them is studied. In a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Moreover, the performance of the WML is rather stable under a change of the MDE, even when the MDEs themselves behave very differently. Two applications of the WML to real data are considered; in both, the necessity of a robust estimator is clear, as the maximum likelihood estimator is badly corrupted by the presence of a few outliers. The procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
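The two-phase idea can be illustrated on a Poisson model. This is a deliberately simplified sketch, not the paper's procedure: the robust initial estimator is a plain median, standing in for the minimum disparity estimators actually used, and the adaptive weighting is reduced to hard 0/1 weights at a fixed tail-probability cutoff:

```python
import numpy as np
from scipy.stats import poisson

def wml_poisson(x, cutoff=1e-3):
    """Two-phase robust fit of a Poisson mean.

    Phase 1: robust (but inefficient) initial estimate via the median.
    Phase 2: downweight observations implausible under the initial fit,
    then compute the weighted MLE with the remaining weights.
    """
    x = np.asarray(x, float)
    lam0 = np.median(x)                     # robust initial estimate
    # two-sided tail probability of each observation under Poisson(lam0)
    tail = np.minimum(poisson.cdf(x, lam0), poisson.sf(x - 1, lam0))
    w = np.where(tail < cutoff, 0.0, 1.0)   # hard 0/1 weights for simplicity
    return (w * x).sum() / w.sum()          # weighted MLE of the Poisson mean

clean = np.repeat([2, 3, 4], [30, 40, 30])        # roughly Poisson(3)-like
contaminated = np.concatenate([clean, [50, 60]])  # two gross outliers
print(round(float(np.mean(contaminated)), 2))  # plain MLE: 4.02, pulled up
print(wml_poisson(contaminated))               # weighted MLE: 3.0
```

The plain MLE (the sample mean) is dragged toward the outliers, while the weighted estimate recovers the mean of the uncontaminated part, mirroring the behaviour described in the real-data examples above.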