974 results for Maximum loading point
Abstract:
The modal analysis of a structural system consists of computing its vibrational modes. The experimental way to estimate these modes is to excite the system with a measured or known input and then to measure the system output at different points using sensors. Finally, the system inputs and outputs are used to compute the modes of vibration. When the system is a large structure such as a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and uncontrolled disturbances still act on the structure at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient excitations (wind, earthquakes, ...) or operational loads (traffic, human loading, ...). This procedure is usually called Operational Modal Analysis (OMA) and, in general, consists of fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise). The modes of vibration are then computed from the estimated model.

The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for maximum likelihood estimation of the state-space model in the field of OMA. The algorithm is described in detail, and its application to vibration data is analysed. It is then compared with another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. In this work, three additional state-space models are proposed and estimated using the EM algorithm:

• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing each record separately and then computing averages, the EM algorithm is extended to the joint estimation of the proposed state-space model using all the available data.

• The second state-space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple sensor setups). Here, the proposed state-space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.

• Finally, a state-space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes appear in the identification process. The idea is to measure the response of the structure to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found only in a specific test correspond to the input of that test. The problem is solved using the proposed state-space model and the EM algorithm.
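To make the estimation step above concrete, the following is a minimal sketch (not the thesis's implementation) of one EM iteration for the basic discrete-time stochastic state-space model, with a Kalman filter/RTS smoother as the E-step and the standard Shumway-Stoffer/Ghahramani-Hinton updates as the M-step; a final function reads natural frequencies and damping ratios off the eigenvalues of the state matrix. All matrix names, dimensions and the sampling interval dt are illustrative assumptions.

```python
# Minimal sketch (not the thesis's implementation) of one EM iteration for
#     x_t = A x_{t-1} + w_t,   w_t ~ N(0, Q)
#     y_t = C x_t     + v_t,   v_t ~ N(0, R)
# E-step: Kalman filter + RTS smoother; M-step: Shumway-Stoffer updates.
import numpy as np

def kalman_smoother(y, A, C, Q, R, x0, P0):
    """E-step. y has shape (T, n_outputs); returns smoothed means/covariances."""
    T, n = y.shape[0], A.shape[0]
    xp, Pp = np.zeros((T, n)), np.zeros((T, n, n))   # one-step predictions
    xf, Pf = np.zeros((T, n)), np.zeros((T, n, n))   # filtered estimates
    x_prev, P_prev = x0, P0
    for t in range(T):
        xp[t], Pp[t] = A @ x_prev, A @ P_prev @ A.T + Q
        S = C @ Pp[t] @ C.T + R
        K = Pp[t] @ C.T @ np.linalg.inv(S)
        xf[t] = xp[t] + K @ (y[t] - C @ xp[t])
        Pf[t] = Pp[t] - K @ C @ Pp[t]
        x_prev, P_prev = xf[t], Pf[t]
    xs, Ps = xf.copy(), Pf.copy()                    # smoothed (backward pass)
    Pcross = np.zeros((T, n, n))                     # Cov(x_t, x_{t-1} | all data)
    for t in range(T - 2, -1, -1):
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
        Pcross[t + 1] = Ps[t + 1] @ J.T              # lag-one smoothed covariance
    return xs, Ps, Pcross

def em_step(y, A, C, Q, R, x0, P0):
    """One EM iteration: returns updated model matrices and initial state."""
    T = y.shape[0]
    xs, Ps, Pcross = kalman_smoother(y, A, C, Q, R, x0, P0)
    Ext = Ps + np.einsum('ti,tj->tij', xs, xs)       # E[x_t x_t' | data]
    S11, S00 = Ext[1:].sum(0), Ext[:-1].sum(0)
    S10 = (Pcross[1:] + np.einsum('ti,tj->tij', xs[1:], xs[:-1])).sum(0)
    A_new = S10 @ np.linalg.inv(S00)
    Q_new = (S11 - A_new @ S10.T) / (T - 1)
    Syx = y.T @ xs                                   # sum_t y_t x_t'
    C_new = Syx @ np.linalg.inv(Ext.sum(0))
    R_new = (y.T @ y - C_new @ Syx.T) / T
    return A_new, C_new, Q_new, R_new, xs[0], Ps[0]  # re-initialize x0, P0

def modal_parameters(A, dt):
    """Natural frequencies (Hz) and damping ratios from the eigenvalues of A."""
    mu = np.log(np.linalg.eigvals(A).astype(complex)) / dt   # continuous-time poles
    freqs = np.abs(mu) / (2 * np.pi)
    damping = -mu.real / np.abs(mu)
    return freqs, damping
```

In practice the EM step would be iterated until the log-likelihood stabilizes, and the modal parameters would then be extracted from the converged state matrix.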
Abstract:
Didanosine-loaded chitosan microspheres were developed by applying a surface-response methodology together with a modified Maximum Likelihood Classification. The operational conditions were optimized with the aim of preserving the active form of didanosine (ddI), which is sensitive to acidic pH, and of obtaining a modified, mucoadhesive formulation. The drug was loaded into the chitosan microspheres by the ionotropic gelation technique, with sodium tripolyphosphate (TPP) as cross-linking agent and magnesium hydroxide (Mg(OH)2) to ensure the stability of ddI. The optimization conditions were set using the surface-response methodology and the Maximum Likelihood Classification, with the initial chitosan, TPP and ddI concentrations as the independent variables. The maximum ddI loading in the microspheres (1433 mg of ddI/g chitosan) was obtained with 2% (w/v) chitosan and 10% TPP. The microspheres had an average diameter of 11.42 μm, and ddI was gradually released over 2 h in simulated enteric fluid.
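As a purely illustrative companion to the surface-response optimization described above, the sketch below fits a full quadratic response surface to hypothetical design points (the chitosan, TPP and ddI levels and loadings are invented, not the study's data) and locates the factor combination with the highest predicted drug loading.

```python
# Purely illustrative response-surface fit over invented design points.
import numpy as np

# factors: chitosan % (w/v), TPP %, relative ddI concentration
X = np.array([[1.0,  5.0, 1.0], [2.0,  5.0, 1.0], [1.0, 10.0, 1.0], [2.0, 10.0, 1.0],
              [1.0,  5.0, 2.0], [2.0,  5.0, 2.0], [1.0, 10.0, 2.0], [2.0, 10.0, 2.0],
              [1.5,  7.5, 1.5], [1.0,  7.5, 1.5], [2.0,  7.5, 1.5], [1.5,  5.0, 1.5]])
y = np.array([820., 1100., 950., 1433., 870., 1150., 1010., 1380.,
              1200., 980., 1320., 1000.])          # mg ddI per g chitosan (hypothetical)

def design_matrix(X):
    """Intercept, linear, interaction and squared terms of a quadratic model."""
    d = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# crude grid search for the predicted optimum inside the experimental region
grid = np.array([[c, t, d] for c in np.linspace(1.0, 2.0, 21)
                 for t in np.linspace(5.0, 10.0, 21)
                 for d in np.linspace(1.0, 2.0, 11)])
best = grid[np.argmax(design_matrix(grid) @ beta)]
print("predicted optimum (chitosan %, TPP %, ddI level):", best)
```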
Abstract:
The purpose of this study was to determine if performing isometric 3-point kneeling exercises on a Swiss ball influenced the isometric force output and EMG activities of the shoulder muscles when compared with performing the same exercises on a stable base of support. Twenty healthy adults performed the isometric 3-point kneeling exercises with the hand placed either on a stable surface or on a Swiss ball. Surface EMG was recorded from the posterior deltoid, pectoralis major, biceps brachii, triceps brachii, upper trapezius, and serratus anterior muscles using surface differential electrodes. All EMG data were reported as percentages of the average root mean square (RMS) values obtained in maximum voluntary contractions for each muscle studied. The highest load value was obtained during exercise on a stable surface. A significant increase was observed in the activation of glenohumeral muscles during exercises on a Swiss ball. However, there were no differences in EMG activities of the scapulothoracic muscles. These results suggest that exercises performed on unstable surfaces may provide muscular activity levels similar to those performed on stable surfaces, without the need to apply greater external loads to the musculoskeletal system. Therefore, exercises on unstable surfaces may be useful during the process of tissue regeneration.
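The EMG normalization described above (activity expressed as a percentage of the RMS obtained in a maximum voluntary contraction) can be summarized in a few lines; the sketch below uses synthetic signals and is not the study's processing pipeline.

```python
# Synthetic example: express task EMG as a percentage of the RMS recorded
# during a maximum voluntary contraction (MVC).
import numpy as np

def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

rng = np.random.default_rng(0)
mvc_emg  = rng.normal(0.0, 1.0, 2000)   # EMG during the MVC trial (arbitrary units)
task_emg = rng.normal(0.0, 0.4, 2000)   # EMG during the kneeling exercise

activation_pct = 100.0 * rms(task_emg) / rms(mvc_emg)
print(f"muscle activation: {activation_pct:.1f} %MVC")
```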
Abstract:
The concentration of hydrogen peroxide is an important parameter in the decoloration of azo dyes by advanced oxidation processes, particularly oxidation via UV/H2O2. Above a certain concentration, hydrogen peroxide acts as a scavenger of the hydroxyl radicals it is meant to generate, and the oxidizing power of the system decreases. The critical point of the process (the maximum amount of hydrogen peroxide to be added) was determined through a "thorough mapping", or discretization, of the target region, based on the maximization of an objective function (the pseudo-first-order reaction kinetics constant). The discretization of the operational region was carried out with a feedforward backpropagation neural model. The neural model obtained showed a remarkable correlation coefficient between real and predicted values of the absorbance variable, above 0.98. In the present work, the neural model had as its phenomenological basis the decoloration of the Acid Brown 75 dye. The critical point of hydrogen peroxide addition, expressed as the mass ratio (F) between the hydrogen peroxide mass and the dye mass, was established in the interval 50 < F < 60. (C) 2007 Elsevier B.V. All rights reserved.
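The idea of locating the critical point by maximizing the pseudo-first-order kinetic constant can be illustrated as follows; the decoloration curves are synthetic and a simple least-squares fit stands in for the paper's neural model of the operational region.

```python
# Illustrative only: estimate the pseudo-first-order rate constant k from an
# absorbance decay at each H2O2/dye mass ratio F and pick the F maximizing k.
import numpy as np

def rate_constant(times, absorbance):
    """Slope of ln(A0/A) versus t, i.e. k for pseudo-first-order kinetics."""
    return np.polyfit(times, np.log(absorbance[0] / absorbance), 1)[0]

times = np.linspace(0.0, 60.0, 13)                  # minutes
F_values = np.array([30, 40, 50, 55, 60, 70, 80])   # H2O2 mass / dye mass

# synthetic behaviour: k rises with F, then falls once excess H2O2 starts
# scavenging the hydroxyl radicals (the effect described in the abstract)
k_true = 0.08 * np.exp(-((F_values - 55.0) / 25.0) ** 2)
curves = [np.exp(-k * times) for k in k_true]

k_est = [rate_constant(times, c) for c in curves]
print("critical F (maximum k):", F_values[int(np.argmax(k_est))])
```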
Abstract:
Purpose: The objective of this study was to evaluate the stress on the cortical bone around single-body dental implants supporting a mandibular complete fixed denture with a rigid (Neopronto System, Neodent) or semirigid (Barra Distal System, Neodent) splinting system. Methods and Materials: Stress levels on several system components were analyzed through finite element analysis, focusing on the stress concentration in the cortical bone around the implants after simulation of axial and oblique occlusal loading applied to the last cantilever element. Results: The results showed that semirigid implant splinting generated lower von Mises stress in the cortical bone under axial loading, whereas rigid implant splinting generated higher von Mises stress in the cortical bone under oblique loading. Conclusion: The use of a semirigid system for the rehabilitation of edentulous mandibles by means of an immediate implant-supported fixed complete denture is recommended, because it reduces stress concentration in the cortical bone; as a consequence, bone level is better preserved and implant survival is improved. Nevertheless, in both situations the integrity of the cortical bone was protected, because the maximum stress levels found were lower than those reported in the literature as harmful. The maximum stress limit for cortical bone (167 MPa) represents the threshold between the elastic and plastic states of the material: if a force is applied to an object and no permanent deformation occurs, the elastic threshold has not been exceeded and the structural integrity is kept, whereas a force above that threshold causes permanent deformation. In cortical bone, this marks the onset of bone resorption and/or remodeling processes which, for the loads simulated here, would not occur. (Implant Dent 2010;19:39-49)
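For reference, the comparison against the 167 MPa cortical limit quoted above amounts to evaluating the von Mises stress of an element's stress tensor; the sketch below uses an invented tensor, not the study's FEA output.

```python
# Minimal reference sketch (invented stress tensor, not the study's FEA output):
# von Mises stress of an element and comparison with the 167 MPa cortical limit.
import numpy as np

def von_mises(s):
    """s is the symmetric 3x3 Cauchy stress tensor, in MPa."""
    sx, sy, sz = s[0, 0], s[1, 1], s[2, 2]
    txy, tyz, tzx = s[0, 1], s[1, 2], s[2, 0]
    return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

stress = np.array([[60.0, 12.0,  5.0],
                   [12.0, 35.0,  8.0],
                   [ 5.0,  8.0, 20.0]])   # MPa, hypothetical element result

sigma_vm = von_mises(stress)
print(f"von Mises stress: {sigma_vm:.1f} MPa ->",
      "below" if sigma_vm < 167.0 else "above", "the 167 MPa cortical bone limit")
```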
Abstract:
The Lewis dwarf (DW) rat was used as a model to test the hypothesis that growth hormone (GH) is permissive for new bone formation induced by mechanical loading in vivo. Adult female Lewis DW rats aged 6.2 +/- 0.1 months (187 +/- 18 g) were allocated to four vehicle groups (DW), four GH treatment groups at 32.5 µg/100 g body mass (DWGH1), and four GH treatment groups at 65 µg/100 g (DWGH2). Saline vehicle or GH was injected intraperitoneally (ip) at 6:30 p.m. and 6:30 a.m. before mechanical loading of tibias at 7:30 a.m. A single period of 300 cycles of four-point bending was applied to right tibias at 2.0 Hz, and magnitudes of 24, 29, 38, or 48 N were applied. Separate strain gauge analyses in 5 DW rats validated the selection of loading magnitudes. After loading, double-label histomorphometry was used to assess bone formation at the periosteal surface (Ps.S) and endocortical surface (Ec.S) of tibias. Comparing left (unloaded) tibias among groups, GH treatment had no effect on bone formation. Bone formation in tibias in DW rats was insensitive to mechanical loading. At the Ec.S, mechanically induced lamellar bone formation increased in the DWGH2 group loaded at 48 N (p < 0.05), and no significant increases in bone formation were observed among other groups. The percentage of tibias expressing woven bone formation (Wo.B) at the Ps.S was significantly greater in the DWGH groups compared with controls (p < 0.05). We concluded that GH influences loading-related bone formation in a permissive manner and modulates the responsiveness of bone tissue to mechanical stimuli by changing thresholds for bone formation.
Abstract:
In order to correctly assess biaxial fatigue material properties, one must experimentally test different load conditions and stress levels. With the rise of new in-plane biaxial fatigue testing machines, which use smaller and more efficient electrical motors instead of the conventional hydraulic machines, it is necessary to reduce the specimen size and to ensure that the specimen geometry is appropriate for the installed load capacity. At present there are no standard specimen geometries, and the indications in the literature on how to design an efficient test specimen are insufficient. The main goal of this paper is to present a methodology for obtaining an optimal cruciform specimen geometry, with thickness reduction in the gauge area, appropriate for fatigue crack initiation, as a function of the base material sheet thickness used to build the specimen. The geometry is optimized for maximum stress using several parameters, ensuring that in the gauge area the stress distributions along the loading directions are uniform and maximum under two limit phase-shift loading conditions (delta = 0 degrees and delta = 180 degrees). Therefore the fatigue damage will always initiate at the center of the specimen, avoiding failure outside this region. Using the Renard series of preferred numbers for the base material sheet thickness as a reference, the remaining geometry parameters are optimized using a derivative-free methodology, the direct multi-search (DMS) method. The final optimal geometry as a function of the base material sheet thickness is proposed, as a guideline for cruciform specimen design and as a possible contribution to a future standard on in-plane biaxial fatigue tests. © 2014, Gruppo Italiano Frattura. All rights reserved.
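The paper relies on the direct multi-search (DMS) method; as a rough illustration of the derivative-free idea only, the sketch below runs a simple coordinate pattern search over three hypothetical geometry parameters, with a placeholder objective standing in for an FE evaluation of the cruciform specimen.

```python
# Rough illustration of derivative-free optimization (not DMS itself): a simple
# coordinate pattern search over hypothetical geometry parameters.
import numpy as np

def objective(params):
    """Placeholder merit function (lower is better); a real application would
    run an FE analysis and score gauge-area stress uniformity/maximum."""
    fillet_radius, gauge_thickness, arm_width = params
    return (fillet_radius - 8.0) ** 2 + (gauge_thickness - 1.0) ** 2 \
        + (arm_width - 30.0) ** 2

def pattern_search(f, x0, step=2.0, tol=1e-3, max_iter=200):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5            # shrink the mesh, as pattern-search methods do
            if step < tol:
                break
    return x, fx

best, val = pattern_search(objective, x0=[12.0, 2.0, 40.0])
print("optimized geometry parameters (illustrative):", best)
```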
Abstract:
Load carriage is a common task for children, adolescents and adults, given the daily need to carry personal items, books and stationery to workplaces or schools. Several authors point out that the weight carried during the transport of material is the main cause of the onset of low back pain, so continued study of this topic is important in order to define recommendations and limits. The main objectives of the present study were to characterize the problems associated with the use of backpacks and to determine the Maximum Acceptable Weight (PMA) and the Perceived Exertion Index (IEP) for the backpack-carrying task, using the psychophysical approach. The study was carried out with students of the 7th, 8th and 9th grades and was divided into two phases. In the first phase, questionnaires were applied to analyse the problems associated with the use of different types of school backpacks, covering the identification of the type of backpack most used, the students' routines and habits, and the characteristics of the backpack used. It was found that students mostly use the two-strap backpack to carry school material. Backpack weight, height and body weight were then measured for the 131 students who made up the sample of the first phase; the main objective at this point was to identify the type of backpack usually used by the students as well as the weight carried in the backpacks. In the second phase, a study was carried out to determine the PMA and the IEP, using the psychophysical approach, for the backpack-carrying task, with a sample of 10 students. For this study, only the most frequently used backpack, identified in the first phase, was considered. The task consisted of carrying the backpack on both shoulders, with the straps properly adjusted to the body, along a pre-defined course, in accordance with the experimental procedure. The results indicated that not all students carry backpacks whose weight is within the recommendations of the World Health Organization. The PMA determined by the students was 6.8 kg for the two-strap backpack, and throughout the study the shoulder region was identified as the one with the greatest pain intensity during backpack carriage.
Abstract:
BACKGROUND: The main objective of this study was to explore the effect of acute creatine (Cr) ingestion on the secretion of human growth hormone (GH). METHODS: In a comparative cross-sectional study, 6 healthy male subjects ingested, at rest, a single dose of 20 g creatine (Cr-test) versus a control (c-test). Serum Cr, creatinine and GH concentrations were measured over 6 hours after Cr ingestion (Cr-test). RESULTS: During the Cr-test, all subjects showed a significant stimulation of GH (p<0.05), but with a large interindividual variability in the GH response: the difference between Cr-test and c-test averaged 83% (SD 45%). For the majority of subjects the maximum GH concentration occurred between 2 and 6 hours after the acute Cr ingestion. CONCLUSIONS: In resting conditions and at high dosages, Cr enhances GH secretion, mimicking the response to intense exercise, which also stimulates GH secretion. The acute body-weight gain and strength increase observed after Cr supplementation should therefore also be considered in the light of this indirect anabolic property of Cr.
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion centres on issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication stems from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures whose output is a single, fixed number of contributors can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is, the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that relies on categorical assumptions about N.
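A much-simplified sketch of the probabilistic route is given below: a qualitative (allele-set only) likelihood for N contributors at independent loci, a uniform prior combined via Bayes' theorem, and a quadratic loss used to select a value of N. The allele frequencies and mixture profile are invented for illustration; this is not the paper's casework model.

```python
# Simplified qualitative inference on the number of contributors N.
from itertools import combinations

def p_allele_set(observed, freqs, n_contributors):
    """P(exactly this allele set is observed | N unrelated contributors),
    by inclusion-exclusion over subsets of the observed set (2N allele draws)."""
    draws = 2 * n_contributors
    alleles = list(observed)
    total = 0.0
    for k in range(len(alleles) + 1):
        for subset in combinations(alleles, k):
            sign = (-1) ** (len(alleles) - k)
            total += sign * sum(freqs[a] for a in subset) ** draws
    return max(total, 0.0)

freqs = {"12": 0.30, "13": 0.25, "14": 0.20, "15": 0.15, "16": 0.10}  # invented
observed_loci = [{"12", "13", "14", "15"}, {"12", "14"}]               # invented profile

candidates = [1, 2, 3, 4, 5]
prior = {n: 1.0 / len(candidates) for n in candidates}
posterior = {}
for n in candidates:
    likelihood = 1.0
    for obs in observed_loci:
        likelihood *= p_allele_set(obs, freqs, n)
    posterior[n] = prior[n] * likelihood
norm = sum(posterior.values())
posterior = {n: p / norm for n, p in posterior.items()}

# decision under quadratic loss: choose the N with the smallest expected penalty
expected_loss = {n: sum(posterior[m] * (n - m) ** 2 for m in candidates)
                 for n in candidates}
best_n = min(expected_loss, key=expected_loss.get)
print("posterior over N:", posterior, "-> decision:", best_n)
```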
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performances of the WMLs based on each of them are studied. The result is that, in a great variety of situations, the WML substantially improves on the initial estimator, both in terms of finite-sample mean squared error and in terms of bias under contamination. Moreover, the performance of the WML remains rather stable when the initial MDE is changed, even though the MDEs themselves behave very differently. Two examples of application of the WML to real data are considered; in both, the need for a robust estimator is clear, as the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
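A rough two-phase sketch in the spirit of the procedure above (a crude high-breakdown initial fit, outlier downweighting, then weighted maximum likelihood) is given below for a Poisson model; the hard 0/1 weighting rule is a placeholder for the adaptive scheme, not the thesis's actual method.

```python
# Two-phase robust estimation sketch for a Poisson mean (illustrative only).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
data = np.concatenate([rng.poisson(3.0, 95),
                       np.array([25, 27, 30, 28, 26])])   # 5% gross outliers

# Phase 1: robust (possibly inefficient) initial estimate of the Poisson mean
lam0 = np.median(data)

# Phase 2: give zero weight to observations that are extremely unlikely under
# the initial fit, then compute the weighted MLE (a weighted mean for Poisson)
tail_prob = np.minimum(poisson.cdf(data, lam0), poisson.sf(data - 1, lam0))
weights = np.where(tail_prob < 1e-4, 0.0, 1.0)
lam_wml = np.sum(weights * data) / np.sum(weights)

lam_mle = data.mean()   # plain MLE, corrupted by the outliers
print(f"plain MLE: {lam_mle:.2f}   robust start: {lam0:.2f}   WML: {lam_wml:.2f}")
```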
Abstract:
PURPOSE: To present in vitro loading and release characteristics of idarubicin with ONCOZENE (CeloNova BioSciences, Inc, San Antonio, Texas) drug-eluting embolic (DEE) agents and in vivo pharmacokinetics data after transarterial chemoembolization with idarubicin-loaded ONCOZENE DEE agents in patients with hepatocellular carcinoma. MATERIALS AND METHODS: Loading efficacy of idarubicin with ONCOZENE DEE agents 100 µm and DC Bead (Biocompatibles UK Ltd, Farnham, United Kingdom) DEE agents 100-300 µm was monitored at 10, 20, and 30 minutes loading time by high-pressure liquid chromatography. A T-apparatus was used to monitor the release of idarubicin from the two types of DEE agents over 12 hours. Clinical and 24-hour pharmacokinetics data were recorded after transarterial chemoembolization with idarubicin-loaded ONCOZENE DEE agents in four patients with unresectable hepatocellular carcinoma. RESULTS: Idarubicin loading in ONCOZENE DEE agents was > 99% at 10 minutes. Time to reach 75% of the release plateau level was 37 minutes ± 6 for DC Bead DEE agents and 170 minutes ± 19 for ONCOZENE DEE agents both loaded with idarubicin 10 mg/mL. After transarterial chemoembolization with idarubicin-loaded ONCOZENE DEE agents, three partial responses and one complete response were observed with only two asymptomatic grade 3 biologic adverse events. Median time to maximum concentration for idarubicin in patients was 10 minutes, and mean maximum concentration was 4.9 µg/L ± 1.7. Mean area under the concentration-time curve from 0-24 hours was equal to 29.5 µg.h/L ± 20.5. CONCLUSIONS: ONCOZENE DEE agents show promising results with very fast loading ability, a favorable in vivo pharmacokinetics profile with a sustained release of idarubicin during the first 24 hours, and encouraging safety and responses. Histopathologic and clinical studies are needed to evaluate idarubicin release around the DEE agents in tumor tissue and to confirm safety and efficacy.
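For orientation, the pharmacokinetic summary measures reported above (Cmax, Tmax, and the AUC from 0 to 24 hours) can be computed from a concentration-time profile with the trapezoidal rule; the sampling times and concentrations below are made up, not the patients' data.

```python
# Illustrative pharmacokinetic summary from a made-up concentration-time profile.
import numpy as np

t = np.array([0.0, 10/60, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])       # hours
c = np.array([0.0, 4.9, 3.1, 2.2, 1.6, 1.1, 0.7, 0.4, 0.2])           # µg/L

cmax, tmax = c.max(), t[np.argmax(c)]
auc_0_24 = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)                # µg.h/L, trapezoidal rule

print(f"Cmax = {cmax:.1f} µg/L at Tmax = {tmax * 60:.0f} min; "
      f"AUC(0-24 h) = {auc_0_24:.1f} µg.h/L")
```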