988 results for "Variable amplitude loading"
Abstract:
Introduction: The structural and functional organisation of the nervous system in response to the organisation of different types of input, within the scope of physiotherapy intervention, can enhance postural control for the regulation of stiffness, with repercussions on gait and sit-to-stand. Objective: To describe the behaviour of ankle stiffness during dorsiflexion, in the ipsilesional and contralesional lower limb, in individuals after stroke, following a physiotherapy intervention based on a clinical reasoning process. We also aimed to observe changes in the electromyographic activity of the plantar flexors, medial gastrocnemius and soleus, during gait and sit-to-stand. Methods: A rehabilitation programme was implemented in 4 individuals with stroke sequelae over a period of 3 months, with assessments at the initial and final moments (M0 and M1). Ankle torque and joint range of motion were monitored with an isokinetic dynamometer during passive dorsiflexion, and the level of electromyographic activity of the soleus and medial gastrocnemius was recorded with surface electromyography. The load-acceptance phases of sit-to-stand (phase II) and gait (sub-phase II) were studied. Results: In all individuals studied, stiffness decreased across all ranges of motion at M1. The level of electromyographic activity behaved differently across individuals. Conclusion: Stiffness tended to decrease in the individuals studied between M0 and M1. Changes in the level of electromyographic activity were recorded, without a clear trend between the two moments for this variable.
Abstract:
This thesis studies the amplitude amplification algorithm and its applications in the field of property testing. We use amplitude amplification to propose the most efficient quantum algorithm to date for testing the linearity of Boolean functions, and we generalise this new algorithm to test whether a function between two finite abelian groups is a homomorphism. The best known quantum algorithm for testing the symmetry of Boolean functions is also improved, and we use this new algorithm to test the quasi-symmetry of Boolean functions. We then deepen the study of the number of black-box queries made by the amplitude amplification algorithm when the initial amplitude is unknown. A rigorous description of the random variable representing this number is presented, followed by the previously known upper bound on its expectation. New results on the variance of this variable follow. In particular, we show that in the general case the variance is infinite, but we also show that, for an appropriate choice of parameters, it becomes bounded above.
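The query-count behaviour discussed above rests on a standard relation from the amplitude amplification framework of Brassard, Høyer, Mosca and Tapp, stated here for orientation (not taken from the thesis itself): if the initial success probability is $a = \sin^2\theta_a$, then after $j$ applications of the amplification operator $Q$ a measurement succeeds with probability

\[
\Pr[\text{success}] \;=\; \sin^2\!\big((2j+1)\,\theta_a\big), \qquad a = \sin^2\theta_a .
\]

Thus roughly $\pi/(4\theta_a) = O(1/\sqrt{a})$ applications suffice when $a$ is known; when $a$ is unknown, an exponential search over $j$ keeps the expected number of queries at $O(1/\sqrt{a})$, and it is the distribution of this random query count whose expectation and variance the thesis analyses.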
Abstract:
The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data calls for automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series, and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can be further classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
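The phase-folding step described above can be sketched in a few lines (a minimal illustration in Python with NumPy; the sampling, period and magnitudes are invented for the example, not survey data):

```python
import numpy as np

def phase_fold(times, period, t0=0.0):
    """Fold observation times on a trial period; returns phases in [0, 1)."""
    return ((np.asarray(times, dtype=float) - t0) / period) % 1.0

# Unevenly sampled sinusoidal "light curve" (all values illustrative)
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 100.0, 200))   # observation epochs in days
true_period = 2.5
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period)
phase = phase_fold(t, true_period)          # plot mag vs. phase -> phased light curve
```

Plotting `mag` against `phase` (instead of `t`) collapses all cycles onto one, which is exactly what makes the characteristic shape of each variability class visible.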
One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. For ground-based observations this is due to the daily variation of daylight and to weather conditions, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes: power leakage to other frequencies, due to the finite total interval, finite sampling interval and finite amount of data; aliasing, due to the influence of regular sampling; spurious periods, which appear due to long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
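As one concrete example of the non-parametric methods mentioned, a minimal PDM-style statistic in the spirit of Stellingwerf (1978) might look as follows; the binning scheme and all parameter values are illustrative, not the thesis's implementation:

```python
import numpy as np

def pdm_theta(times, values, period, n_bins=10):
    """PDM statistic: pooled within-bin variance of the phased data divided
    by the overall variance.  The true period should give theta well below 1;
    a wrong period scrambles the phases and gives theta near 1."""
    phase = (np.asarray(times) / period) % 1.0
    overall = np.var(values, ddof=1)
    num, dof = 0.0, 0
    for b in range(n_bins):
        sel = values[(phase >= b / n_bins) & (phase < (b + 1) / n_bins)]
        if sel.size > 1:
            num += np.var(sel, ddof=1) * (sel.size - 1)
            dof += sel.size - 1
    return (num / dof) / overall

# Synthetic unevenly sampled light curve with a 0.6-day period
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 400))
y = 1.0 + 0.5 * np.sin(2 * np.pi * t / 0.6) + rng.normal(0.0, 0.02, t.size)
```

Scanning `pdm_theta` over a grid of trial periods and taking the minimum is the usual way this statistic is turned into a period search.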
It would benefit the variable star astronomical community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied; the strengths and weaknesses of these methods are evaluated by applying them to two survey databases; and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
Abstract:
We present simultaneous multicolor infrared and optical photometry of the black hole X-ray transient XTE J1118+480 during its short 2005 January outburst, supported by simultaneous X-ray observations. The variability is dominated by short timescales, ~10 s, although a weak superhump also appears to be present in the optical. The optical rapid variations, at least, are well correlated with those in X-rays. Infrared JHKs photometry, as in the previous outburst, exhibits especially large-amplitude variability. The spectral energy distribution (SED) of the variable infrared component can be fitted with a power law of slope α=-0.78+/-0.07, where F_ν~ν^α. There is no compelling evidence for evolution in the slope over five nights, during which time the source brightness decayed along almost the same track as seen in variations within the nights. We conclude that both short-term variability and longer timescale fading are dominated by a single component of constant spectral shape. We cannot fit the SED of the IR variability with a credible thermal component, either optically thick or thin. This IR SED is, however, approximately consistent with optically thin synchrotron emission from a jet. These observations therefore provide indirect evidence to support jet-dominated models for XTE J1118+480 and also provide a direct measurement of the slope of the optically thin emission, which is impossible, based on the average spectral energy distribution alone.
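The quoted slope comes from fitting F_ν ∝ ν^α to the SED of the variable infrared component; the basic fit can be sketched as a linear regression in log-log space. The band frequencies and normalisation below are hypothetical, not the measured values:

```python
import numpy as np

def power_law_slope(freqs, fluxes):
    """Least-squares slope alpha of F_nu ~ nu^alpha, fitted in log-log space."""
    alpha, _ = np.polyfit(np.log10(freqs), np.log10(fluxes), 1)
    return alpha

# Hypothetical Ks, H, J and optical band frequencies in Hz (illustrative only)
nu = np.array([1.4e14, 1.8e14, 2.4e14, 5.5e14])
flux = 3.0 * (nu / 1e14) ** -0.78   # noiseless power law with alpha = -0.78
```

With real photometry one would fit flux densities with uncertainties (weighted fit) rather than the noiseless values used here.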
Abstract:
Recent literature has highlighted that the flexibility of walking barefoot reduces overload in individuals with knee osteoarthritis (OA). As such, the aim of this study was to evaluate the effects of inexpensive, flexible, non-heeled footwear (Moleca®) compared with modern heeled shoes and barefoot walking on the knee adduction moment (KAM) during gait in elderly women with and without knee OA. The gait of 45 elderly women between 60 and 70 years of age was evaluated. Twenty-one had knee OA graded 2 or 3 according to Kellgren and Lawrence's criteria, and 24 who had no OA comprised the control group (CG). The gait conditions were: barefoot, Moleca®, and modern heeled shoes. Three-dimensional kinematics and ground reaction forces were measured to calculate the KAM by inverse dynamics. For both groups, the Moleca® provided peak KAM and KAM impulse similar to barefoot walking. For the OA group, the Moleca® reduced the KAM even further compared to the barefoot condition during midstance. On the other hand, the modern heeled shoes increased this variable in both groups. Inexpensive, flexible, non-heeled footwear provided knee joint loading similar to barefoot gait and significant overload decreases in elderly women with and without knee OA, compared to modern heeled shoes. During midstance, the Moleca® also allowed a greater reduction in knee joint loads compared to barefoot gait in elderly women with knee OA, with the further advantage of providing external foot protection during gait. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Purpose: The aim of this study was to assess the contributions of prosthetic parameters such as crown-to-implant (C/I) ratio, retention system, restorative material, and occlusal loading to stress concentrations within a single posterior crown supported by a short implant. Materials and Methods: Computer-aided design software was used to create 32 finite element models of an atrophic posterior partially edentulous mandible with a single external-hexagon implant (5 mm wide × 7 mm long) in the first molar region. Finite element analysis software with a 5% convergence criterion for mesh refinement was used to evaluate the effects of C/I ratio (1:1, 1.5:1, 2:1, or 2.5:1), prosthetic retention system (cemented or screwed), and restorative material (metal-ceramic or all-ceramic). The crowns were loaded with simulated normal or traumatic occlusal forces. The maximum principal stress (σmax) for cortical and cancellous bone and the von Mises stress (σvM) for the implant and abutment screw were computed and analyzed. The percent contribution of each variable to the stress concentration was calculated from the sum-of-squares analysis. Results: Traumatic occlusion and a high C/I ratio increased stress concentrations. The C/I ratio was responsible for 11.45% of the total stress in the cortical bone, whereas occlusal loading contributed 70.92% of the total stress in the implant. The retention system contributed 0.91% of the total stress in the cortical bone, and the restorative material was responsible for only 0.09% of the total stress in the cancellous bone. Conclusion: Occlusal loading was the most important stress concentration factor in the finite element model of a single posterior crown supported by a short implant.
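The percent-contribution step can be illustrated with a small sketch: each factor's share is its sum of squares divided by the total. The SS values below are entirely hypothetical, chosen only so that the shares echo the percentages quoted in the abstract; they are not the study's data:

```python
def percent_contributions(sum_of_squares):
    """Percent contribution of each design factor, computed as its sum of
    squares divided by the total sum of squares (ANOVA-style breakdown)."""
    total = sum(sum_of_squares.values())
    return {factor: 100.0 * ss / total for factor, ss in sum_of_squares.items()}

ss = {  # hypothetical sums of squares, not the paper's raw values
    "C/I ratio": 11.45,
    "occlusal loading": 70.92,
    "retention system": 0.91,
    "restorative material": 0.09,
    "residual": 16.63,
}
shares = percent_contributions(ss)
```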
Abstract:
The Amazon basin is a region of constant scientific interest due to its environmental importance and its globally significant biodiversity and climate. The seasonal variation in water volume is one example of the topics studied nowadays. In general, variations in river levels depend primarily on the climatic and physical characteristics of the corresponding basins. The main factor influencing the water level in the Amazon basin is the intensive rainfall over the region, a consequence of the humid tropical climate. Unfortunately, the Amazon basin is an area lacking water level information, owing to the difficulty of access for local operations. The purpose of this study is to compare and evaluate the Equivalent Water Height (Ewh) from the GRACE (Gravity Recovery And Climate Experiment) mission, in order to study the connection between water loading and vertical variations of the crust due to hydrological loading. To achieve this goal, the Ewh is compared with in-situ information from limnimeters. For the analysis, the correlation coefficients, phase and amplitude of the GRACE Ewh solutions and the in-situ data were computed, as well as the timing of drought periods in different parts of the basin. The results indicated that vertical variations of the lithosphere due to water mass loading can reach 5 to 7 cm per year in the sedimentary and flooded areas of the region, where water level variations can reach 8 to 10 m.
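Comparing the amplitude and phase of the GRACE Ewh and limnimeter series is typically done by fitting an annual harmonic to each record; a minimal sketch of that fit (synthetic data with an invented amplitude and phase, not GRACE values):

```python
import numpy as np

def annual_harmonic(t_years, series):
    """Least-squares fit of y = A*cos(2*pi*t - phi) + c to a series sampled
    in decimal years; returns (amplitude A, phase phi in radians)."""
    w = 2 * np.pi * np.asarray(t_years)
    design = np.column_stack([np.cos(w), np.sin(w), np.ones_like(w)])
    a, b, _ = np.linalg.lstsq(design, series, rcond=None)[0]
    return float(np.hypot(a, b)), float(np.arctan2(b, a))

# Synthetic monthly "equivalent water height" with a known annual cycle
t = np.arange(0, 8, 1 / 12)                      # 8 years, monthly sampling
ewh = 0.35 * np.cos(2 * np.pi * t - 1.1) + 0.02  # amplitude 0.35 m, phase 1.1 rad
amp, phase = annual_harmonic(t, ewh)
```

Running the same fit on both the satellite and the in-situ series gives directly comparable amplitudes and phase lags.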
Abstract:
OBJECTIVE: Immediate and early loading of dental implants can simplify treatment and increase overall patient satisfaction. The purpose of this 3-year prospective randomized-controlled multicenter study was to assess the differences in survival rates and bone level changes between immediately and early-loaded implants with a new chemically modified surface (SLActive). This investigation reports interim results obtained after 5 months. MATERIAL AND METHODS: Patients ≥18 years of age missing at least one tooth in the posterior maxilla or mandible were enrolled in the study. Following implant placement, patients received a temporary restoration either on the day of surgery (immediate loading) or 28-34 days after surgery (early loading); restorations consisted of single crowns or two- to four-unit fixed dental prostheses. Permanent restorations were placed 20-23 weeks after surgery. The primary efficacy variable was the change in bone level (assessed on standardized radiographs) from baseline to 5 months; secondary variables included implant survival and success rates. RESULTS: A total of 266 patients were enrolled (118 males and 148 females), and a total of 383 implants were placed (197 and 186 in the immediate and early loading groups, respectively). Mean patient age was 46.3±12.8 years. After 5 months, implant survival rates were 98% in the immediate group and 97% in the early group. Mean bone level change from baseline was 0.81±0.89 mm in the immediate group and 0.56±0.73 mm in the early group (P<0.05). Statistical analysis revealed a significant center effect (P<0.0001) and a significant treatment × center interaction (P=0.008). CONCLUSIONS: The results suggest that Straumann implants with an SLActive surface can be used predictably in time-critical (early or immediate) loading protocols when appropriate patient selection criteria are observed.
The mean bone level changes observed from baseline to 5 months (0.56 and 0.81 mm) corresponded to physiological observations from other studies, i.e., they were not clinically significant. The presence of a significant center effect and treatment × center interaction indicated that the differences in bone level changes between the two groups were center dependent.
Abstract:
Studies on the relationship between psychosocial determinants and HIV risk behaviors have produced little evidence to support hypotheses based on theoretical relationships. One limitation inherent in many articles in the literature is the method of measurement of the determinants and the analytic approach selected. To reduce the misclassification associated with unit scaling of measures specific to internalized homonegativity, I evaluated the psychometric properties of the Reactions to Homosexuality scale in a confirmatory factor analytic framework. In addition, I assessed the measurement invariance of the scale across racial/ethnic classifications in a sample of men who have sex with men. The resulting measure contained eight items loading on three first-order factors. Invariance assessment identified metric and partial strong invariance between racial/ethnic groups in the sample. Application of the updated measure to a structural model allowed for the exploration of direct and indirect effects of internalized homonegativity on unprotected anal intercourse. Pathways identified in the model show that drug and alcohol use at last sexual encounter, the number of sexual partners in the previous three months and sexual compulsivity all contribute directly to risk behavior. Internalized homonegativity reduced the likelihood of exposure to drugs, alcohol or higher numbers of partners. For men who developed compulsive sexual behavior as a coping strategy for internalized homonegativity, there was an increase in the prevalence odds of risk behavior. In the final stage of the analysis, I conducted a latent profile analysis of the items in the updated Reactions to Homosexuality scale. This analysis identified five distinct profiles, which suggested that the construct is not homogeneous in samples of men who have sex with men. Lack of prior consideration of these distinct manifestations of internalized homonegativity may have contributed to the analytic difficulty in identifying a relationship between the trait and high-risk sexual practices.
Abstract:
We report numerical evidence of the effects of a periodic modulation in the delay time of a delayed dynamical system. By referring to a Mackey-Glass equation and by adding a modulation in the delay time, we describe how the solution of the system passes from being chaotic to shadow periodic states. We analyze this transition for both sinusoidal and sawtooth wave modulations, and we give, in the latter case, the relationship between the period of the shadowed orbit and the amplitude of the modulation. Future goals and open questions are highlighted.
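A minimal numerical sketch of this setup, assuming the standard Mackey-Glass form dx/dt = βx(t−τ)/(1+x(t−τ)^n) − γx with a sinusoidally modulated delay; the parameter values are the usual textbook ones, not necessarily those of the paper:

```python
import numpy as np

def mackey_glass_modulated(beta=0.2, gamma=0.1, n=10, tau0=17.0,
                           mod_amp=0.0, mod_period=50.0,
                           dt=0.1, steps=5000, x0=1.2):
    """Euler integration of the Mackey-Glass equation with a modulated
    delay tau(t) = tau0 + mod_amp*sin(2*pi*t/mod_period).  The history
    needed for the delayed term is kept in the same array as the solution."""
    max_delay = int(np.ceil((tau0 + abs(mod_amp)) / dt)) + 1
    x = np.full(steps + max_delay, x0)           # constant initial history
    for i in range(max_delay, steps + max_delay - 1):
        t = (i - max_delay) * dt
        tau = tau0 + mod_amp * np.sin(2 * np.pi * t / mod_period)
        xd = x[i - int(round(tau / dt))]         # delayed state x(t - tau)
        x[i + 1] = x[i] + dt * (beta * xd / (1.0 + xd ** n) - gamma * x[i])
    return x[max_delay:]

series = mackey_glass_modulated(mod_amp=2.0)     # mod_amp=0 gives the classic chaotic case
```

Comparing runs with `mod_amp=0` against runs with a nonzero modulation is the natural way to look for the chaos-to-shadowed-orbit transition described above; a sawtooth modulation would simply replace the `sin` term.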
Abstract:
This thesis addresses the isomerisation process of the LiNC/LiCN molecular system, both isolated and in the presence of a laser pulse, by applying transition state theory (TST). The cornerstone of this theory is that knowledge of the dynamics in the vicinity of a saddle point of the potential energy surface allows the kinetic parameters of the reaction under study to be determined. Historically, there have been two formulations of transition state theory: the thermodynamic version of Eyring (Eyr38) and the dynamical view of Wigner (Wig38). The latter has recently undergone extensive development, in parallel with advances in dynamical systems, giving rise to a geometric formulation in phase space that underpins the work in this thesis. We have approached the problem from a fundamentally practical standpoint, since transition state theory has one major drawback: its high computational cost and long calculation times. This work has had two main objectives. The first was to lay the theoretical and computational foundations of an efficient algorithm for obtaining the fundamental magnitudes of TST. To this end, we successfully adapted a computational algorithm developed in the field of celestial mechanics (Jor99), obtaining a fast and efficient method for computing the geometric objects that govern the dynamics in phase space. This made it possible to calculate kinetic magnitudes such as the reactive flux, the density of states of reactants and products and, ultimately, the rate constant. These calculations were compared with statistical results (presented in (Mül07)), demonstrating the effectiveness of the method. The second objective of this thesis was to evaluate the influence of the parameters of an electromagnetic pulse on the reaction dynamics.
To this end, the methodology for obtaining the normal form of the Hamiltonian was generalised to the case in which the chemical system is altered by a periodic time-dependent perturbation. In this case, the unstable fixed point in whose vicinity the geometric objects relevant to TST are computed becomes a periodic orbit with the same period as the perturbation. This allowed the simulation of reactivity in the presence of a laser pulse; knowing the effect of this perturbation makes it possible to control chemical reactivity. In addition to obtaining the geometric objects that govern the dynamics in a neighbourhood of the periodic orbit, which are the key to TST, we studied the effect of the pulse parameters on reactivity in the global phase space, as well as on the reactive flux crossing the dividing surface that separates reactants from products. It was thus shown that the pulse amplitude is the most influential parameter on chemical reactivity: it can produce reactive flux at energies below the appearance threshold of the isolated system, and it can increase the reactive flux at constant values of the initial energy.
Abstract:
We present an experimental analysis of quadrature entanglement produced from a pair of amplitude squeezed beams. The correlation matrix of the state is characterized within a set of reasonable assumptions, and the strength of the entanglement is gauged using measures of the degree of inseparability and the degree of Einstein-Podolsky-Rosen (EPR) paradox. We introduce controlled decoherence in the form of optical loss to the entangled state, and demonstrate qualitative differences in the response of the degrees of inseparability and EPR paradox to this loss. The entanglement is represented on a photon number diagram that provides an intuitive and physically relevant description of the state. We calculate efficacy contours for several quantum information protocols on this diagram, and use them to predict the effectiveness of our entanglement in those protocols.
Abstract:
A comparison of a constant (continuous delivery of 4% FiO2) and a variable (initial 5% FiO2 with adjustments to induce low amplitude EEG (LAEEG) and hypotension) hypoxic/ischemic insult was performed to determine which insult was more effective in producing a consistent degree of survivable neuropathological damage in a newborn piglet model of perinatal asphyxia. We also examined which physiological responses contributed to this outcome. Thirty-nine 1-day-old piglets were subjected to either a constant hypoxic/ischemic insult of 30- to 37-min duration or a variable hypoxic/ischemic insult of 30 min of low peak amplitude EEG (LAEEG < 5 μV) including 10 min of low mean arterial blood pressure (MABP < 70% of baseline). Control animals (n = 6) received 21% FiO2 for the duration of the experiment. At 72 h, the piglets were euthanased; their brains were removed, fixed in 4% paraformaldehyde and assessed for hypoxic/ischemic injury by histological analysis. Based on neuropathology scores, piglets were grouped as undamaged or damaged; piglets that did not survive to 72 h were grouped separately as dead. The variable insult resulted in a greater number of piglets with neuropathological damage (undamaged = 12.5%, damaged = 68.75%, dead = 18.75%), while the constant insult resulted in a large proportion of undamaged piglets (undamaged = 50%, damaged = 22.2%, dead = 27.8%). A hypoxic insult varied to maintain peak amplitude EEG < 5 μV results in a greater number of survivors with a consistent degree of neuropathological damage than a constant hypoxic insult. The physiological variables MABP, LAEEG, pH and arterial base excess were found to be significantly associated with neuropathological outcome. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
The results of two experiments are reported that examined how performance in a simple interceptive action (hitting a moving target) was influenced by the speed of the target, the size of the intercepting effector and the distance moved to make the interception. In Experiment 1, target speed and the width of the intercepting manipulandum (bat) were varied. The hypothesis that people make briefer movements, when the temporal accuracy and precision demands of the task are high, predicts that bat width and target speed will divisively interact in their effect on movement time (MT) and that shorter MTs will be associated with a smaller temporal variable error (VE). An alternative hypothesis that people initiate movement when the rate of expansion (ROE) of the target's image reaches a specific, fixed criterion value predicts that bat width will have no effect on MT. The results supported the first hypothesis: a statistically reliable interaction of the predicted form was obtained and the temporal VE was smaller for briefer movements. In Experiment 2, distance to move and target speed were varied. MT increased in direct proportion to distance and there was a divisive interaction between distance and speed; as in Experiment 1, temporal VE was smaller for briefer movements. The pattern of results could not be explained by the strategy of initiating movement at a fixed value of the ROE or at a fixed value of any other perceptual variable potentially available for initiating movement. It is argued that the results support pre-programming of MT with movement initiated when the target's time to arrival at the interception location reaches a criterion value that is matched to the pre-programmed MT. The data supported completely open-loop control when MT was less than between 200 and 240 ms with corrective sub-movements increasingly frequent for movements of longer duration.
Abstract:
This thesis concerns the development and validation of new criteria for the multiaxial fatigue assessment of metallic structural components. In particular, the new criteria are applicable to metallic components subjected to a wide range of loading configurations: time-varying multiaxial loadings, both cyclic and random, for high- and low/medium-cycle fatigue. These criteria are a useful tool for evaluating the fatigue strength/life of metallic structural elements, being simple to implement and requiring rather modest computation times. The first Chapter presents the issues related to multiaxial fatigue, introducing some theoretical aspects useful for describing the fatigue damage mechanism (crack propagation and final fracture) of metallic structural components subjected to time-varying loads. The different approaches available in the literature for the multiaxial fatigue assessment of such components are then presented, with particular attention to the critical plane approach. Finally, the engineering quantities related to the critical plane are defined, as used in fatigue design under cyclic multiaxial loading for high- and low/medium-cycle fatigue. The second Chapter is devoted to the development of a new criterion for evaluating the fatigue strength of metallic structural elements subjected to cyclic multiaxial loading and a high number of cycles. The criterion is based on the critical plane approach and is formulated in terms of stresses. Its development significantly revises a previous formulation proposed by Carpinteri and co-workers in 2011.
In particular, the first revision concerns the determination of the critical plane orientation: new expressions for the angle relating the orientation of the critical plane to that of the fracture plane are implemented in the criterion's algorithm. The second revision concerns the definition of the shear stress amplitude: a new method, known as the Prismatic Hull (PH) method (by Araújo and co-workers), is implemented in the algorithm. The reliability of the criterion is then verified against numerous experimental data available in the literature. The third Chapter proposes a newly formulated criterion for evaluating the fatigue life of metallic structural elements subjected to cyclic multiaxial loading and a low/medium number of cycles. The criterion is based on the critical plane approach and is formulated in terms of strains. In particular, the proposed formulation takes its general framework from the high-cycle multiaxial fatigue criterion discussed in the second Chapter. Since significant plastic strains (such as those characterising low/medium-cycle fatigue) require knowledge of the effective Poisson's ratio of the material, three different strategies are employed: the ratio is computed analytically, computed numerically, or taken as a constant value frequently adopted in the literature. The criterion is then validated against numerous experimental data from the literature, with numerical results obtained for the different values of the effective Poisson's ratio. Furthermore, in order to account for the significant stress gradients occurring at geometrical discontinuities, such as notches, the criterion is also extended to notched structural components.
The criterion, reformulated by implementing the control volume concept proposed by Lazzarin and co-workers, is used to estimate the fatigue life of specimens with a severe V-notch, made of grade 5 titanium alloy. The fourth Chapter is devoted to the development of a new criterion for evaluating the fatigue damage of metallic structural elements subjected to random multiaxial loading and a high number of cycles. The criterion is based on the critical plane approach and is formulated in the frequency domain. Its development significantly revises a previous formulation proposed by Carpinteri and co-workers in 2014. In particular, the revision concerns the determination of the critical plane orientation, with new expressions for the angle relating the orientation of the critical plane to that of the fracture plane implemented in the criterion's algorithm. Finally, the reliability of the criterion is verified against numerous experimental data available in the literature.