992 results for SHTB impact experiments


Relevance: 30.00%

Abstract:

Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, they would not be repeatable by other researchers because commercial confidentiality prevents the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory, and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual in that they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated.
We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
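The four-component modelling chain described above can be made concrete with a toy sketch that wires stand-in modules together. Every function, distribution and number below is a hypothetical placeholder for illustration, not a component of the actual Dublin model:

```python
import random

# Toy catastrophe-model chain: stochastic rainfall -> flood hazard
# -> vulnerability -> financial loss. All modules are hypothetical placeholders.

def stochastic_rainfall(rng):
    """Draw one synthetic extreme-rainfall event (mm/day) from a heavy-tailed toy distribution."""
    return 40.0 + rng.expovariate(1.0 / 25.0)

def flood_depth(rain_mm):
    """Toy hazard module: flood depth (m) above a runoff threshold."""
    return max(0.0, (rain_mm - 50.0) / 100.0)

def damage_fraction(depth_m):
    """Toy vulnerability curve: damage fraction saturating with depth."""
    return min(1.0, depth_m / 2.0)

def event_loss(depth_m, exposed_value=1e9):
    """Toy financial module: damage fraction times total exposed value."""
    return damage_fraction(depth_m) * exposed_value

def annual_average_loss(n_events=20_000, seed=1):
    """Monte Carlo average loss over a large synthetic event set."""
    rng = random.Random(seed)
    total = sum(event_loss(flood_depth(stochastic_rainfall(rng))) for _ in range(n_events))
    return total / n_events

aal = annual_average_loss()
```

Refitting the parameters of `stochastic_rainfall` to different precipitation data sets (gauge, radar, ERA-Interim, CMORPH) and comparing the resulting `annual_average_loss` values is, in miniature, the sensitivity experiment the paper performs.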

Relevance: 30.00%

Abstract:

Previous studies have shown that the Indo-Pacific atmospheric response to ENSO comprises two dominant modes of variability: a meridionally quasi-symmetric response (independent of the annual cycle) and an anti-symmetric response (arising from the nonlinear atmospheric interaction between ENSO variability and the annual cycle), referred to as the combination mode (C-Mode). This study demonstrates that the direct El Niño signal over the tropics is confined to the equatorial region and has no significant impact on the atmospheric response over East Asia. The El Niño-associated equatorial anomalies can be expanded towards off-equatorial regions by the C-Mode through ENSO's interaction with the annual cycle. The C-Mode is the prime driver of the development of an anomalous low-level anticyclone over the western North Pacific (WNP) during the El Niño decay phase, which usually transports more moisture to East Asia and thereby causes more precipitation over southern China. We use an Atmospheric General Circulation Model that reproduces the WNP anticyclonic anomalies well when both the El Niño sea surface temperature (SST) anomalies and the SST annual cycle are prescribed as boundary conditions. However, no significant WNP anticyclonic circulation anomaly appears during the El Niño decay phase when the SST annual cycle is excluded. Our analyses of observational data and model experiments suggest that the annual cycle plays a key role in the East Asian climate anomalies associated with El Niño through their nonlinear atmospheric interaction. Hence, a realistic simulation of the annual cycle is crucial in order to correctly capture the ENSO-associated climate anomalies over East Asia.

Relevance: 30.00%

Abstract:

Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting from eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of the "best" version of each algorithm (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature in the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information on the occurrence of the common aerosol components in the retrievals. The third experiment assessed, against the baseline dataset versions, the impact of using a common nadir cloud mask for the AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals.
The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps, and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm, globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed an assessment of the sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components, and partly also the a priori aerosol-type climatology, is beneficial. On the other hand, the use of an AATSR-based common cloud mask brought a clear improvement (though with a significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. It is noted that these observations are mostly consistent across all five analyses (global land, global coastal, and the three regions), which is well understood, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low- and high-absorption fine modes, sea salt and dust).
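The quantitative validation step described above reduces to computing error statistics over co-located daily satellite/AERONET pairs. A minimal sketch of that matching-and-scoring step, with made-up AOD values, might look like:

```python
import math

def validation_stats(satellite_aod, aeronet_aod):
    """Bias and RMSE over days where both the gridded satellite AOD and the
    daily-averaged AERONET AOD are available (None marks a missing day)."""
    pairs = [(s, a) for s, a in zip(satellite_aod, aeronet_aod)
             if s is not None and a is not None]  # keep co-located valid days only
    n = len(pairs)
    bias = sum(s - a for s, a in pairs) / n
    rmse = math.sqrt(sum((s - a) ** 2 for s, a in pairs) / n)
    return bias, rmse

# Hypothetical daily AOD time series at one site; None = cloud-masked / missing day.
sat = [0.21, 0.35, None, 0.18, 0.50]
aer = [0.19, 0.30, 0.25, None, 0.44]
bias, rmse = validation_stats(sat, aer)
```

In the paper the same statistics are accumulated per algorithm version, per region, and globally over land and coastal sites; the sketch only shows the per-site core of that computation.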

Relevance: 30.00%

Abstract:

Current methods for initialising coupled atmosphere-ocean forecasts often rely on separate atmosphere and ocean analyses, the combination of which can leave the coupled system imbalanced at the beginning of the forecast, potentially accelerating the development of errors. Using a series of experiments with the European Centre for Medium-Range Weather Forecasts coupled system, the magnitude and extent of these so-called initialisation shocks are quantified, and their impact on forecast skill measured. It is found that forecasts initialised from separate ocean and atmospheric analyses do exhibit initialisation shocks in lower-atmospheric temperature when compared to forecasts initialised using a coupled data assimilation method. These shocks result in as much as a doubling of the root-mean-square error on the first day of the forecast in some regions, and in increases that are sustained for the duration of the 10-day forecasts performed here. However, the impact of this choice of initialisation on forecast skill, assessed using independent datasets, was found to be negligible, at least over the limited period studied. Larger initialisation shocks are found to follow a change in either the atmospheric or the ocean model component between the analysis and forecast phases: changes in the ocean component can lead to sea surface temperature shocks of more than 0.5 K in some equatorial regions during the first day of the forecast. Implications for the development of coupled forecast systems, particularly with respect to coupled data assimilation methods, are discussed.
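The headline diagnostic above, error growth as a function of forecast lead time with the shock largest on day 1, can be sketched with a simple RMSE-per-lead-day calculation. The error values below are invented for illustration only:

```python
import math

def rmse_by_lead(errors):
    """RMSE at each lead time over a set of forecasts.
    `errors` holds one list per forecast of (forecast - verification) errors per day."""
    n_days = len(errors[0])
    n_fcst = len(errors)
    return [math.sqrt(sum(e[d] ** 2 for e in errors) / n_fcst) for d in range(n_days)]

# Invented lower-atmosphere temperature errors (K) for two 3-day forecasts
# from each initialisation strategy.
separate = [[0.8, 0.9, 1.0], [1.2, 1.1, 1.0]]   # separate ocean/atmosphere analyses
coupled  = [[0.4, 0.7, 0.9], [0.6, 0.9, 1.1]]   # coupled data assimilation
shock_ratio = [s / c for s, c in zip(rmse_by_lead(separate), rmse_by_lead(coupled))]
# The ratio is largest on day 1 and decays with lead time, mimicking an initialisation shock.
```

With these toy numbers the day-1 RMSE ratio is 2, echoing the "as much as a doubling" found regionally in the study.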

Relevance: 30.00%

Abstract:

Westerly wind bursts (WWBs) that occur in the western tropical Pacific are believed to play an important role in the development of El Niño events. Here, following the study of Lengaigne et al. (Clim Dyn 23(6):601–620, 2004), we conduct numerical simulations in which we reexamine the response of the climate system to an observed wind burst added to a coupled general circulation model. Two sets of twin ensemble experiments are conducted (each set has control and perturbed experiments). In the first set, the initial ocean heat content of the system is higher than the model climatology (recharged), while in the second set it is nearly normal (neutral). For the recharged state, in the absence of WWBs, a moderate El Niño with a maximum warming in the central Pacific (CP) develops in about a year. In contrast, for the neutral state, a weak La Niña develops. However, when the WWB is imposed, the situation changes dramatically: the recharged state slides into an El Niño with a maximum warming in the eastern Pacific, while the neutral set produces a weak CP El Niño instead of the previous La Niña conditions. The different responses of the system to exactly the same perturbation are controlled by the initial state of the ocean and the subsequent ocean–atmosphere interactions, involving the interplay between the eastward shift of the warm pool and the warming of the eastern equatorial Pacific. Consequently, the observed diversity of El Niño, including the occurrence of extreme events, may depend on stochastic atmospheric processes modulating El Niño properties within a broad continuum.

Relevance: 30.00%

Abstract:

The impact of extreme sea ice initial conditions on modelled climate is analysed for a fully coupled atmosphere-ocean-sea ice general circulation model, the Hadley Centre climate model HadCM3. A control run with greenhouse gas concentrations fixed at preindustrial conditions serves as the reference experiment. Sensitivity experiments show an almost complete recovery from a total removal or a strong increase of sea ice after four years. Thus, uncertainties in initial sea ice conditions seem to be unimportant for climate modelling on decadal or longer time scales. When the initial conditions of the ocean mixed layer were adjusted to ice-free conditions, a few substantial differences remained for more than 15 model years. But these differences are clearly smaller than the uncertainty across the HadCM3 run and all the other 19 IPCC Fourth Assessment Report climate model preindustrial runs. It remains an important task to improve the simulation of past sea ice variability in climate models so that they can make reliable projections for the 21st century.

Relevance: 30.00%

Abstract:

Sea ice export from the Arctic is of global importance because its fresh water influences oceanic stratification and, thus, the global thermohaline circulation. This study examines the effect of cyclones on sea ice, and on sea ice transport in particular, on the basis of observations from the two field experiments FRAMZY 1999 and FRAMZY 2002 (April 1999 and March 2002) and of simulations with a numerical sea ice model. The simulations, performed with a dynamic-thermodynamic sea ice model, are forced with 6-hourly atmospheric ECMWF analyses (European Centre for Medium-Range Weather Forecasts) and 6-hourly oceanic data from an MPI-OM simulation (Max Planck Institute Ocean Model). A comparison of the observed and simulated variability of the sea ice drift and of the position of the ice edge shows that the chosen model configuration is appropriate for these studies. The seven observed cyclones shift the position of the ice edge by up to 100 km and cause an extensive decrease in sea ice coverage of between 2 % and more than 10 %. The model only simulates this decrease if the ocean current is strongly divergent in the centre of the cyclone. The impact of the ocean current on the divergence and shear deformation of the ice drift is remarkable. Sensitivity studies show that the ocean current at a depth of 6 m, with which the sea ice model is forced, is mainly responsible for the identified differences between simulation and observation. The simulated sea ice transport shows strong variability on time scales from hours to days. Local minima occur in the time series of the ice transport during periods with Fram Strait cyclones. These minima are caused not by the local effect of the cyclone's wind field, but mainly by the large-scale surface pressure pattern. A displacement of the areas of strongest cyclone activity in the Nordic Seas would considerably influence the ice transport.
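The ice-transport time series discussed above is typically diagnosed as a volume flux through a gate of grid cells across the strait. A minimal sketch of that standard diagnostic, with invented gate values rather than the model's actual fields, is:

```python
def ice_volume_flux(concentration, thickness_m, drift_m_s, cell_width_m):
    """Sea ice volume flux (m^3/s) through a gate: sum over grid cells of
    concentration * thickness * drift speed normal to the gate * cell width."""
    return sum(c * h * v * cell_width_m
               for c, h, v in zip(concentration, thickness_m, drift_m_s))

# Hypothetical gate of four 50 km wide cells across a strait.
conc  = [0.9, 0.95, 0.8, 0.3]      # ice concentration (fraction)
thick = [2.5, 2.0, 1.5, 0.8]       # ice thickness (m)
drift = [0.10, 0.12, 0.08, 0.05]   # drift speed through the gate (m/s)
flux = ice_volume_flux(conc, thick, drift, 50_000.0)
```

Evaluating this diagnostic every model time step yields exactly the kind of hours-to-days transport variability, including local minima during cyclone passages, that the study analyses.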

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

Background: Throughout the evolution of dairy cattle breeding, milk production has always been the key criterion for selecting superior animals. Currently, considerable evidence shows that high milk production has contributed substantially to the decline in dairy cattle fertility. Beyond milk production, the reproductive performance of dairy cows is impaired by other factors, such as heat stress and repeat breeding. Methods like fixed-time artificial insemination and embryo transfer were developed to minimize the effects of these factors and improve dairy herd profitability. This review presents some key experiments conducted to improve the efficiency of self-appointed protocols for artificial insemination and embryo transfer in Brazil, overcoming several reproductive problems. Our goal is to develop cheap and easy self-appointed programs that facilitate animal handling and maximize reproductive outcomes throughout the year. Review: Failure in estrus detection is the main limiting factor for the use of artificial insemination in high-production dairy herds. An excellent alternative that overcomes the need for estrus detection is fixed-time artificial insemination. Many protocols, with and without the use of estradiol, have been developed to that end. Among the protocols for fixed-time artificial insemination without estradiol, Double-Ovsynch has recently been used extensively in American dairy herds. In Brazil, it yielded pregnancy rates similar to those of progesterone-estradiol-based protocols for fixed-time artificial insemination. Particularities of progesterone-estradiol-based protocols, such as (1) new progesterone devices versus devices previously used for eight days; (2) different doses of eCG; and (3) the use of estradiol cypionate for fixed-time artificial insemination, have been studied in Brazil. The use of self-appointed artificial insemination also enabled a reduction of the calving-conception interval compared to cows inseminated upon standing estrus.
Regarding the low fertility of repeat breeders and the effect of heat stress in early pregnancy, other methods such as embryo transfer became important tools to enhance the reproductive efficiency of Brazilian dairy herds. Protocols were also developed to allow fixed-time embryo transfer, eliminating the need for estrus detection and improving the reproductive efficiency of lactating recipients. As with fixed-time artificial insemination treatments, there is a large variety of hormone combinations for fixed-time embryo transfer (with and without estradiol). An experiment conducted in Brazil demonstrated that protocols for fixed-time embryo transfer without estradiol can be as effective as those with estradiol in synchronizing high-producing Holstein recipients, particularly during summer. Particularities related to embryo cryopreservation, the synchronization of the estrous cycles of donors and recipients, and the site of embryo release into the uterine horn were also investigated. Greater conception rates were achieved when fresh embryos were transferred compared to frozen-thawed ones. In addition, tight synchronization between donor and recipient (same day of estrus) resulted in more pregnancies than when recipients were one day behind or ahead of the donors. Moreover, the site of embryo release into the uterine horn (ipsilateral to the corpus luteum) had no effect on pregnancy rates after transfer of in vivo-produced embryos. Conclusion: Both fixed-time artificial insemination and fixed-time embryo transfer are important tools to improve the reproductive efficiency of high-producing dairy cows. These biotechnologies help bypass some of the greatest challenges of dairy cattle reproduction: the difficulty of estrus detection and the low fertility associated with heat stress and repeat breeding.

Relevance: 30.00%

Abstract:

Nowadays, L1 SBAS signals can be used in combined GPS+SBAS data processing. However, such processing has so far been restricted to studies over short baselines. Besides increasing satellite availability, the orbit configuration of SBAS satellites differs from that of GPS. In order to analyze how these characteristics can affect GPS positioning in the southeast of Brazil, experiments involving GPS-only and combined GPS+SBAS data were performed. Solutions using single point and relative positioning were computed to show the impact on satellite geometry, positioning accuracy and short-baseline ambiguity resolution. Results showed that the inclusion of SBAS satellites can improve positioning accuracy. Nevertheless, the poor quality of the data broadcast by these satellites limits their usage. © Springer-Verlag Berlin Heidelberg 2012.
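The effect of extra SBAS satellites on satellite geometry mentioned above can be illustrated with a dilution-of-precision (DOP) calculation. The sketch below uses a simplified 2-D, horizontal-only design matrix and invented azimuths, not the actual constellation over southeast Brazil:

```python
import math

def dop_2d(azimuths_deg):
    """Horizontal dilution of precision for a simplified 2-D fix.
    Rows of the design matrix G are unit vectors toward each satellite;
    DOP = sqrt(trace((G^T G)^-1)), here via the closed-form 2x2 inverse."""
    g = [(math.cos(math.radians(az)), math.sin(math.radians(az))) for az in azimuths_deg]
    a = sum(x * x for x, _ in g)          # normal matrix N = G^T G, entries:
    b = sum(x * y for x, y in g)          # N = [[a, b], [b, d]]
    d = sum(y * y for _, y in g)
    det = a * d - b * b
    return math.sqrt((a + d) / det)       # trace(N^-1) = (a + d) / det

gps_only  = [0, 60, 120]                  # hypothetical azimuths of visible GPS satellites
with_sbas = gps_only + [200, 300]         # two extra (e.g. geostationary SBAS) satellites
# DOP drops (geometry improves) when the extra satellites are included.
improvement = dop_2d(gps_only) - dop_2d(with_sbas)
```

A full GNSS DOP computation uses 3-D line-of-sight vectors plus a clock column in a 4-column design matrix; the 2-D version keeps the linear algebra small while showing the same geometric effect.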

Relevance: 30.00%

Abstract:

Background: Although hypercaloric interventions are associated with nutritional, endocrine, metabolic, and cardiovascular disorders in obesity experiments, a rational distinction between the effects of excess adiposity and the individual roles of dietary macronutrients in relation to these disturbances had not previously been made. This investigation analyzed the correlations between ingested macronutrients (including sucrose and saturated and unsaturated fatty acids) plus body adiposity and metabolic, hormonal, and cardiovascular effects in rats with diet-induced obesity. Methods: Normotensive Wistar-Kyoto rats were subjected to control (CD; 3.2 kcal/g) and hypercaloric (HD; 4.6 kcal/g) diets for 20 weeks, followed by a nutritional evaluation involving body weight and adiposity measurements. Metabolic and hormonal parameters included glycemia, insulin, insulin resistance, and leptin. The cardiovascular analysis included the systolic blood pressure profile, echocardiography, a morphometric study of myocardial morphology, and myosin heavy chain (MHC) protein expression. Canonical correlation analysis was used to evaluate the relationships between dietary macronutrients plus adiposity and the metabolic, hormonal, and cardiovascular parameters. Results: Although final group body weights did not differ, HD presented higher adiposity than CD. The diet induced hyperglycemia, while insulin and leptin levels remained unchanged. In the cardiovascular context, systolic blood pressure increased with time only in HD. Additionally, in vivo echocardiography revealed cardiac hypertrophy and improved systolic performance in HD compared to CD; and while cardiomyocyte size was unchanged by diet, nuclear volume and the interstitial collagen fraction both increased in HD. HD also exhibited higher relative β-MHC content and a higher β/α-MHC ratio than their CD counterparts.
Importantly, body adiposity was only weakly associated with the cardiovascular effects: saturated fatty acid intake was directly associated with most cardiac remodeling measurements, while unsaturated lipid consumption was inversely correlated with these effects. Conclusion: The hypercaloric diet was associated with disorders of glycemic metabolism and systolic blood pressure, and with cardiac remodeling. These effects correlated directly with saturated and inversely with unsaturated lipid consumption. © 2013 Oliveira Junior et al.; licensee BioMed Central Ltd.
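The sign pattern reported above (direct for saturated, inverse for unsaturated lipid intake) is the kind of result a correlation analysis yields. The study itself used canonical correlation analysis; a plain Pearson coefficient on invented per-animal values is enough to illustrate the idea:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-animal values (not the study's data):
sat_intake   = [2.1, 2.8, 3.5, 4.0, 4.6]   # saturated fatty acid intake
unsat_intake = [4.5, 4.0, 3.2, 2.9, 2.2]   # unsaturated fatty acid intake
remodeling   = [1.0, 1.3, 1.7, 1.9, 2.4]   # a cardiac remodeling index
r_sat   = pearson_r(sat_intake, remodeling)     # strongly positive (direct)
r_unsat = pearson_r(unsat_intake, remodeling)   # strongly negative (inverse)
```

Canonical correlation analysis generalizes this to whole blocks of variables at once (macronutrients plus adiposity on one side, metabolic/hormonal/cardiovascular measures on the other), which is why the study used it rather than pairwise coefficients.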

Relevance: 30.00%

Abstract:

The sugarcane borer, Diatraea saccharalis (F.) (Lepidoptera: Crambidae), is a key pest of sugarcane (Saccharum spp.). While damage caused by this pest has increased in the past 20 yr, studies investigating the insect-plant interactions are still lacking. Moreover, there is no information about the consequences of borer damage on the parameters of sugar quality. Therefore, two field experiments were performed during the 2010 and 2011 growing seasons in Brazil to compare the raw material and sugar quality of SP80-3280 sugarcane plants with and without the sugarcane borer. Plants were protected within screen cages and infested weekly during the 2010 and 2011 seasons, using egg masses starting at the second and third internode stage. At harvest, 25.77 and 19.01% of the internodes were bored by larvae (infestation intensity, II) in the first and second seasons, respectively. There was no correlation between the borer gallery total volume and II. The fiber content significantly increased with increasing II. The stalk biometric parameters, such as length, diameter, and yield, were not correlated with II. The sucrose yield significantly decreased with increasing II. Consequently, sugar yield losses were estimated at 8.83 and 19.80% per 1% bored internode for the first and second seasons, respectively. The concentration of phenolic compounds increased, and unclarified juice color quality decreased, with increasing II. Significant differences were detected in the quality of the sugar. These results should be confirmed for other sugarcane cultivars and incorporated into an economic injury level to enhance decision-making strategies for borer management. © 2013 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved.

Relevance: 30.00%

Abstract:

Pós-graduação em Ciências da Motricidade - IBRC

Relevance: 30.00%

Abstract:

Composites are engineered materials that take advantage of the particular properties of each of their two or more constituents. They are designed to be stronger and lighter and to last longer, which can lead to safer protective gear, more fuel-efficient transportation and more affordable materials, among other examples. This thesis proposes a numerical and analytical verification of an in-house-developed multiscale model for predicting the mechanical behavior of composite materials with various configurations subjected to impact loading. The verification is done by comparing analytical and numerical solutions with the results obtained with the model. The model takes into account the heterogeneity of the material that is only apparent at smaller length scales, based on the fundamental structural properties of each of the composite's constituents. Because it relies strictly on those fundamental properties, the model can potentially reduce or eliminate the need for the costly and time-consuming experiments otherwise required for material characterization. The results from simulations using the multiscale model were compared against results from direct simulations using over-killed meshes, which resolved all heterogeneities explicitly at the global scale, indicating that the model is an accurate and fast tool for modelling composites under impact loads. Advisor: David H. Allen
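The micro-to-macro property estimation that a multiscale model formalizes can be illustrated, in its very simplest form, by the classical rule-of-mixtures bounds for a unidirectional composite. This is a textbook sketch with typical literature values, not the thesis's model:

```python
def rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """Voigt (parallel, iso-strain) estimate of the longitudinal modulus (GPa)."""
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

def inverse_rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """Reuss (series, iso-stress) estimate of the transverse modulus (GPa)."""
    return 1.0 / (v_fiber / e_fiber + (1.0 - v_fiber) / e_matrix)

# Typical carbon fibre (~230 GPa) in epoxy (~3.5 GPa) at 60% fibre volume fraction.
e_long  = rule_of_mixtures(230.0, 3.5, 0.6)          # upper (Voigt) bound
e_trans = inverse_rule_of_mixtures(230.0, 3.5, 0.6)  # lower (Reuss) bound
```

A full multiscale model replaces these closed-form bounds with a resolved simulation of the microstructure, but both start from the same premise the abstract states: the macroscopic response follows from the fundamental properties of the constituents.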

Relevance: 30.00%

Abstract:

Abstract. This thesis discusses a few specific topics regarding the low-velocity impact behaviour of laminated composites. These topics were chosen both for their significance and for the relatively limited attention they have received so far from the scientific community. The first issue considered is the comparison between the effects induced by a low-velocity impact test and by a quasi-static indentation test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations with a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indentor) was higher in the case of impact, suggesting probably more severe fibre damage than in indentation. Secondly, the effect of specimen dimensions and boundary conditions on the impact response was examined. Experimental testing showed that the relationships of the delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions or on the support conditions of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study of the compressive behaviour of impact-damaged laminates is also presented. Unlike most of the contributions available on this subject, the results of compression-after-impact tests on thin laminates are described, in which global specimen buckling was not prevented.
Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas it did not affect the compressive strength in every case, depending on the buckling shape. Some considerations about the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including both tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases. On the other hand, a compression preload exhibits the most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility to obtain a better explanation of the experimental results described in the literature, in view of the present findings, is highlighted. 
Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis can yield accurate results as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and also in distinguishing between what can be modelled without taking into account the material degradation and what requires an appropriate simulation of damage.