976 results for Monte-carlo Simulations
Abstract:
The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and have no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between the local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed, in which the overall detection delay is minimized subject to constraints on both the error probabilities and the communication cost. Two types of problems are investigated under this communication-efficient formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, in which the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the statistics reported by the local sensors but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. Decentralized change detection with a communication cost constraint is also investigated. A person-by-person optimum change detection algorithm is proposed, in which transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, in which the threshold values are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in the parameter choices are investigated through Monte Carlo simulations.
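The fusion rule above builds on Wald's sequential probability ratio test. As a point of reference only, the sketch below implements a plain, centralized SPRT for a Gaussian shift-in-mean problem; the hypotheses, thresholds and observation model are illustrative assumptions, and the communication-constrained, person-by-person optimum scheme of the thesis is not reproduced.

```python
import numpy as np

def sprt_gaussian(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Plain Wald SPRT for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2).

    Returns (decision, number_of_samples_used). Thresholds use Wald's
    approximations A = (1 - beta) / alpha and B = beta / (1 - alpha).
    """
    log_A = np.log((1 - beta) / alpha)   # accept H1 when the LLR reaches log_A
    log_B = np.log(beta / (1 - alpha))   # accept H0 when the LLR drops to log_B
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= log_A:
            return "H1", n
        if llr <= log_B:
            return "H0", n
    return "undecided", len(samples)

# Example: average detection delay under H1, estimated by Monte Carlo.
rng = np.random.default_rng(0)
delays = [sprt_gaussian(rng.normal(1.0, 1.0, 10_000))[1] for _ in range(1_000)]
print("mean sample number under H1:", np.mean(delays))
```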
Abstract:
This study considers a dual-hop cognitive inter-vehicular relay-assisted communication system in which all
communication links are non-line-of-sight and their fading is modelled by the double Rayleigh distribution.
Road-side relays (or access points) implementing the decode-and-forward relaying protocol are employed and one of
them is selected according to a predetermined policy to enable communication between vehicles. The performance of
the considered cognitive cooperative system is investigated for Kth best partial and full relay selection (RS) as well as
for two distinct fading scenarios. In the first scenario, all channels are double Rayleigh distributed. In the second
scenario, only the secondary source-to-relay and relay-to-destination channels are considered to be subject to double
Rayleigh fading, whereas the channels between the secondary transmitters and the primary user are modelled by the
Rayleigh distribution. Exact and approximate expressions for the outage probability performance for all considered RS
policies and fading scenarios are presented. In addition to the analytical results, complementary computer simulated
performance evaluation results have been obtained by means of Monte Carlo simulations. The perfect match between
these two sets of results has verified the accuracy of the proposed mathematical analysis.
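As a rough companion to the analysis, the sketch below estimates the outage probability of a single double-Rayleigh link by Monte Carlo simulation; the average SNR, outage threshold and number of trials are illustrative assumptions, and the relay-selection policies and spectrum-sharing constraints of the paper are not modelled.

```python
import numpy as np

rng = np.random.default_rng(42)

def double_rayleigh_gain(n, omega1=1.0, omega2=1.0):
    """Channel power gain |h|^2 for a double-Rayleigh (cascaded) link.

    The envelope is the product of two independent Rayleigh envelopes with
    average powers omega1 and omega2.
    """
    h1 = np.sqrt(omega1 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    h2 = np.sqrt(omega2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(h1 * h2) ** 2

# Outage: the received SNR falls below the threshold gamma_th.
snr_db, gamma_th_db, trials = 10.0, 3.0, 1_000_000
snr, gamma_th = 10 ** (snr_db / 10), 10 ** (gamma_th_db / 10)
outage = np.mean(snr * double_rayleigh_gain(trials) < gamma_th)
print(f"estimated outage probability: {outage:.4f}")
```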
Abstract:
Purpose: The purpose of this work is to investigate the radiosensitizing effect of gold nanoparticle (GNP) induced vasculature damage for proton, megavoltage (MV) photon, and kilovoltage (kV) photon irradiation. Methods: Monte Carlo simulations were carried out using the TOol for PArticle Simulation (TOPAS) to obtain the spatial dose distribution in close proximity (up to 20 µm) to the GNPs. The spatial dose distribution from the GNPs was used as an input to calculate the dose deposited to the blood vessels. GNP induced vasculature damage was evaluated for three particle sources (a clinical spread-out Bragg peak proton beam, a 6 MV photon beam, and two kV photon beams). For each particle source, various depths in tissue, GNP sizes (2, 10, and 20 nm diameter), and vessel diameters (8, 14, and 20 µm) were investigated. Two GNP distributions in the lumen were considered: homogeneously distributed in the vessel or attached to the inner wall of the vessel. Doses of 30 Gy and 2 Gy were considered, representing typical in vivo enhancement studies and conventional clinical fractionation, respectively. Results: These simulations showed that for a 20 mg-Au/g GNP blood concentration homogeneously distributed in the vessel, the additional dose at the inner vascular wall encircling the lumen was 43% of the prescribed dose at the depth of treatment for the 250 kVp photon source, 1% for the 6 MV photon source, and 0.1% for the proton beam. For kV photons, GNPs caused 15% more dose in the vascular wall for the 150 kVp source than for the 250 kVp source. For 6 MV photons, GNPs caused 0.2% more dose in the vascular wall at 20 cm depth in water than at the depth of maximum dose (Dmax). For proton therapy, GNPs caused the same dose in the vascular wall at all depths across the spread-out Bragg peak with a 12.7 cm range and 7 cm modulation. For the same weight of GNPs in the vessel, 2 nm diameter GNPs caused three times more damage to the vessel than 20 nm diameter GNPs. When the GNPs were attached to the inner vascular wall, the damage to the inner vascular wall reached up to 207% of the prescribed dose for the 250 kVp photon source, 4% for the 6 MV photon source, and 2% for the proton beam. Even though the average dose increase from the proton beam and the MV photon beam was not large, there were high dose spikes that elevated the local dose in parts of the blood vessel above 15 Gy even for a 2 Gy prescribed dose, especially when the GNPs can be actively targeted to the endothelial cells. Conclusions: GNPs can potentially be used to enhance radiation therapy by causing vasculature damage through the high dose spikes they produce, especially for hypofractionated treatment. If GNPs are designed to actively accumulate at the tumor vasculature walls, vasculature damage can be increased significantly. The largest enhancement is seen using kilovoltage photons, due to the photoelectric effect. Although no significant average dose enhancement was observed for the whole vasculature structure for either MV photons or protons, both can cause high local dose escalation (>15 Gy) in areas of the blood vessel, which can potentially contribute to disrupting the functionality of the blood vessels in the tumor.
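The vessel-wall dose calculation described above amounts to superposing a GNP radial dose kernel over the nanoparticle positions. The sketch below illustrates that superposition step only; the 1/r^2 kernel, vessel dimensions and particle count are hypothetical placeholders for the TOPAS-derived dose profiles used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def wall_dose(n_gnp=5_000, vessel_radius_um=7.0, segment_len_um=20.0,
              kernel=lambda r: 1.0 / np.maximum(r, 0.05) ** 2):
    """Superpose a radial dose kernel from GNPs placed uniformly in the lumen.

    Returns the relative dose at evenly spaced points on the inner vessel wall
    (arbitrary units); the 1/r^2 kernel is only a stand-in for a simulated
    radial dose profile around a nanoparticle.
    """
    # Uniform positions inside a cylinder of radius R and length L.
    r = vessel_radius_um * np.sqrt(rng.random(n_gnp))
    phi = 2 * np.pi * rng.random(n_gnp)
    z = segment_len_um * rng.random(n_gnp)
    gnp = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

    # Scoring points on the inner wall (a ring at mid-segment).
    theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    wall = np.column_stack([vessel_radius_um * np.cos(theta),
                            vessel_radius_um * np.sin(theta),
                            np.full_like(theta, segment_len_um / 2)])

    dist = np.linalg.norm(wall[:, None, :] - gnp[None, :, :], axis=2)
    return kernel(dist).sum(axis=1)

print("mean relative wall dose:", wall_dose().mean())
```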
Abstract:
In this paper, we consider the transmission of confidential information over a κ-μ fading channel in the presence of an eavesdropper who also experiences κ-μ fading. In particular, we obtain novel analytical solutions for the probability of strictly positive secrecy capacity (SPSC) and a lower bound on the secure outage probability (SOPL) for independent and non-identically distributed channel coefficients without parameter constraints. We also provide a closed-form expression for the probability of SPSC when the μ parameter is assumed to take positive integer values. Monte-Carlo simulations are performed to verify the derived results. The versatility of the κ-μ fading model means that the results presented in this paper can be used to determine the probability of SPSC and the SOPL for a large number of other fading scenarios, such as Rayleigh, Rice (Nakagami-n), Nakagami-m, One-Sided Gaussian, and mixtures of these common fading models. In addition, due to the duality of the analysis of secrecy capacity and co-channel interference (CCI), the results presented here have immediate applicability in the analysis of outage probability in wireless systems affected by CCI and background noise (BN). To demonstrate the efficacy of the novel formulations proposed here, we use the derived equations to provide useful insight into the probability of SPSC and the SOPL for a range of emerging wireless applications, such as cellular device-to-device, peer-to-peer, vehicle-to-vehicle, and body-centric communications, using data obtained from real channel measurements.
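For integer μ, a κ-μ envelope can be simulated directly from its physical model (μ clusters of Gaussian in-phase/quadrature components around a dominant term), which makes a Monte Carlo check of the probability of SPSC straightforward. The sketch below is such a check under illustrative SNRs and fading parameters; it estimates P(γ_M > γ_E) by simulation and does not reproduce the paper's closed-form expressions.

```python
import numpy as np

rng = np.random.default_rng(7)

def kappa_mu_power(n, kappa, mu, omega=1.0):
    """Instantaneous power of a kappa-mu fading envelope (integer mu).

    Built from mu pairs of Gaussian in-phase/quadrature components with a
    deterministic (dominant) offset, so that E[power] = omega and the ratio
    of dominant to scattered power equals kappa.
    """
    sigma2 = omega / (2 * mu * (1 + kappa))   # scattered power per component
    d2 = kappa * omega / (1 + kappa)          # total dominant power
    p = np.sqrt(d2 / (2 * mu))                # per-component dominant offset
    x = rng.normal(p, np.sqrt(sigma2), size=(n, mu))
    y = rng.normal(p, np.sqrt(sigma2), size=(n, mu))
    return (x**2 + y**2).sum(axis=1)

# Probability of strictly positive secrecy capacity: P(gamma_M > gamma_E).
n = 1_000_000
snr_main_db, snr_eve_db = 10.0, 5.0          # illustrative average SNRs
gamma_m = 10 ** (snr_main_db / 10) * kappa_mu_power(n, kappa=2.0, mu=2)
gamma_e = 10 ** (snr_eve_db / 10) * kappa_mu_power(n, kappa=1.0, mu=1)
print("P(SPSC) ≈", np.mean(gamma_m > gamma_e))
```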
Abstract:
The measurement of fast-changing temperature fluctuations is a challenging problem due to the inherently limited bandwidth of temperature sensors. This results in a measured signal that is a lagged and attenuated version of the input. Compensation can be performed provided an accurate, parameterised sensor model is available. However, to account for the influence of the measurement environment and changing conditions such as gas velocity, the model must be estimated in-situ. The cross-relation method of blind deconvolution is one approach for in-situ characterisation of sensors. However, a drawback of the method is that it becomes positively biased and unstable at high noise levels. In this paper, the cross-relation method is cast in the discrete-time domain and a bias-compensation approach is developed. It is shown that the proposed compensation scheme is robust and yields unbiased estimates with lower estimation variance than the uncompensated version. All results are verified using Monte-Carlo simulations.
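For orientation, the sketch below shows the basic discrete-time cross-relation estimator in its least-squares form (the smallest right singular vector of the stacked convolution matrices), applied to noiseless synthetic data; the two-tap sensor models are illustrative assumptions, and the bias-compensation scheme proposed in the paper is not included.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv_matrix(y, L):
    """Full convolution matrix C such that C @ h equals np.convolve(y, h)."""
    N = len(y)
    C = np.zeros((N + L - 1, L))
    for k in range(L):
        C[k:k + N, k] = y
    return C

def cross_relation_estimate(y1, y2, L):
    """Estimate two channel impulse responses (up to a common scale) from the
    cross-relation y2 * h1 = y1 * h2, via the smallest right singular vector
    of [conv(y2), -conv(y1)]."""
    A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
    _, _, vt = np.linalg.svd(A)
    theta = vt[-1]                 # minimiser of ||A theta|| with ||theta|| = 1
    return theta[:L], theta[L:]

# Synthetic example: one input measured through two 2-tap sensor models.
u = rng.standard_normal(2_000)
h1_true = np.array([1.0, 0.6])
h2_true = np.array([1.0, 0.3])
y1 = np.convolve(u, h1_true)       # noiseless sensor outputs
y2 = np.convolve(u, h2_true)
h1_hat, h2_hat = cross_relation_estimate(y1, y2, L=2)
print("h1 estimate (scaled):", h1_hat / h1_hat[0])
print("h2 estimate (scaled):", h2_hat / h2_hat[0])
```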
Abstract:
Loess is the most important collapsible soil; possibly the only engineering soil in which real collapse occurs. A real collapse involves a diminution in volume: an open, metastable packing is reduced to a more closely packed, more stable structure. Metastability is at the heart of the collapsible-soils problem. To envisage and to model the collapse process in a metastable medium, knowledge is required about the nature and shape of the particles, the types of packings they assume (real and ideal), and the nature of the collapse process - a packing transition upon a change to the effective stress in a medium of double porosity. Particle packing science has made little progress in the geoscience discipline since the initial packing paradigms set by Graton and Fraser (1935), but it is relatively well established in soft-matter physics. The collapse process can be represented by mathematical modelling of packing - including Monte Carlo simulations - but relating representation to process remains difficult. This paper revisits the problem of sudden packing transition from a micro-physico-mechanical viewpoint (i.e. collapse in terms of structure-based effective stress). This cross-disciplinary approach allows a generalization about collapsible soils to be made: loess is the only truly collapsible soil, because only loess is so totally influenced by the packing essence of the formation process.
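As a toy illustration of the kind of Monte Carlo packing model referred to above (and not the modelling used in the paper), the sketch below estimates the area fraction reached by random sequential addition of equal discs, one simple way of generating an open, random packing.

```python
import numpy as np

rng = np.random.default_rng(5)

def rsa_packing_fraction(radius=0.05, attempts=20_000, box=1.0):
    """Random sequential addition of equal discs in a square box.

    Candidate discs are dropped uniformly at random and accepted only if they
    do not overlap any previously accepted disc; the accepted area fraction is
    a crude proxy for an open packing produced by a random process.
    """
    centres = np.empty((0, 2))
    for _ in range(attempts):
        p = radius + rng.random(2) * (box - 2 * radius)
        if centres.size == 0 or np.min(np.linalg.norm(centres - p, axis=1)) >= 2 * radius:
            centres = np.vstack([centres, p])
    return len(centres) * np.pi * radius**2 / box**2

print("RSA packing fraction ≈", rsa_packing_fraction())
```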
Abstract:
In this thesis the low-temperature magnetism of the spin-ice systems Dy2Ti2O7 and Ho2Ti2O7 is investigated. In general, clear experimental evidence for a sizable magnetic contribution kappa_{mag} to the low-temperature, zero-field heat transport of both spin-ice materials is observed. This kappa_{mag} can be attributed to the magnetic monopole excitations, which are highly mobile in zero field and are suppressed by a rather small external field, resulting in a drop of kappa(H). Towards higher magnetic fields, significant field dependencies of the phononic heat conductivities kappa_{ph}(H) of Ho2Ti2O7 and Dy2Ti2O7 are found, which are, however, of opposite signs, as is also found for the highly dilute reference materials (Ho0.5Y0.5)2Ti2O7 and (Dy0.5Y0.5)2Ti2O7. The dominant effect in the Ho-based materials is the scattering of phonons by spin flips, which appears to be significantly stronger than in the Dy-based materials. In the Dy-based materials, the thermal conductivity is instead suppressed by enhanced lattice distortions, as observed in the magnetostriction. Furthermore, the thermal conductivity of Dy2Ti2O7 has been investigated with respect to strong hysteresis effects and slow relaxation processes towards equilibrium states in the low-temperature and low-field regime. The thermal conductivity in the hysteretic regions slowly relaxes towards larger values, suggesting an additional suppression of the heat transport by disorder in the non-equilibrium states. The equilibration can even be governed by the heat current for particular configurations. A special focus was put on the dilution series (Dy1-xYx)2Ti2O7. From specific heat measurements, it was found that the ultra-slow thermal equilibration in pure spin ice Dy2Ti2O7 is rapidly suppressed upon dilution with non-magnetic yttrium and vanishes completely for x>=0.2 down to the lowest accessible temperatures. In general, the low-temperature entropy of (Dy1-xYx)2Ti2O7 considerably decreases with increasing x, whereas its temperature dependence drastically increases. Thus, it could be clarified that there is no experimental evidence for a finite zero-temperature entropy in (Dy1-xYx)2Ti2O7 for x>=0.2, in clear contrast to the finite residual entropy S_{P}(x) expected from a generalized Pauling approximation. A similar discrepancy is also present between S_{P}(x) and the low-temperature entropy obtained by Monte Carlo simulations, which reproduce the experimental data from 25 K down to 0.7 K, whereas the data at 0.4 K are overestimated. A straightforward description of the field dependence kappa(H) of the dilution series with qualitative models justifies the extraction of kappa_{mag}. It was observed that kappa_{mag} systematically scales with the degree of dilution and that its low-field decrease is related to the monopole excitation energy. The diffusion coefficient D_{mag} for the monopole excitations was calculated by means of c_{mag} and kappa_{mag}. It exhibits a broad maximum around 1.6 K and is suppressed both for T<=0.5 K, indicating a non-degenerate ground state in the long-time limit, and in the high-temperature range T>=4 K, where spin-ice physics is eliminated. A mean-free path of 0.3 µm is obtained for Dy2Ti2O7 at about 1 K within the kinetic gas theory.
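The diffusion coefficient and mean-free path quoted above follow from standard kinetic-theory relations; as a reminder, and assuming the usual three-dimensional prefactor and a characteristic velocity v (the thesis's exact conventions may differ):

```latex
\kappa_{\mathrm{mag}} = c_{\mathrm{mag}}\, D_{\mathrm{mag}}
\;\;\Rightarrow\;\;
D_{\mathrm{mag}} = \frac{\kappa_{\mathrm{mag}}}{c_{\mathrm{mag}}},
\qquad
\kappa = \tfrac{1}{3}\, c\, v\, \ell
\;\;\Rightarrow\;\;
\ell = \frac{3\kappa}{c\, v}.
```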
Abstract:
This master's thesis is a study of the interaction probabilities (cross sections) of low-energy electrons with a molecule of biological interest. The molecule is tetrahydrofuran (THF), which is a good model of deoxyribose, the molecule forming the backbone of DNA. Given the large number of secondary electrons released as radiation passes through biological matter, and knowing that these electrons deposit the majority of the energy, the study of their interactions with the molecules constituting DNA quickly becomes of great importance. The cross-section measurements are made with a high-resolution electron energy-loss spectrometer. The electron energy-loss spectra obtained with this instrument allow the cross-section values for each vibration to be calculated as a function of the incident electron energy. The article presented in this thesis deals with these measurements and results: it presents and explains the experimental conditions in detail, describes the deconvolution method used to obtain the cross-section values, and presents and discusses the four resonances observed in the energy dependence of the cross sections. This study located four resonances in energy, all of which have been confirmed by previous experimental and theoretical research on collisions of slow electrons with THF. However, these resonances had never been observed simultaneously in a single study, and the resonance found at low energy had never been observed with as much intensity as in the present study. This study has therefore refined our fundamental understanding of the resonant processes involved in collisions of secondary electrons with THF. The cross-section values themselves are highly valued by theoreticians and are needed for Monte Carlo simulations to predict, for example, the number of ions formed after the passage of radiation. These values can be used in models of energy distribution and deposition at the nanoscopic level in biological media, which could eventually improve the efficiency of radiotherapy modalities.
Abstract:
In recent years, balanced sampling techniques have seen renewed interest. These techniques make it possible to reproduce the structure of the population within the samples in order to improve the efficiency of the estimates. This structure is reproduced by introducing constraints into the sampling designs. New balanced sampling procedures have been proposed recently, notably the cube method presented by Deville and Tillé (2004) and the rejective algorithm of Fuller (2009). While the former is an exact selection method, the latter is an approximate approach that allows a certain tolerance in the selection. After a brief presentation of these two methods in the context of an angler survey, we compare, using Monte Carlo simulations, the sampling designs produced by the two methods. This is also an opportunity to check whether these methods modify the selection probabilities of the units.
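For orientation, the sketch below implements a simple rejective scheme in the spirit of Fuller (2009): simple random samples of fixed size are drawn and kept only when the Horvitz-Thompson estimate of an auxiliary total falls within a tolerance of the known population total, after which the empirical inclusion probabilities can be compared with the nominal ones. The population, auxiliary variable and tolerance are illustrative assumptions, and the cube method of Deville and Tillé (2004) is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative population with one auxiliary (balancing) variable.
N, n = 500, 50
x = rng.gamma(shape=2.0, scale=10.0, size=N)   # auxiliary variable, known for all units
total_x = x.sum()

def rejective_balanced_sample(tol=0.02, max_tries=10_000):
    """Draw simple random samples until the HT estimate of the auxiliary
    total is within a relative tolerance of the true total."""
    for _ in range(max_tries):
        s = rng.choice(N, size=n, replace=False)
        ht_estimate = x[s].sum() * N / n       # Horvitz-Thompson under SRS
        if abs(ht_estimate - total_x) <= tol * total_x:
            return s
    raise RuntimeError("tolerance too tight for the allotted tries")

# Empirical inclusion probabilities under the rejective design
# (they should stay close to the nominal n / N).
counts = np.zeros(N)
reps = 2_000
for _ in range(reps):
    counts[rejective_balanced_sample()] += 1
print("nominal pi:", n / N, " empirical range:", counts.min() / reps, counts.max() / reps)
```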
Abstract:
Frustrated systems, typically characterized by competing interactions that cannot all be simultaneously satisfied, are ubiquitous in nature and display many rich phenomena and novel physics. Artificial spin ices (ASIs), arrays of lithographically patterned Ising-like single-domain magnetic nanostructures, are highly tunable systems that have proven to be a novel method for studying the effects of frustration and its associated properties. The strength and nature of the frustrated interactions between individual magnets are readily tuned by design, and the exact microstate of the system can be determined by a variety of characterization techniques. Recently, thermal activation of ASI systems has been demonstrated, introducing the spontaneous reversal of individual magnets and allowing for new explorations of novel phase transitions and phenomena using these systems. In this work, we introduce a new, robust material with favorable magnetic properties for studying thermally active ASI and use it to investigate a variety of ASI geometries. We reproduce previously reported perfect ground-state ordering in the square geometry and present studies of the kagome lattice showing the highest degree of ordering yet observed in this fully frustrated system. We consider theoretical predictions of long-range order in ASI and use both our experimental studies and kinetic Monte Carlo simulations to evaluate these predictions. Next, we introduce controlled topological defects into our square ASI samples and observe a new, extended frustration effect in the system. When we introduce a dislocation into the lattice, we still see large domains of ground-state order, but, in every sample, a domain wall containing higher-energy spin arrangements originates from the dislocation, resolving a discontinuity in the ground-state order parameter. Locally, the magnets are unfrustrated, but frustration of the lattice persists due to its topology. We demonstrate the first direct imaging of spin configurations resulting from topological frustration in any system and make predictions about how dislocations could affect properties in numerous materials systems.
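A minimal sketch of the kind of rejection-free kinetic Monte Carlo update used for thermally active Ising-like macrospins is given below; the one-dimensional chain, nearest-neighbour coupling, energy barrier and attempt frequency are illustrative stand-ins and do not represent the dipolar ASI lattices simulated in this work.

```python
import numpy as np

rng = np.random.default_rng(21)

# Illustrative 1-D chain of Ising macrospins with nearest-neighbour coupling J,
# intrinsic barrier E_B and attempt frequency f0 (all in arbitrary units).
L, J, E_B, f0, kT = 64, 1.0, 4.0, 1.0, 1.0
spins = rng.choice([-1, 1], size=L)

def delta_E(i):
    """Energy change if spin i is flipped (periodic boundary conditions)."""
    return 2 * J * spins[i] * (spins[(i - 1) % L] + spins[(i + 1) % L])

def kmc_step():
    """One rejection-free kinetic Monte Carlo step with Arrhenius-type rates
    r_i = f0 * exp(-(E_B + dE_i / 2) / kT); returns the elapsed time."""
    dE = np.array([delta_E(i) for i in range(L)])
    rates = f0 * np.exp(-(E_B + dE / 2) / kT)
    total = rates.sum()
    i = rng.choice(L, p=rates / total)     # which spin flips ...
    spins[i] *= -1
    return rng.exponential(1.0 / total)    # ... and how much time elapses

t = 0.0
for _ in range(5_000):
    t += kmc_step()
print("simulated time:", t, " magnetisation:", spins.mean())
```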
Abstract:
In 2014, the Australian Government implemented the Emissions Reduction Fund (ERF) to offer incentives for businesses to reduce greenhouse gas (GHG) emissions by following approved methods. Beef cattle businesses in northern Australia can participate by applying the 'reducing GHG emissions by feeding nitrates to beef cattle' method and the 'beef cattle herd management' method. The nitrate (NO3) method requires that each baseline area demonstrate a history of urea use. Projects earn Australian carbon credit units (ACCU) for reducing enteric methane emissions by substituting NO3 for urea at the same amount of fed nitrogen. NO3 must be fed in the form of a lick block because most operations do not have the labour or equipment to manage daily supplementation. NO3 concentrations, after a 2-week adaptation period, must not exceed 50 g NO3/adult animal equivalent per day or 7 g NO3/kg dry matter intake per day, to reduce the risk of NO3 toxicity. The 'beef cattle herd management' method, approved in 2015, covers activities that improve the herd emission intensity (emissions per unit of product sold) through changes in diet or management. The present study was conducted to compare the required ACCU or supplement prices for a 2% return on capital when feeding a low or high concentration of (1) urea, (2) three different forms of NO3 or (3) cottonseed meal (CSM) to breeding stock, at N concentrations equivalent to 25 or 50 g urea/animal equivalent, to hasten steer entry to a feedlot (backgrounding), in a typical breeder herd on the coastal speargrass land types of central Queensland. Monte Carlo simulations were run using the @RISK software, with probability distributions used for (1) urea, NO3 and CSM prices, (2) GHG mitigation, (3) livestock prices and (4) the carbon price. Increasing the weight of steers at a set turnoff month by feeding CSM was found to be the most cost-effective option, with or without including the offset income. The required ACCU prices for a 2% return on capital were an order of magnitude higher than indicative carbon prices in 2015 for the three forms of NO3. The likely costs of participating in ERF projects would reduce the return on capital for all mitigation options. © CSIRO 2016.
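The structure of such a simulation can be sketched as follows: sample the uncertain inputs from assumed distributions and compute, for each draw, the ACCU price at which the project just meets the 2% return on capital. All distributions and figures below are hypothetical placeholders; the study itself used @RISK with distributions fitted to supplement prices, GHG mitigation, livestock prices and the carbon price.

```python
import numpy as np

rng = np.random.default_rng(2016)
n = 100_000

# All figures below are hypothetical placeholders, not the study's inputs.
capital = 2.0e6                                     # herd capital, $ (assumed)
target_return = 0.02 * capital                      # 2 % return on capital
supplement_cost = rng.triangular(40_000, 55_000, 75_000, n)         # $/year
extra_livestock_income = rng.triangular(20_000, 35_000, 50_000, n)  # $/year
abatement = rng.triangular(300, 450, 600, n)        # t CO2-e/year mitigated

# ACCU price at which the project just achieves the target return.
required_accu_price = (target_return + supplement_cost
                       - extra_livestock_income) / abatement

print("median required ACCU price: $%.0f /t CO2-e" % np.median(required_accu_price))
print("90%% interval: $%.0f - $%.0f" % tuple(np.percentile(required_accu_price, [5, 95])))
```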
Abstract:
People, animals and the environment can be exposed to multiple chemicals at once from a variety of sources, but current risk assessment is usually carried out for one chemical substance at a time. In human health risk assessment, ingestion of food is considered a major route of exposure to many contaminants, among them mycotoxins, a wide group of fungal secondary metabolites that are known to potentially cause toxic and carcinogenic outcomes. Mycotoxins are commonly found in a variety of foods, including those intended for consumption by infants and young children, and have been found in processed cereal-based foods available on the Portuguese market. The use of mathematical models, including probabilistic approaches using Monte Carlo simulations, is a prominent issue in human health risk assessment in general and in mycotoxin exposure assessment in particular. The present study aims to characterize, for the first time, the risk associated with the exposure of Portuguese children to single and multiple mycotoxins present in processed cereal-based foods (CBF). Food consumption data for Portuguese children (0-3 years old, n=103) were collected using a 3-day food diary. Contamination data concerned the quantification of 12 mycotoxins (aflatoxins, ochratoxin A, fumonisins and trichothecenes) in 20 CBF samples marketed in 2014 and 2015 in Lisbon; samples were analyzed by HPLC-FLD, LC-MS/MS and GC-MS. Daily exposure of children to mycotoxins was estimated using deterministic and probabilistic approaches. Different strategies were used to treat the left-censored data. For the aflatoxins, as carcinogenic compounds, the margin of exposure (MoE) was calculated as the ratio of the BMDL (benchmark dose lower confidence limit) to the aflatoxin exposure; the magnitude of the MoE gives an indication of the risk level. For the remaining mycotoxins, the estimated exposure was compared to the tolerable daily intake (TDI) reference values in order to calculate hazard quotients (HQ, the ratio between exposure and the reference dose). For the cumulative risk assessment of multiple mycotoxins, the concentration addition (CA) concept was used: the combined margin of exposure (MoET) and the hazard index (HI) were calculated for the aflatoxins and the remaining mycotoxins, respectively. 71% of the analyzed CBF samples were contaminated with mycotoxins (at levels below the legal limits) and approximately 56% of the studied children consumed CBF at least once during the 3 days. Preliminary results showed that children's exposure to single mycotoxins present in CBF was below the TDI. The aflatoxin MoE and MoET values (around 10000 or more) indicated a reduced potential risk from exposure through consumption of CBF. HQ and HI values for the remaining mycotoxins were below 1. Children are a population group particularly vulnerable to food contaminants, and the present results point to an urgent need to establish legal limits and control strategies regarding the presence of multiple mycotoxins in children's foods in order to protect their health. The development of packaging materials with antifungal properties is a possible solution to control the growth of moulds and consequently reduce mycotoxin production, contributing to the quality and safety of foods intended for consumption by children.
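A minimal sketch of these risk metrics is given below, using hypothetical exposure distributions and reference values purely for illustration (they are not the study's data): the margin of exposure MoE = BMDL / exposure for the genotoxic aflatoxins, and the hazard quotient HQ = exposure / TDI with the hazard index HI = sum of HQs under the concentration-addition assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
n_children = 10_000    # simulated exposure distribution (illustrative only)

# Hypothetical daily exposures in ng/kg bw/day (lognormal stand-ins for the
# probabilistic exposure estimates obtained from consumption x contamination).
exposure = {
    "aflatoxin_B1":   rng.lognormal(mean=np.log(0.05),  sigma=0.6, size=n_children),
    "ochratoxin_A":   rng.lognormal(mean=np.log(1.0),   sigma=0.5, size=n_children),
    "deoxynivalenol": rng.lognormal(mean=np.log(200.0), sigma=0.5, size=n_children),
}

# Margin of exposure for the carcinogenic aflatoxins: MoE = BMDL / exposure.
BMDL_afb1 = 400.0            # ng/kg bw/day, illustrative BMDL value
moe = BMDL_afb1 / exposure["aflatoxin_B1"]
print("P5 of aflatoxin MoE:", np.percentile(moe, 5))   # concern grows as MoE shrinks

# Hazard quotients for threshold mycotoxins: HQ = exposure / TDI,
# and hazard index HI = sum of HQs (concentration-addition assumption).
tdi = {"ochratoxin_A": 17.0, "deoxynivalenol": 1000.0}  # ng/kg bw/day, illustrative
hq = {m: exposure[m] / tdi[m] for m in tdi}
hi = sum(hq.values())
print("P95 of HI:", np.percentile(hi, 95))
```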
Abstract:
The response of zooplankton assemblages to variations in the water quality of four man-made lakes, caused by eutrophication and siltation, was investigated by means of canonical correspondence analysis (CCA). Monte Carlo simulations using the CCA eigenvalues as test statistics revealed that changes in zooplankton species composition along the environmental gradients of trophic state and abiogenic turbidity were highly significant. The species Brachionus calyciflorus, Thermocyclops sp. and Argyrodiaptomus sp. were good indicators of eutrophic conditions, while the species Brachionus dolabratus, Keratella tropica and Hexarthra sp. were good indicators of high turbidity due to suspended sediments. The rotifer genus Brachionus was the most species-rich taxon, comprising five species that were associated with different environmental conditions. We therefore tested whether this genus alone could potentially be a better biological indicator of these environmental gradients than the entire zooplankton assemblage or any other random set of five species. The ordination results show that the five Brachionus species alone did not explain the observed pattern of environmental variation better than most random sets of five species. Therefore, this genus could not be selected as a target taxon for more intensive environmental monitoring, as had been previously suggested by Attayde and Bozelli (1998). Overall, our results show that changes in the water quality of man-made lakes in a tropical semi-arid region have significant effects on the structure of zooplankton assemblages, which can potentially affect the functioning of these ecosystems.
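The Monte Carlo test referred to above is a permutation test: the rows of the environmental matrix are shuffled to break the species-environment link and the test statistic is recomputed under each permutation. The sketch below shows that generic procedure with synthetic data; the statistic used here is a crude stand-in, not the CCA eigenvalue of the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1998)

def permutation_p_value(species, env, statistic, n_perm=999):
    """Monte Carlo permutation test of an ordination statistic.

    `statistic(species, env)` should return a scalar (in the study, a CCA
    eigenvalue); rows of `env` are permuted to break the species-environment
    link while preserving both marginal structures.
    """
    observed = statistic(species, env)
    exceed = sum(
        statistic(species, env[rng.permutation(len(env))]) >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)

# Placeholder statistic: the largest squared correlation between any species
# column and any environmental column (a crude stand-in for a CCA eigenvalue).
def max_sq_correlation(species, env):
    r = np.corrcoef(species.T, env.T)[: species.shape[1], species.shape[1]:]
    return np.max(r ** 2)

species = rng.poisson(5.0, size=(20, 8)).astype(float)   # 20 samples x 8 taxa (synthetic)
env = rng.normal(size=(20, 3))                            # 20 samples x 3 gradients
print("p-value:", permutation_p_value(species, env, max_sq_correlation))
```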
Abstract:
A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with High Area-to-Mass Ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the Area-to-Mass Ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane using two different methods. Firstly, the debris is modelled with Finite Element Analysis (FEA) using Bernoulli-Euler theory, called the "Bernoulli model". The Bernoulli model is constructed with beam elements consisting of two nodes, each with six degrees of freedom (DoF). The mass of the membrane is distributed across the beam elements. Secondly, the debris is modelled, based on multibody dynamics theory, as a series of lumped masses connected through flexible joints representing the flexibility of the membrane itself; this is called the "Multibody model". The mass of the membrane, albeit low, is taken into account through the lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rods, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation. Both flexible membrane models are then propagated, together with classical orbital and attitude equations of motion, near the GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth's gravity field, luni-solar gravitational fields and the self-shadowing effect. These results are then compared to two rigid-body models (cannonball and flat rigid plate). In this investigation, when compared with a rigid model, the evolution of the orbital elements of the flexible models shows differences in the inclination and secular eccentricity evolutions, rapid irregular attitude motion and an unstable cross-sectional area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then investigated and compared with the rigid models over 100 days. The simulations show that different initial conditions produce distinct orbital motions, which differ significantly from the orbital motions of both rigid models. Furthermore, this thesis presents a methodology to determine the material dynamic properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high-vacuum chamber (10⁻⁴ mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The first experiment, the free-motion experiment, is a free-vibration test to determine the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber with the motion tracked and captured through high-speed video frames. A Kalman filter technique is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion.
The second experiment, the forced-motion experiment, is performed to determine the deformation characteristics of the object. A high-power spotlight (500-2000 W) is used to illuminate the MLI, and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used to validate the flexible model by comparison with the experimental displacements and natural frequencies.
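As a pointer to how the damping coefficient and natural frequency can be extracted from such a free-vibration trace, the sketch below applies the classical logarithmic-decrement method to a synthetic displacement record; the sampling rate and modal parameters are illustrative assumptions, and the video-tracking and Kalman-filtering stages are not reproduced.

```python
import numpy as np

# Synthetic stand-in for the tracked tip displacement from the free-vibration
# test; in the experiment this trace comes from the high-speed video tracking.
fs = 500.0                                    # sampling rate, Hz (assumed)
t = np.arange(0.0, 3.0, 1.0 / fs)
f_true, zeta_true = 3.2, 0.04                 # values used only to generate test data
w_n = 2.0 * np.pi * f_true
w_d = w_n * np.sqrt(1.0 - zeta_true**2)
x = np.exp(-zeta_true * w_n * t) * np.cos(w_d * t)

# Successive positive peaks (simple interior local-maximum test).
i = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > 0))[0] + 1
T_d = np.mean(np.diff(t[i]))                  # damped period from peak spacing
delta = np.mean(np.log(x[i][:-1] / x[i][1:])) # logarithmic decrement

zeta_hat = delta / np.sqrt(4.0 * np.pi**2 + delta**2)
f_n_hat = 1.0 / (T_d * np.sqrt(1.0 - zeta_hat**2))
print(f"natural frequency ≈ {f_n_hat:.2f} Hz, damping ratio ≈ {zeta_hat:.3f}")
```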