957 results for experimental methods
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and reused in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which makes it easy to model any type of problem and solve it within a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on problems of practical size, thus demonstrating the benefit of integrating metaheuristic concepts into CP-based frameworks.
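The thesis defines SNS inside a full CP solver; purely as an illustration of the slice idea, the following self-contained Python sketch replaces the CP tree search over a neighborhood slice with exhaustive enumeration over a few freed positions of an ATSP tour. All names and parameters here are illustrative, not the thesis' actual implementation.

import itertools
import random

def tour_cost(tour, dist):
    """Total cost of a closed tour under an asymmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sliced_neighborhood_search(tour, dist, slice_size=4, iterations=50, seed=0):
    """Toy SNS-style loop: each iteration frees a random 'slice' of tour
    positions, exhaustively re-optimizes them (standing in for a CP tree
    search over that slice), and keeps the candidate if it improves the
    incumbent."""
    rng = random.Random(seed)
    best = list(tour)
    best_cost = tour_cost(best, dist)
    n = len(best)
    for _ in range(iterations):
        free = sorted(rng.sample(range(n), slice_size))  # positions to relax
        cities = [best[p] for p in free]
        for perm in itertools.permutations(cities):
            cand = list(best)
            for p, c in zip(free, perm):
                cand[p] = c
            cand_cost = tour_cost(cand, dist)
            if cand_cost < best_cost:                    # intensification step
                best, best_cost = cand, cand_cost
    return best, best_cost

if __name__ == "__main__":
    rng = random.Random(1)
    n = 12
    dist = [[0 if i == j else rng.randint(1, 99) for j in range(n)] for i in range(n)]
    best, cost = sliced_neighborhood_search(list(range(n)), dist)
    print(cost, best)

Each iteration freezes most of the incumbent and re-optimizes only the freed slice, reflecting the trade-off mentioned above: small slices intensify the search around the incumbent, while larger or more varied slices diversify it.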
Abstract:
Different types of proteins exist, with diverse functions that are essential for living organisms. An important class of proteins is represented by transmembrane proteins, which are specifically designed to be inserted into biological membranes and to perform very important functions in the cell, such as cell communication and active transport across the membrane. Transmembrane β-barrels (TMBBs) are a sub-class of membrane proteins largely under-represented in structure databases because of the extreme difficulty of experimental structure determination. For this reason, computational tools that are able to predict the structure of TMBBs are needed. In this thesis, two computational problems related to TMBBs were addressed: the detection of TMBBs in large datasets of proteins and the prediction of the topology of TMBB proteins. Firstly, a method for TMBB detection was presented, based on a novel neural network framework for variable-length sequence classification. The proposed approach was validated on a non-redundant dataset of proteins. Furthermore, we carried out genome-wide detection using the entire Escherichia coli proteome. In both experiments, the method significantly outperformed other existing state-of-the-art approaches, reaching a very high positive predictive value (PPV, 92%) and Matthews correlation coefficient (MCC, 0.82). Secondly, a method was also introduced for TMBB topology prediction. The proposed approach is based on grammatical modelling and probabilistic discriminative models for sequence data labeling. The method was evaluated using a newly generated dataset of 38 TMBB proteins obtained from high-resolution data in the PDB. Results have shown that the model is able to correctly predict the topologies of 25 out of 38 protein chains in the dataset. When tested on previously released datasets, the performance of the proposed approach was comparable or superior to the current state of the art in TMBB topology prediction.
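For reference, the two figures of merit quoted above are the standard ones computed from the confusion-matrix counts of true/false positives and negatives (TP, FP, TN, FN):

\mathrm{PPV} = \frac{TP}{TP+FP}, \qquad
\mathrm{MCC} = \frac{TP\,TN - FP\,FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}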
Abstract:
The research field of this thesis is the evaluation of motor variability and the analysis of motor stability for the assessment of fall risk. Since many falls occur during walking, a better understanding of motor stability could lead to the definition of a reliable fall risk index aimed at measuring and assessing the risk of falls in the elderly, in an attempt to prevent traumatic events. Several motor variability and stability measures have been proposed in the literature, but a proper methodological characterization is still lacking. Moreover, the relationship between many of these measures and fall history or fall risk is still unknown, or not completely clear. The aims of this thesis are hence to: i) analyze the influence of experimental implementation parameters on variability/stability measures and understand how variations in these parameters affect the outputs; ii) assess the relationship between variability/stability measures and long- and short-term fall history. Several implementation issues have been addressed. Following the need for a methodological standardization of gait variability/stability measures, highlighted in particular for orbital stability analysis through a systematic review, general indications about the implementation of orbital stability analysis have been given, together with an analysis of the required number of strides and of the test-retest reliability of several variability/stability measures. Indications about the influence of directional changes on the measures have also been provided. The association between the measures and long-/short-term fall history has also been assessed. Of all the analyzed variability/stability measures, multiscale entropy and recurrence quantification analysis showed particularly good results in terms of reliability, applicability and association with fall history. Therefore, these measures should be taken into consideration for the definition of a fall risk index.
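The abstract does not spell these measures out; for orientation, multiscale entropy is conventionally obtained (following Costa et al.) by coarse-graining the stride time series x_i at scale \tau and applying sample entropy at each scale:

y_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} x_i, \qquad
\mathrm{MSE}(\tau) = \mathrm{SampEn}\!\left(m, r, \{y_j^{(\tau)}\}\right)

where m is the embedding dimension and r the tolerance; the dependence of MSE on \tau distinguishes complexity from mere irregularity of the gait series.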
Abstract:
The objective of this thesis is the power transient analysis of experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). Since the JHR materials testing facility is designed to achieve a core thermal power of 100 MW, its large reflector hosts fissile material samples that are irradiated up to a total power of about 3 MW. MADISON devices are expected to attain 130 kW, whereas the ADELINE nominal power is about 60 kW. In addition, MOLFI test samples are envisaged to reach 360 kW in the LEU configuration and up to 650 kW in the HEU configuration. Safety issues concern shutdown transients and require specific verification of how the thermal power of these fissile samples decreases with respect to core kinetics, including the determination of single-device reactivity. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the time-dependent features of the device transients. An innovative methodology is developed in which the flux shape modification during control rod insertions is investigated for its impact on device power through core-reflector coupling coefficients; previous methods, which considered only nominal core-reflector parameters, are thereby improved. Moreover, the effect of delayed emissions is evaluated, in particular the spatial impact on the devices of a diffuse in-core delayed neutron source. Delayed gamma transport related to fission product concentrations is taken into account through depletion calculations of different fuel compositions over an equilibrium cycle. Given accurate device reactivity control, power transients are then computed for every sample according to the envisaged shutdown procedures. The results obtained in this study are intended to provide design feedback and reactor management optimization to the JHR project team. Moreover, the Safety Report is intended to use the present analysis for improved device characterization.
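The abstract gives no equations; as background, shutdown transients of this kind are conventionally driven by the standard point-kinetics model with six delayed-neutron groups,

\frac{dP}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,P(t) + \sum_{i=1}^{6}\lambda_i C_i(t), \qquad
\frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\,P(t) - \lambda_i C_i(t),

with the device power then tied to the core power through a coupling coefficient, schematically P_dev(t) ≈ c(t) P_core(t). The time dependence of c is our shorthand for the flux-shape effect of rod insertion described above, not the thesis' own notation.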
Abstract:
This thesis studies molecular dynamics simulations on two levels of resolution: the detailed level of atomistic simulations, where the motion of explicit atoms in a many-particle system is considered, and the coarse-grained level, where the motion of superatoms composed of up to 10 atoms is modeled. While atomistic models are capable of describing material-specific effects on small scales, the time and length scales they can cover are limited by their computational cost. Polymer systems are typically characterized by effects on a broad range of length and time scales. Therefore it is often impossible to atomistically simulate the processes which determine macroscopic properties in polymer systems. Coarse-grained (CG) simulations extend the range of accessible time and length scales by three to four orders of magnitude. However, no standardized coarse-graining procedure has been established yet. Following the ideas of structure-based coarse-graining, a coarse-grained model for polystyrene is presented. Structure-based methods parameterize CG models to reproduce static properties of atomistic melts, such as radial distribution functions between superatoms or other probability distributions for coarse-grained degrees of freedom. Two enhancements of the coarse-graining methodology are suggested. Correlations between local degrees of freedom are implicitly taken into account by additional potentials acting between neighboring superatoms in the polymer chain. This improves the reproduction of local chain conformations and allows the study of different tacticities of polystyrene. It also gives better control of the chain stiffness, which agrees perfectly with the atomistic model, and leads to a reproduction of experimental results for overall chain dimensions, such as the characteristic ratio, for all tacticities. The second new aspect is the computationally cheap development of nonbonded CG potentials based on the sampling of pairs of oligomers in vacuum. Static properties of polymer melts are thus obtained as predictions of the CG model, in contrast to other structure-based CG models, which are iteratively refined to reproduce reference melt structures. The dynamics of simulations at the two levels of resolution are also compared. The time scales of dynamical processes in atomistic and coarse-grained simulations can be connected by a time scaling factor, which depends on several specific system properties such as molecular weight, density, temperature, and, in mixtures, the other components. In this thesis, the influence of molecular weight in systems of oligomers and the behavior of two-component mixtures are studied. For a system of small additives in a melt of long polymer chains, the temperature dependence of the additive diffusion is predicted and compared to experiments.
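For context, the iterative refinement that the vacuum-sampling approach above avoids is typically iterative Boltzmann inversion, which starts from the potential of mean force of the target radial distribution function g_target(r) and corrects the CG pair potential until the CG melt reproduces that RDF:

V^{(0)}(r) = -k_B T \ln g_{\mathrm{target}}(r), \qquad
V^{(n+1)}(r) = V^{(n)}(r) + k_B T \ln\frac{g^{(n)}(r)}{g_{\mathrm{target}}(r)}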
Abstract:
The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas, and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix family methods and the Semi-Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional, translationally invariant static prestress state. The effect of prestress, residual stress and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersion spectrum due to material damping and geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves, in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
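In the notation common in the SAFE literature, discretizing the cross-section while treating the invariant direction analytically leads, at each frequency ω, to the wavenumber eigenvalue problem

\left[\mathbf{K}_1 + i k\,\mathbf{K}_2 + k^2\,\mathbf{K}_3 - \omega^2\mathbf{M}\right]\mathbf{U} = \mathbf{0},

where the complex wavenumber k yields the phase velocity through its real part and the attenuation of damped guided waves through its imaginary part; the specific matrices depend on the formulation actually adopted in the thesis.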
Abstract:
In this dissertation, the nuclear reactions 25Mg(alpha,n)28Si, 26Mg(alpha,n)29Si and 18O(alpha,n)21Ne are investigated in the astrophysically interesting energy range from E_alpha = 1000 keV to E_alpha = 2450 keV.

The experiments were performed at the Nuclear Structure Laboratory of the University of Notre Dame (USA) with the on-site KN Van de Graaff accelerator. Solid-state targets with evaporated magnesium or anodized oxygen were bombarded with alpha particles, and the released neutrons were studied. For the detection of the released neutrons, a neutron detector based on 3He counter tubes was designed with the aid of computer simulations. Furthermore, owing to the increased occurrence of background reactions, various data-analysis methods were applied.

Finally, the influence of the reactions 25Mg(alpha,n)28Si, 26Mg(alpha,n)29Si and 18O(alpha,n)21Ne on stellar nucleosynthesis is investigated by means of network calculations.
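The measured cross sections σ(E) enter such network calculations through the standard thermally averaged reaction rate per particle pair,

N_A\langle\sigma v\rangle = N_A\sqrt{\frac{8}{\pi\mu}}\,(k_B T)^{-3/2}\int_0^{\infty} \sigma(E)\,E\,e^{-E/k_B T}\,dE,

where μ is the reduced mass of the alpha-target system and T the stellar temperature; the dissertation itself does not reproduce this formula here, it is the conventional link between laboratory data and stellar rates.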
Abstract:
Environmental decay of porous masonry materials, such as brick and mortar, is a widespread problem concerning both new and historic masonry structures. The decay mechanisms are quite complex, depending on several interconnected parameters and on the interaction with the specific micro-climate. Materials undergo aesthetic and substantial changes in character, but while many studies have been carried out, the mechanical aspect has been largely understudied, although it bears real importance from the structural viewpoint. A quantitative assessment of masonry material degradation, and of how it affects the load-bearing capacity of masonry structures, appears to be missing. The research work carried out, limiting the attention to brick masonry, addresses this issue through an experimental laboratory approach via different integrated testing procedures, both non-destructive and mechanical, together with monitoring methods. Attention was focused on the transport of moisture and salts and on the damaging effects caused by the crystallization of two different salts, sodium chloride and sodium sulphate. Many series of masonry specimens, very different in size and purpose, were used to track the damage process from its beginning and to monitor its evolution over a number of years. At the same time, suitable testing techniques (non-destructive, minimally invasive, analytical, and monitoring) were validated for these purposes. The specimens were exposed to different aggressive agents (in terms of type of salt, brine concentration, artificial vs. open-air natural ageing, …), tested by different means (qualitative vs. quantitative, non-destructive vs. mechanical testing, point-wise vs. wide-area, …), and had different sizes (1-, 2-, 3-header-thick walls, full-scale walls vs. small-size specimens, brick columns and triplets vs. small walls, masonry specimens vs. single units of brick and mortar prisms, …). Different advanced testing methods and novel monitoring techniques were applied in an integrated holistic approach for a quantitative assessment of the masonry health state.
Abstract:
In this thesis, the evolution of methods for analyzing techno-social systems is reported through an account of several research experiences undertaken first-hand. The first case presented is a study based on data mining of a word-association dataset named Human Brain Cloud: its validation is addressed and, also through non-trivial modeling, a better understanding of language properties is presented. Then a real complex-system experiment is introduced: the WideNoise experiment, in the context of the EveryAware European project. The project and the course of the experiment are illustrated and the data analysis is presented. Next, the Experimental Tribe platform for social computation is introduced. It has been conceived to help researchers in the implementation of web experiments, and also aims to catalyze the cumulative growth of experimental methodologies and the standardization of the tools cited above. In the last part, three further research experiences which already took place on the Experimental Tribe platform are discussed in detail, from the design of the experiment to the analysis of the results and, eventually, to the modeling of the systems involved. The experiments are: CityRace, about the measurement of human traffic-facing strategies; laPENSOcosì, aimed at unveiling the structure of political opinion; and AirProbe, implemented again within the EveryAware project framework, which consisted in monitoring the shift in air-quality opinion of a community informed about local air pollution. In the end, the evolution of the methods for investigating techno-social systems emerges, together with the opportunities and the threats offered by this new scientific path.
Abstract:
This dissertation demonstrates and improves the predictive power of coupled-cluster theory with respect to the highly accurate calculation of molecular properties. The demonstration is carried out by means of extrapolation and additivity techniques in single-reference coupled-cluster theory, with whose help the existence and structure of previously unknown molecules containing heavy main-group elements are predicted. In particular, the example of cyclic SiS_2, a triatomic molecule with 16 valence electrons, makes clear that the predictive power of theory is nowadays on a par with experiment: theoretical considerations initiated an experimental search for this molecule, which ultimately led to its detection and characterization by rotational spectroscopy. The predictive power of coupled-cluster theory is improved by developing a multireference coupled-cluster method for the calculation of first-order spin-orbit splittings in 2^Pi states. The focus here is on Mukherjee's variant of multireference coupled-cluster theory, but in principle the proposed computational scheme is applicable to all variants. The target accuracy is 10 cm^-1. It is reached with the new method when one- and two-electron effects and, for heavy elements, scalar-relativistic effects are taken into account. In combination with coupled-cluster-based extrapolation and additivity schemes, the method is therefore well suited to the calculation of highly accurate thermochemical data.
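The dissertation abstract does not state which extrapolation scheme is used; a widely used choice for the correlation energy in correlation-consistent basis sets with cardinal number X is the two-point inverse-cubic formula

E_{\mathrm{corr}}(X) = E_{\mathrm{corr}}^{\mathrm{CBS}} + A\,X^{-3}
\;\Longrightarrow\;
E_{\mathrm{corr}}^{\mathrm{CBS}} = \frac{X^3\,E_{\mathrm{corr}}(X) - (X-1)^3\,E_{\mathrm{corr}}(X-1)}{X^3 - (X-1)^3},

to which additivity corrections (higher excitations, core correlation, relativistic effects) are then applied term by term.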
Abstract:
This work presents a numerical validation, against experimental tests, of the finite element model (FEM) of a new-generation wind turbine blade designed by TPI Composites Inc., called BSDS (Blade System Design Study). The research is focused on the finite element (FE) analysis of the BSDS blade and its comparison with experimental data from static and dynamic investigations. The goal of the research is to create a general procedure, based on a finite element model, that can be used to create an accurate digital copy of any kind of blade. The blade prototype was created in SolidWorks, where the blade of the Sandia National Laboratories Blade System Design Study was accurately reproduced. At a later stage, the SolidWorks model was imported into Ansys Mechanical APDL, where the shell geometry was created and modal, static and fatigue analyses were carried out. The outcomes of the FEM analysis were compared with real tests on the BSDS blade at the Clarkson University laboratory, carried out with a new procedure, called Blade Test Facility, which includes different methods for both static and dynamic testing of the wind turbine blade. The outcomes of the FEM analysis reproduce the real behavior of the blade subjected to static loads in a very satisfactory way. A more detailed study of the material properties could improve the accuracy of the analysis.
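For the undamped FE model, the modal step of such an analysis reduces to the standard generalized eigenvalue problem

\left(\mathbf{K} - \omega_i^2\,\mathbf{M}\right)\boldsymbol{\phi}_i = \mathbf{0},

whose natural frequencies f_i = \omega_i / 2\pi and mode shapes \boldsymbol{\phi}_i are the quantities compared against the measured modal data; this is the textbook form, the specific element types and boundary conditions being those chosen in the Ansys model.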
Abstract:
Background There is concern that non-inferiority trials might be deliberately designed to conceal that a new treatment is less effective than a standard treatment. In order to test this hypothesis we performed a meta-analysis of non-inferiority trials to assess the average effect of experimental treatments compared with standard treatments. Methods One hundred and seventy non-inferiority treatment trials published in 121 core clinical journals were included. The trials were identified through a search of PubMed (1991 to 20 February 2009). Combined relative risk (RR) from meta-analysis comparing experimental with standard treatments was the main outcome measure. Results The 170 trials contributed a total of 175 independent comparisons of experimental with standard treatments. The combined RR for all 175 comparisons was 0.994 [95% confidence interval (CI) 0.978–1.010] using a random-effects model and 1.002 (95% CI 0.996–1.008) using a fixed-effects model. Of the 175 comparisons, experimental treatment was considered to be non-inferior in 130 (74%). The combined RR for these 130 comparisons was 0.995 (95% CI 0.983–1.006) and the point estimate favoured the experimental treatment in 58% (n = 76) and standard treatment in 42% (n = 54). The median non-inferiority margin (RR) pre-specified by trialists was 1.31 [inter-quartile range (IQR) 1.18–1.59]. Conclusion In this meta-analysis of non-inferiority trials the average RR comparing experimental with standard treatments was close to 1. The experimental treatments that gain a verdict of non-inferiority in published trials do not appear to be systematically less effective than the standard treatments. Importantly, publication bias and bias in the design and reporting of the studies cannot be ruled out and may have skewed the study results in favour of the experimental treatments. Further studies are required to examine the importance of such bias.
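For illustration only, the following minimal Python sketch shows the standard inverse-variance fixed-effect pooling and DerSimonian-Laird random-effects pooling of per-trial log relative risks, i.e., the kind of combination reported above; the example numbers are made up and are not the trials in the study.

import math

def pool_log_rr(log_rrs, variances):
    """Inverse-variance fixed-effect and DerSimonian-Laird random-effects
    pooling of per-trial log relative risks; returns pooled RRs and a 95% CI
    for the random-effects estimate."""
    k = len(log_rrs)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sw
    # Cochran's Q and the DL estimate of between-trial variance tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    random_ = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    ci = (random_ - 1.96 * se_re, random_ + 1.96 * se_re)
    return math.exp(fixed), math.exp(random_), tuple(math.exp(x) for x in ci)

if __name__ == "__main__":
    # Hypothetical per-trial RRs and variances of log(RR), for illustration only.
    log_rrs = [math.log(r) for r in (0.95, 1.04, 0.99, 1.10, 0.92)]
    variances = [0.010, 0.020, 0.005, 0.030, 0.015]
    print(pool_log_rr(log_rrs, variances))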
Abstract:
ABSTRACT : INTRODUCTION : V2-receptor (V2R) stimulation potentially aggravates sepsis-induced vasodilation, fluid accumulation and microvascular thrombosis. Therefore, the present study was performed to determine the effects of a first-line therapy with the selective V2R-antagonist (Propionyl1-D-Tyr(Et)2-Val4-Abu6-Arg8,9)-Vasopressin on cardiopulmonary hemodynamics and organ function vs. the mixed V1aR/V2R-agonist arginine vasopressin (AVP) or placebo in an established ovine model of septic shock. METHODS : After the onset of septic shock, chronically instrumented sheep were randomly assigned to receive first-line treatment with the selective V2R-antagonist (1 µg/kg per hour), AVP (0.05 µg/kg per hour), or normal saline (placebo, each n = 7). In all groups, open-label norepinephrine was additionally titrated up to 1 µg/kg per minute to maintain mean arterial pressure at 70 ± 5 mmHg, if necessary. RESULTS : Compared to AVP- and placebo-treated animals, the selective V2R-antagonist stabilized cardiopulmonary hemodynamics (mean arterial and pulmonary artery pressure, cardiac index) as effectively and increased intravascular volume, as suggested by higher cardiac filling pressures. Furthermore, left ventricular stroke work index was higher in the V2R-antagonist group than in the AVP group. Notably, metabolic (pH, base excess, lactate concentrations), liver (transaminases, bilirubin) and renal (creatinine and blood urea nitrogen plasma levels, urinary output, creatinine clearance) dysfunctions were attenuated by the V2R-antagonist when compared with AVP and placebo. The onset of septic shock was associated with an increase in AVP plasma levels as compared to baseline in all groups. Whereas AVP plasma levels remained constant in the placebo group, infusion of AVP increased AVP plasma levels up to 149 ± 21 pg/mL. Notably, treatment with the selective V2R-antagonist led to a significant decrease of AVP plasma levels as compared to shock time (P < 0.001) and to both other groups (P < 0.05 vs. placebo; P < 0.001 vs. AVP). Immunohistochemical analyses of lung tissue revealed higher heme oxygenase-1 (vs. placebo) and lower 3-nitrotyrosine concentrations (vs. AVP) in the V2R-antagonist group. In addition, the selective V2R-antagonist slightly prolonged survival (14 ± 1 hours) when compared to AVP (11 ± 1 hours, P = 0.007) and placebo (11 ± 1 hours, P = 0.025). CONCLUSIONS : Selective V2R-antagonism may represent an innovative therapeutic approach to attenuate multiple organ dysfunction in early septic shock.
Abstract:
The advantages, limitations and potential applications of available methods for studying erosion of enamel and dentine are reviewed. Special emphasis is placed on the influence of histological differences between the dental hard tissues and of the stage of the erosive lesion. No method is suitable for all stages of the lesion. Factors determining the applicability of the methods are: the surface condition of the specimen, the type of experimental model, the nature of the lesion, the need for longitudinal measurements and the type of outcome. The most suitable and most widely used methods are: chemical analyses of mineral release and enamel surface hardness for early erosion, and surface profilometry and microradiography for advanced erosion. Morphological changes in eroded dental tissue have usually been characterised by scanning electron microscopy. Novel methods have also been used, but little is known of their potential and limitations. Therefore, there is a need for their further development, evaluation, consolidation and, in particular, validation.
Abstract:
Background: Distraction of the periosteum results in the formation of new bone in the gap between the periosteum and the original bone. We postulate that the use of a barrier membrane would be beneficial for new bone formation in periosteal distraction. Methods: To selectively influence the contribution of the periosteum, a distraction plate with perforations was used alone or covered by a collagen barrier membrane. All animals were subjected to a 7-day latency period and a 10-day distraction period with a rate of 0.1 mm/day. Four animals per group with or without a barrier membrane were sacrificed at 2, 4, and 6 weeks after the end of the distraction. The height of new bone generated relative to the areas bound by the parent bone and the periosteum was determined by histomorphometric methods. Results: New bone was found in all groups. At the periphery of the distraction plate, significant differences in bone height were found between the hinge and the distraction screw for the group without barrier membrane at 2 weeks (0.39 ± 0.19 mm) compared to 4 weeks (0.84 ± 0.44 mm; P = 0.002) and 6 weeks (1.06 ± 0.39 mm; P = 0.004). Differences in maximum bone height with and without a barrier membrane were observed laterally to the distraction plate at 2 weeks (1.22 ± 0.64 versus 0.55 ± 0.14 mm; P = 0.019) and 6 weeks (1.61 ± 0.56 versus 0.73 ± 0.33 mm; P = 0.003) of the consolidation period. Conclusion: Within the limitations of the present study, the application of a barrier membrane may be considered beneficial for new bone formation induced by periosteal distraction.