903 results for planning of experiments


Relevance: 90.00%

Abstract:

Among the scientific objectives addressed by the Radio Science Experiment hosted on board the ESA mission BepiColombo is the retrieval of the rotational state of planet Mercury. Estimates of the obliquity and the libration amplitude have proven fundamental for constraining the interior composition of Mercury. This is accomplished by the Mercury Orbiter Radio science Experiment (MORE) through a close interaction among different payloads, which makes the experiment particularly challenging. The underlying idea is to capture images of the same landmark on the planet's surface at different epochs and to measure the displacement of the identified features with respect to a nominal rotation, from which the rotational parameters can be estimated. Observations must be planned accurately in order to obtain image pairs carrying the highest information content for the subsequent estimation process. This is not a trivial task, especially in light of the several dynamical constraints involved. Another delicate issue is the pattern matching between image pairs, for which the lowest possible correlation errors are desired. The research activity was conducted within the framework of the MORE rotation experiment and addressed the design and implementation of an end-to-end simulator of the experiment, with the final objective of establishing an optimal science planning of the observations. The thesis describes the implementation of the individual modules forming the simulator, along with the simulations performed. Finally, results obtained from the preliminary release of the optimization algorithm are presented; the software is still at a preliminary stage and will be improved and refined in the future, also taking into account the developments of the mission.
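The pattern matching mentioned above is typically carried out by cross-correlating a landmark template from one epoch with a search window from the other; the minimal NumPy sketch below illustrates normalized cross-correlation only in principle (the array names and the brute-force loop are illustrative assumptions, not the simulator's actual implementation).

    import numpy as np

    def normalized_cross_correlation(template, window):
        """Return the NCC map of a landmark template slid over a search window.

        template, window: 2-D float arrays (grayscale patches), with window
        larger than template. The peak of the returned map gives the estimated
        feature displacement between the two epochs.
        """
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.sqrt((t ** 2).sum())
        out = np.zeros((window.shape[0] - th + 1, window.shape[1] - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = window[i:i + th, j:j + tw]
                p = patch - patch.mean()
                denom = np.sqrt((p ** 2).sum()) * t_norm
                out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
        return out

    # Hypothetical usage: the offset of the correlation peak with respect to the
    # nominal (predicted) position feeds the rotational-parameter estimation.
    # peak = np.unravel_index(np.argmax(ncc_map), ncc_map.shape)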

Relevance: 90.00%

Abstract:

In the present work, the bioinformatic methods of homology modelling and molecular modelling were used to predict and analyse the three-dimensional structures of a variety of proteins. Experimental findings from laboratory studies were used to increase the accuracy of the homology models, and the modelling results were in turn used to propose new experiments. Based on the generated models and known crystal structures from the Protein Data Bank (PDB), the structure-function relationship of various tyrosinases was investigated, including the tyrosinase of the bacterium Streptomyces as well as the tyrosinase of the house mouse. The comparative structural analyses of the tyrosinases yielded mechanisms for the monophenol hydroxylase activity of tyrosinases as well as for the import of copper ions into the active site. It could be demonstrated that blockage of the CuA centre is indeed the reason for the different activities of tyrosinases and catechol oxidases. With the mouse tyrosinase, a complete structural model of a mammalian tyrosinase was generated for the first time, capable of explaining the mechanisms of known albino mutations at the molecular level. The insights into the functional importance of particular amino acids, obtained on the basis of the derived 3D model, were tested and confirmed by site-directed mutagenesis of recombinantly produced mouse tyrosinase. Furthermore, the structure of the tyrosinase of the crustacean Palinurus elephas was elucidated by a low-resolution 3D reconstruction from electron-microscopy images. The second major topic comprises the structural analysis of the light-harvesting complexes LHCI-730 and LHCII. In the case of LHCII, the oligomerization state of the LHC molecules could be correlated with discrete conformations of the N-terminus; here too, a combination of homology modelling and an experimental method, electron spin resonance measurement, was employed. Changes in the oligomerization state of LHCII control the energy flow to the photosystems PS I and PS II. Furthermore, a complete model of LHCI-730 was built to investigate the effects of site-directed mutagenesis on the dimerization behaviour. On the basis of this model, the interactions between the monomers Lhca1 and Lhca4 were evaluated and potential binding partners were identified.

Relevance: 90.00%

Abstract:

This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (such as verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L:

G + L + C → S (1)

The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

G + L + S → L' (2)

Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity-management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. The postulation of the Learn-Alpha design rule is then presented. The second chapter outlines the theory that underlies Learn-Alpha and introduces the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
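As a minimal sketch of the acquisition loop in (1) and (2), the Python fragment below fakes the parser and represents lexicon entries as single features; every name is a hypothetical placeholder (the actual prototype is built on an HPSG processing system), and the frequency threshold only illustrates the 'enough input' decision in c).

    # Toy illustration of the schema  G + L + C -> S  followed by  G + L + S -> L'.
    # All types and the `parse` function are hypothetical stand-ins.
    from collections import defaultdict

    def parse(grammar, lexicon, utterance):
        """Stand-in for an HPSG parser: returns (word, feature) pairs observed in
        the analysis of `utterance`. Here it is faked by simple look-up."""
        return [(w, grammar.get(w, lexicon.get(w, "unknown"))) for w in utterance.split()]

    def acquire(grammar, lexicon, corpus, min_count=3):
        """One pass of G + L + C -> S followed by G + L + S -> L'."""
        observations = defaultdict(list)
        for utterance in corpus:                                    # C
            for word, feature in parse(grammar, lexicon, utterance):  # S
                observations[word].append(feature)
        new_lexicon = dict(lexicon)                                 # L'
        for word, feats in observations.items():
            # Only commit a feature once it has been observed 'often enough';
            # revision of falsely acquired entries would also hook in here.
            best = max(set(feats), key=feats.count)
            if feats.count(best) >= min_count:
                new_lexicon[word] = best
        return new_lexicon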

Relevance: 90.00%

Abstract:

The topic of this thesis is the development of experiments behind the gas-filled separator TASCA (TransActinide Separator and Chemistry Apparatus) to study the chemical properties of the transactinide elements. In the first part of the thesis, the electrodeposition of short-lived isotopes of ruthenium and osmium on gold electrodes was studied as a model experiment for hassium. From the literature it is known that the deposition potential of single atoms differs significantly from the potential predicted by the Nernst equation. This shift of the potential depends on the adsorption enthalpy of the deposited element on the electrode material. If adsorption on the electrode material is favoured over adsorption on a surface made of the same element as the deposited atom, the electrode potential is shifted to higher potentials. This phenomenon is called underpotential deposition. Possibilities to automate an electrochemistry experiment behind the gas-filled separator were explored for later studies with transactinide elements. The second part of this thesis concerns the in-situ synthesis of transition-metal carbonyl complexes with nuclear reaction products. Fission products of uranium-235 and californium-249 were produced at the TRIGA Mainz reactor and thermalized in a carbon-monoxide-containing atmosphere. The volatile metal-carbonyl complexes formed could be transported in a gas stream. Furthermore, short-lived isotopes of tungsten, rhenium, osmium, and iridium were synthesised at the linear accelerator UNILAC at GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt. The recoiling fusion products were separated from the primary beam and the transfer products in the gas-filled separator TASCA. The fusion products were stopped in the focal plane of TASCA in a recoil transfer chamber containing a carbon monoxide-helium gas mixture. The metal-carbonyl complexes formed could be transported in a gas stream to various experimental setups. All synthesised carbonyl complexes were identified by nuclear decay spectroscopy. Some complexes were studied with isothermal chromatography or thermochromatography methods. The chromatograms were compared with Monte Carlo simulations to determine the adsorption enthalpy on silicon dioxide and on gold. These simulations were based on existing codes, which were modified for the different geometries of the chromatography channels. All observed adsorption enthalpies (on silicon dioxide as well as on gold) are typical for physisorption. Additionally, the thermal stability of some of the carbonyl complexes was studied; at temperatures above 200 °C the complexes start to decompose. It was demonstrated that carbonyl-complex chemistry is a suitable method to study rutherfordium, dubnium, seaborgium, bohrium, hassium, and meitnerium. Until now, only very simple, thermally stable compounds have been synthesised in the gas-phase chemistry of the transactinides. With the synthesis of transactinide carbonyl complexes a new compound class would be discovered, and transactinide chemistry would reach the border between inorganic and metal-organic chemistry. Furthermore, the in-situ synthesised carbonyl complexes would allow nuclear spectroscopy studies under low-background conditions, making use of chemically prepared samples.
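Such Monte Carlo comparisons commonly propagate individual molecules through the chromatography channel, with the mean residence time per adsorption event given by a Frenkel-type relation τ = τ₀·exp(−ΔH_ads/(R·T)). The toy sketch below only illustrates this general idea; the column geometry, prefactor, and transport model are invented placeholders and not the modified simulation codes used in the thesis.

    import math
    import random

    R = 8.314          # gas constant, J/(mol*K)
    TAU0 = 1.0e-13     # s, assumed period of surface vibrations (placeholder)

    def simulate_molecule(dH_ads, half_life, temperature_profile,
                          length=1.0, n_steps=200, hops_per_step=1e4,
                          gas_time_per_step=1e-3):
        """Toy 1-D thermochromatography walk of a single molecule.

        dH_ads: adsorption enthalpy in J/mol (negative for exothermic adsorption).
        temperature_profile(x): column temperature in K at position x (0..length).
        Returns the deposition position, or `length` if the molecule exits intact.
        """
        lifetime = random.expovariate(math.log(2) / half_life)
        t, dx = 0.0, length / n_steps
        for i in range(n_steps):
            x = (i + 0.5) * dx
            T = temperature_profile(x)
            # Frenkel relation: mean residence time per adsorption event.
            tau = TAU0 * math.exp(-dH_ads / (R * T))
            t += gas_time_per_step + hops_per_step * tau
            if t > lifetime:            # nuclide decays while in this segment
                return x
        return length

    # Hypothetical usage: histogram the deposition positions of many molecules and
    # compare with the measured chromatogram to extract dH_ads.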

Relevance: 90.00%

Abstract:

Sweet sorghum, a C4 crop of tropical origin, is gaining momentum as a multipurpose feedstock to tackle the growing environmental, food and energy security demands. Under temperate climates sweet sorghum is considered a potential bioethanol feedstock; however, being a relatively new crop in such areas, its physiological and metabolic adaptability has to be evaluated, especially to the more frequent and severe drought spells occurring throughout the growing season and to the cold temperatures during the crop's establishment period. The objective of this thesis was to evaluate some adaptive photosynthetic traits of sweet sorghum under drought and cold stress, both under field and controlled conditions. To meet this goal, a series of experiments was carried out. A new cold-tolerant sweet sorghum genotype was sown in 1 m3 rhizotrons to evaluate its tolerance to progressive drought, imposed until plant death, at young and mature stages. Young plants were able to retain a high photosynthetic rate for 10 days longer than mature plants. This response was associated with an efficient PSII down-regulation capacity mediated by light energy dissipation, closure of reaction centers (JIP-test parameters), and accumulation of glucose and sucrose. On the other hand, when sweet sorghum plants entered the blooming stage, neither energy dissipation nor sugar accumulation counteracted the negative effect of drought. Two hybrids with contrasting cold tolerance, selected from an early-sowing field trial, were subjected to chilling temperatures under controlled growth conditions to evaluate their physiological and metabolic cold-adaptation mechanisms in depth. The hybrid that performed poorly under field conditions (ICSSH31) showed earlier metabolic changes (Chl a + b, xanthophyll cycle) and greater inhibition of enzymatic activity (Rubisco and PEPcase activity) than the cold-tolerant hybrid (Bulldozer). Important insights into the potential adaptability of sweet sorghum to temperate climates are given.

Relevance: 90.00%

Abstract:

The thesis deals with numerical algorithms for fluid-structure interaction problems with application to blood flow modelling. It starts with a short introduction to the mathematical description of incompressible viscous flow with non-Newtonian viscosity and a moving linear viscoelastic structure. The mathematical model consists of the generalized Navier-Stokes equations used to describe the fluid flow and the generalized string model for the structure movement. The arbitrary Lagrangian-Eulerian approach is used to account for the moving computational domain. Part of the thesis is devoted to a discussion of the non-Newtonian behaviour of shear-thinning fluids, in our case blood, and the derivation of two non-Newtonian models frequently used in blood flow modelling. We then give a brief overview of recent fluid-structure interaction schemes and discuss the difficulties arising in the numerical modelling of blood flow. Our main contribution lies in the numerical and experimental study of a new loosely-coupled partitioned scheme called the kinematic splitting fluid-structure interaction algorithm. We present a stability analysis for the coupled problem of non-Newtonian shear-dependent fluids in moving domains with viscoelastic boundaries. Here, we allow nonlinearity in both the convective and the diffusive terms. We analyse the convergence of the proposed numerical scheme for a simplified fluid model of the Oseen type. Moreover, we present a series of experiments, including numerical error analysis, comparison of hemodynamic parameters for Newtonian and non-Newtonian fluids, and comparison of several physiologically relevant computational geometries in terms of wall displacement and wall shear stress. Numerical analysis and an extensive experimental study for several standard geometries confirm the reliability and accuracy of the proposed kinematic splitting scheme for approximating fluid-structure interaction problems.
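For reference, the incompressible generalized Navier-Stokes system with shear-dependent viscosity mentioned above can be written as follows; the Carreau law is shown as one common shear-thinning closure in blood flow modelling and is an assumption here, since the abstract does not name the two models derived in the thesis.

    \[
      \rho\Bigl(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\Bigr)
        - \operatorname{div}\bigl(2\,\mu(\dot\gamma)\,\mathbf{D}(\mathbf{u})\bigr) + \nabla p = \mathbf{f},
      \qquad \operatorname{div}\mathbf{u} = 0,
    \]
    \[
      \mathbf{D}(\mathbf{u}) = \tfrac{1}{2}\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf T}\bigr),
      \qquad \dot\gamma = \sqrt{2\,\mathbf{D}(\mathbf{u}):\mathbf{D}(\mathbf{u})},
    \]
    \[
      \mu(\dot\gamma) = \mu_\infty + (\mu_0 - \mu_\infty)\bigl(1 + (\lambda\dot\gamma)^{2}\bigr)^{\frac{n-1}{2}}
      \quad\text{(Carreau)}.
    \]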

Relevance: 90.00%

Abstract:

PURPOSE: Tumor stage and nuclear grade are the most important prognostic parameters of clear cell renal cell carcinoma (ccRCC). The progression risk of ccRCC remains difficult to predict, particularly for tumors with organ-confined stage and intermediate differentiation grade. Elucidating molecular pathways deregulated in ccRCC may point to novel prognostic parameters that facilitate the planning of therapeutic approaches. EXPERIMENTAL DESIGN: Using tissue microarrays, expression patterns of 15 different proteins were evaluated in over 800 ccRCC patients to analyze pathways reported to be physiologically controlled by the tumor suppressors von Hippel-Lindau protein and phosphatase and tensin homologue (PTEN). Tumor staging and grading were improved by performing variable selection using Cox regression and a recursive bootstrap elimination scheme. RESULTS: Patients with pT2 and pT3 tumors that were p27 and CAIX positive had a better outcome than those with all remaining marker combinations. Prolonged survival among patients with intermediate grade (grade 2) correlated with both nuclear p27 and cytoplasmic PTEN expression, as well as with inactive, nonphosphorylated ribosomal protein S6. Graphical log-linear modeling of over 700 ccRCCs for which the molecular parameters were available revealed only a weak conditional dependence between the expression of p27, PTEN, CAIX, and p-S6, suggesting that the dysregulation of several independent pathways is crucial for tumor progression. CONCLUSIONS: The use of recursive bootstrap elimination, as well as graphical log-linear modeling, for comprehensive tissue microarray (TMA) data analysis allows the unraveling of complex molecular contexts and may improve predictive evaluations for patients with advanced renal cancer.
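A minimal sketch of the kind of variable selection described above (Cox regression combined with a bootstrap elimination loop) is given below, using the open-source lifelines package; the column names, thresholds, and elimination criterion are illustrative assumptions, not the scheme actually used in the study.

    import numpy as np
    from lifelines import CoxPHFitter

    def bootstrap_eliminate(df, duration_col="time", event_col="event",
                            n_boot=200, keep_frac=0.5, seed=0):
        """Recursively drop the marker least often significant across bootstrap fits.

        df: one row per patient; marker columns plus follow-up time and event flag.
        Returns the list of retained marker columns.
        """
        rng = np.random.default_rng(seed)
        markers = [c for c in df.columns if c not in (duration_col, event_col)]
        while len(markers) > 1:
            counts = dict.fromkeys(markers, 0)
            for _ in range(n_boot):
                sample = df.sample(len(df), replace=True,
                                   random_state=int(rng.integers(1 << 31)))
                cph = CoxPHFitter(penalizer=0.1)
                cph.fit(sample[markers + [duration_col, event_col]],
                        duration_col=duration_col, event_col=event_col)
                for m in markers:
                    if cph.summary.loc[m, "p"] < 0.05:
                        counts[m] += 1
            weakest = min(markers, key=counts.get)
            if counts[weakest] >= keep_frac * n_boot:
                break                      # all remaining markers are stable
            markers.remove(weakest)
        return markers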

Relevance: 90.00%

Abstract:

As part of the primary survey, polytrauma patients in our emergency department are examined using the new 'Lodox Statscan' (LS) digital low-radiation imaging device. The LS provides full-body anterior and lateral views based on enhanced linear slot-scanning technology, in accordance with the recommended Advanced Trauma Life Support (ATLS) Guidelines. This study's objectives were to establish whether LS appropriately rules out peripheral bone injuries and to examine whether LS imaging provides adequate information for the preoperative planning of such lesions.

Relevance: 90.00%

Abstract:

Soil erosion on sloping agricultural land poses a serious problem for the environment as well as for production. In areas with highly erodible soils, such as those in loess zones, application of soil and water conservation measures is crucial to sustain agricultural yields and to prevent or reduce land degradation. The present study, carried out in Faizabad, Tajikistan, was designed to evaluate the potential of local conservation measures on cropland using a spatial modelling approach, in order to provide decision-making support for the planning of spatially explicit sustainable land use. A sampling design supporting comparative analysis between well-conserved units and other field units was established in order to estimate the factors that determine water erosion according to the Revised Universal Soil Loss Equation (RUSLE). Such factor-based approaches allow ready application in a geographic information system (GIS) and facilitate straightforward scenario modelling in areas with limited data resources. The study showed, first, that assessment of erosion and conservation in an area with inhomogeneous vegetation cover requires the integration of plot-based cover. Plot-based vegetation cover can be effectively derived from high-resolution satellite imagery, providing a useful basis for plot-wise conservation planning. Furthermore, thorough field assessments showed that 25.7% of current total cropland is covered by conservation measures (terracing, agroforestry and perennial herbaceous fodder). Assessment of the effectiveness of these local measures, combined with the RUSLE calculations, revealed that current average soil loss could be reduced through low-cost measures such as contouring (by 11%), fodder plants (by 16%), and drainage ditches (by 53%). More expensive measures such as terracing and agroforestry can reduce erosion by as much as 63% (for agroforestry) and 93% (for agroforestry combined with terracing). Indeed, scenario runs for different levels of tolerable erosion rates showed that more cost-intensive and technologically advanced measures would lead to greater reduction of soil loss. However, given economic conditions in Tajikistan, it seems advisable to support the spread of low-cost and labour-extensive measures.
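RUSLE estimates mean annual soil loss as the product of six factors, A = R·K·L·S·C·P, which makes scenario comparisons like those above straightforward to script; the sketch below uses invented factor values purely for illustration and does not reproduce the study's calibrated factors or results.

    def rusle(R, K, LS, C, P):
        """Revised Universal Soil Loss Equation: mean annual soil loss (t/ha/yr).

        R: rainfall erosivity, K: soil erodibility, LS: combined slope length and
        steepness factor, C: cover management factor, P: support practice factor.
        """
        return R * K * LS * C * P

    # Illustrative baseline for one cropland plot (all values are placeholders).
    baseline = dict(R=900.0, K=0.45, LS=3.2, C=0.30, P=1.0)

    # Conservation scenarios acting mainly on the C, P and LS factors.
    scenarios = {
        "current practice": dict(baseline),
        "contouring":       dict(baseline, P=0.6),
        "perennial fodder": dict(baseline, C=0.10),
        "terracing":        dict(baseline, P=0.12, LS=1.5),
    }

    a0 = rusle(**scenarios["current practice"])
    for name, factors in scenarios.items():
        a = rusle(**factors)
        print(f"{name:17s} A = {a:6.1f} t/ha/yr "
              f"({100 * (1 - a / a0):4.0f}% reduction)")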

Relevance: 90.00%

Abstract:

Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer-assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue, such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are rare in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking clinical protocols and requirements into consideration, and the system is thus operable by a single person, even during the transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed us to identify particularities of the process and to highlight the differences between bone and soft-tissue punctures. Overall average errors of 4.23 mm and 3.07 mm for bone and soft-tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft-tissue punctures showed higher tissue deformation error.

Relevance: 90.00%

Abstract:

The Simulation Automation Framework for Experiments (SAFE) is a project created to raise the level of abstraction in network simulation tools and thereby address issues that undermine credibility. SAFE incorporates best practices in network simulation to automate the experimental process and to guide users in the development of sound scientific studies using the popular ns-3 network simulator. My contributions to the SAFE project are the design of two XML-based languages, NEDL (ns-3 Experiment Description Language) and NSTL (ns-3 Script Templating Language), which facilitate the description of experiments and of network simulation models, respectively. The languages provide a foundation for the construction of better interfaces between the user and the ns-3 simulator. They also provide input to a mechanism which automates the execution of network simulation experiments. Additionally, this thesis demonstrates that one can develop tools to generate ns-3 scripts in Python or C++ automatically from NSTL model descriptions.
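To illustrate the general idea of generating an ns-3 script from an XML model description, a toy generator is sketched below; the XML schema is invented for illustration and is not the actual NSTL schema, and the emitted C++ fragment uses only a few well-known ns-3 helpers.

    # Toy generator: XML model description -> ns-3 C++ script skeleton.
    # The <model> schema below is a made-up stand-in for an NSTL description.
    import xml.etree.ElementTree as ET
    from string import Template

    EXAMPLE = """<model name="p2p-demo">
      <nodes count="2"/>
      <link dataRate="5Mbps" delay="2ms"/>
    </model>"""

    SCRIPT = Template("""\
    // Auto-generated ns-3 script for model '$name'
    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/point-to-point-module.h"
    using namespace ns3;

    int main (int argc, char *argv[])
    {
      NodeContainer nodes;
      nodes.Create ($count);
      PointToPointHelper p2p;
      p2p.SetDeviceAttribute ("DataRate", StringValue ("$rate"));
      p2p.SetChannelAttribute ("Delay", StringValue ("$delay"));
      NetDeviceContainer devices = p2p.Install (nodes);
      Simulator::Run ();
      Simulator::Destroy ();
      return 0;
    }
    """)

    def generate(xml_text):
        model = ET.fromstring(xml_text)
        return SCRIPT.substitute(
            name=model.get("name"),
            count=model.find("nodes").get("count"),
            rate=model.find("link").get("dataRate"),
            delay=model.find("link").get("delay"),
        )

    print(generate(EXAMPLE))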

Relevance: 90.00%

Abstract:

The purpose of the present study was to evaluate the thickness and the anatomic characteristics of the Schneiderian membrane and cortical bone using limited cone beam computed tomography (CBCT) scanning in patients referred for planning of apical surgery of maxillary molars.

Relevance: 90.00%

Abstract:

Deep brain stimulation (DBS) for Parkinson's disease often alleviates the motor symptoms, but causes cognitive and emotional side effects in a substantial number of cases. Identification of the motor part of the subthalamic nucleus (STN) as part of the presurgical workup could minimize these adverse effects. In this study, we assessed the STN's connectivity to motor, associative, and limbic brain areas, based on structural and functional connectivity analysis of volunteer data. For the structural connectivity, we used streamline counts derived from HARDI fiber tracking. The resulting tracts supported the existence of the so-called "hyperdirect" pathway in humans. Furthermore, we determined the connectivity of each STN voxel with the motor cortical areas. Functional connectivity was calculated from functional MRI as the correlation of the signal within a given brain voxel with the signal in the STN. In addition, the signal per STN voxel was explained in terms of its correlation with motor or limbic seed ROIs. Both the right and left STN ROIs appeared to be structurally and functionally connected to brain areas that are part of the motor, associative, and limbic circuits. Furthermore, this study enabled us to assess the level of segregation of the STN motor part, which is relevant for the planning of STN DBS procedures.
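Seed-based functional connectivity of the kind described above reduces to correlating each voxel's BOLD time series with the mean time series of a seed region; a minimal NumPy sketch is given below (the array shapes and the thresholding note are illustrative assumptions, not the study's actual pipeline).

    import numpy as np

    def seed_connectivity(bold, seed_mask):
        """Voxel-wise Pearson correlation with a seed region's mean time series.

        bold: 4-D array (x, y, z, t) of preprocessed BOLD data.
        seed_mask: 3-D boolean array marking the seed voxels (e.g. the STN).
        Returns a 3-D correlation map.
        """
        t = bold.shape[-1]
        seed_ts = bold[seed_mask].mean(axis=0)                  # shape (t,)
        seed_ts = (seed_ts - seed_ts.mean()) / seed_ts.std()
        data = bold.reshape(-1, t).astype(float)
        data -= data.mean(axis=1, keepdims=True)
        std = data.std(axis=1, keepdims=True)
        std[std == 0] = 1.0                                     # avoid /0 outside the brain
        data /= std
        corr = data @ seed_ts / t                               # Pearson r per voxel
        return corr.reshape(bold.shape[:3])

    # Hypothetical usage: STN voxels whose correlation with a motor-cortex seed
    # exceeds a chosen threshold, but not with limbic seeds, delineate the motor part.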

Relevance: 90.00%

Abstract:

PURPOSE: To compare the diagnostic accuracy of multi-station, high-spatial-resolution contrast-enhanced MR angiography (CE-MRA) of the lower extremities with digital subtraction angiography (DSA) as the reference standard in patients with symptomatic peripheral arterial occlusive disease. MATERIALS AND METHODS: Of 485 consecutive patients undergoing run-off CE-MRA, 152 patients (86 male, 66 female; mean age, 71.6 years) with suspected peripheral arterial occlusive disease were included in our Institutional Review Board-approved study. All patients underwent MRA and DSA of the lower extremities within 30 days. MRA was performed at 1.5 Tesla with a single bolus of 0.1 mmol/kg body weight of gadobutrol administered at a rate of 2.0 mL/s at three stations. Two readers independently evaluated the MRA images for stenosis grade and image quality. Sensitivity and specificity were derived. RESULTS: Sensitivity and specificity ranged from 73% to 93% and from 64% to 89%, respectively, and were highest in the thigh area. Both readers showed comparable results. Evaluation of MRAs of good or better image quality resulted in a considerable improvement in diagnostic accuracy. CONCLUSION: Contrast-enhanced MRA demonstrates good sensitivity and specificity in the investigation of the vasculature of the lower extremities. While a minor dependence on investigator experience remains, the technique is standardizable and shows good inter-observer agreement. Our results confirm that the administration of gadobutrol at a standard dose of 0.1 mmol/kg for contrast-enhanced run-off MRA is able to detect hemodynamically relevant stenoses. Use of contrast-enhanced MRA as an alternative to intra-arterial DSA in the evaluation and therapeutic planning of patients with suspected peripheral arterial occlusive disease is well justified. J. Magn. Reson. Imaging 2013;37:1427-1435.
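Sensitivity and specificity as reported above follow directly from a per-segment confusion matrix against the DSA reference standard; the sketch below shows the calculation with made-up counts that do not reproduce the study's data.

    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Made-up vessel-segment counts (positive = hemodynamically relevant stenosis on DSA).
    sens, spec = sensitivity_specificity(tp=130, fn=18, tn=410, fp=55)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")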

Relevance: 90.00%

Abstract:

Inspired by research in the fields of behavioral economics and social psychology, this study aimed to explore whether conformity plays a role in the occurrence of herd behavior in the financial market. Participants received one of nine different versions of a survey, either online or on paper. Among other questions, they answered items related to riskiness in decision-making, dependency on others when making decisions, and investment preferences. In the experimental conditions, participants were told that a majority of investors, either sixty or eighty percent, had invested in a certain stock or had won a game. It was predicted that individuals would conform to the group behavior in both experimental conditions, with the highest level of conformity in the high-pressure condition. The results revealed that when an overwhelming majority of other investors (80%) behaved in a certain way, participants were more likely to behave the same way. Results of the third experiment supported previous research indicating that emotion affects economic decision-making and facilitates herd behavior.