973 results for Short Loadlength, Fast Algorithms


Relevance:

30.00%

Publisher:

Abstract:

Fatigue life in metals is predicted using regression analysis of large sets of experimental data, which represents the material's macroscopic response. Furthermore, high variability in the short crack growth (SCG) rate has been observed in polycrystalline materials, in which the evolution and distribution of local plasticity are strongly influenced by microstructural features. The present work serves to (a) identify the relationship between the crack driving force and the local microstructure in the proximity of the crack tip and (b) correlate the scatter observed in SCG rates with variability in the microstructure. A crystal plasticity model based on the fast Fourier transform formulation of the elasto-viscoplastic problem (CP-EVP-FFT) is used, since the ability to account for both the elastic and plastic regimes is critical in fatigue. Fatigue is governed by slip irreversibility, which results in crack growth and starts to occur during the local elasto-plastic transition. To investigate the effects of microstructure variability on the SCG rate, sets of different microstructure realizations are constructed, in which cracks of different lengths are introduced to mimic quasi-static SCG in engineering alloys. From these results, the behavior of characteristic variables at different length scales is analyzed: (i) von Mises stress fields, (ii) resolved shear stress/strain in the pertinent slip systems, and (iii) slip accumulation/irreversibilities. Through fatigue indicator parameters (FIPs), scatter in the SCG rates is related to variability in the microstructural features; the results demonstrate that this relationship between microstructure variability and uncertainty in fatigue behavior is critical for accurate fatigue life prediction.
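
As a purely illustrative sketch of how a fatigue indicator parameter can be evaluated from crystal-plasticity output, the snippet below computes a Fatemi-Socie-type FIP per material point; the choice of FIP, the constant k_fs, the yield strength and the synthetic field arrays are assumptions for illustration and are not taken from this work.

```python
import numpy as np

def fatemi_socie_fip(delta_gamma_p, sigma_n, sigma_y, k_fs=0.5):
    """Fatemi-Socie-type fatigue indicator parameter (FIP) per material point.

    delta_gamma_p : cyclic plastic shear strain range on the critical slip plane
    sigma_n       : peak stress normal to that plane (MPa)
    sigma_y       : yield strength used for normalisation (MPa)
    k_fs          : normal-stress sensitivity constant (assumed value)
    """
    return 0.5 * delta_gamma_p * (1.0 + k_fs * sigma_n / sigma_y)

# Synthetic per-voxel fields standing in for one CP-EVP-FFT loading cycle.
rng = np.random.default_rng(0)
dgp = np.abs(rng.normal(2e-3, 5e-4, size=1000))   # plastic shear strain range
sn = rng.normal(300.0, 40.0, size=1000)           # normal stress, MPa
fip = fatemi_socie_fip(dgp, sn, sigma_y=500.0)

# Scatter across microstructure realizations could then be summarised, e.g.,
# by the distribution of the maximum FIP in the crack-tip neighbourhood.
print(fip.max(), fip.mean())
```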

Relevance:

30.00%

Publisher:

Abstract:

The behaviour of a polymer depends strongly on the length and time scale as well as on the temperature at which it is probed. In this work, I describe investigations of polymer surfaces using scanning probe microscopy with heatable probes. With these probes, surfaces can be heated within seconds down to microseconds. I introduce experiments for the local and fast determination of glass transition and melting temperatures. I developed a method which allows the determination of glass transition and melting temperatures on films with thicknesses below 100 nm: a background measurement on the substrate was performed, and the resulting curve was subtracted from the measurement on the polymer film. The differential measurement on polystyrene films with thicknesses between 35 nm and 160 nm showed characteristic signals at 95 ± 1 °C, in accordance with the glass transition of polystyrene. Pressing heated probes into polymer films causes plastic deformation. Nanometer-sized deformations are currently investigated in novel concepts for high-density data storage. A suitable medium for such a storage system has to be easily indentable on the one hand, but on the other hand it also has to be very stable towards surface-induced wear. For developing such a medium I investigated a new approach: a comparably soft material, namely polystyrene, was protected with a thin but very hard layer made of plasma-polymerized norbornene. The resulting bilayered media were tested for surface stability and deformability. I showed that the bilayered material combines the deformability of polystyrene with the surface stability of the plasma polymer, and that the material is therefore a very good storage medium. In addition, we investigated the glass transition temperature of polystyrene at timescales of 10 µs and found it to be approximately 220 °C. The increase of this characteristic temperature results from the short time at which the polymer was probed and reflects the well-known time-temperature superposition principle. Heatable probes were also used for the characterization of silver azide-filled nanocapsules. The use of heatable probes allowed the decomposition temperature of the capsules to be determined from a few nanograms of material. The measured decomposition temperatures ranged from 180 °C to 225 °C, in accordance with literature values. The investigation of small amounts of sample was necessary due to the limited availability of the material. Furthermore, investigating larger amounts of the capsules using conventional thermogravimetric analysis could lead to contamination or even damage of the instrument. Besides the analysis of material parameters, I used the heatable probes for the local thermal decomposition of pentacene precursor material in order to form nanoscale conductive structures. Here, the thickness of the precursor layer was important for complete thermal decomposition. Another aspect of my work was the investigation of a redox-active polymer, poly-10-(4-vinylbenzyl)-10H-phenothiazine (PVBPT), for data storage. Data are stored by changing the local conductivity of the material by applying a voltage between tip and surface. The generated structures were stable for more than 16 h. It was shown that the presence of water is essential for successful patterning.
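
A minimal sketch of the differential measurement described above: a background curve measured on the bare substrate is interpolated onto a common temperature grid and subtracted from the curve measured on the polymer film. The arrays and grid resolution are placeholders, not data from this work.

```python
import numpy as np

def differential_curve(T_bg, sig_bg, T_film, sig_film, n_points=500):
    """Subtract a substrate background measurement from a film measurement.

    Both curves are interpolated onto a shared temperature grid, so that
    film-specific features (e.g. the glass transition of polystyrene near
    95 degrees C) remain as residual signals in the difference curve.
    """
    T = np.linspace(max(T_bg.min(), T_film.min()),
                    min(T_bg.max(), T_film.max()), n_points)
    background = np.interp(T, T_bg, sig_bg)
    film = np.interp(T, T_film, sig_film)
    return T, film - background
```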

Relevance:

30.00%

Publisher:

Abstract:

Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from large amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizeable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest-centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain are shown to be effectively reusable in a different one.
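
A minimal sketch, under assumptions, of the iterative nearest-centroid idea: category profiles are built from labeled source-domain documents and then repeatedly re-estimated from the target-domain documents they currently attract. The TF-IDF representation, cosine scoring and convergence test are generic placeholders rather than the actual profile-construction details of the work.

```python
import numpy as np

def cross_domain_nearest_centroid(X_src, y_src, X_tgt, n_iter=10):
    """X_src, X_tgt: row-normalized document-term matrices (e.g. TF-IDF).
    y_src: integer category labels of the source (known) domain.
    Returns predicted labels for the target (unknown) domain."""
    n_cat = int(y_src.max()) + 1
    # Initial category profiles from the known domain.
    centroids = np.vstack([X_src[y_src == c].mean(axis=0) for c in range(n_cat)])
    y_tgt = np.zeros(X_tgt.shape[0], dtype=int)
    for _ in range(n_iter):
        # Assign each target document to the most similar profile.
        y_tgt = (X_tgt @ centroids.T).argmax(axis=1)
        # Adapt the profiles to the unknown domain.
        new_centroids = np.vstack([
            X_tgt[y_tgt == c].mean(axis=0) if np.any(y_tgt == c) else centroids[c]
            for c in range(n_cat)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return y_tgt
```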

Relevance:

30.00%

Publisher:

Abstract:

In recent years, radar sensor networks for localization and tracking in indoor environments have generated more and more interest, especially for anti-intrusion security systems. These networks often use Ultra Wide Band (UWB) technology, which consists of transmitting very short (a few nanoseconds) impulse signals. This approach guarantees high resolution and accuracy, as well as other advantages such as low price, low power consumption and robustness to narrow-band interference (jamming). In this thesis the overall data processing (done in the MATLAB environment) is discussed, starting from experimental measurements from the sensor devices and ending with the 2D visualization of target movements over time, focusing mainly on the detection and localization algorithms. Moreover, two different scenarios and both single- and multiple-target tracking are analyzed.
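
As a minimal sketch of one common localization step in such networks (not the MATLAB processing chain of the thesis), the snippet below estimates a single 2D target position from time-of-arrival ranges at known sensor positions via linearized least squares; the sensor coordinates and ranges are made up.

```python
import numpy as np

def trilaterate(sensors, ranges):
    """Least-squares 2D position estimate from range measurements.

    sensors : (N, 2) array of known sensor positions
    ranges  : (N,) array of target-sensor distances (e.g. from UWB delays)
    Linearizes the circle equations against the first sensor and solves Ax = b.
    """
    x0, y0 = sensors[0]
    A = 2.0 * (sensors[1:] - sensors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(sensors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
target = np.array([2.0, 3.0])
ranges = np.linalg.norm(sensors - target, axis=1)  # ideal, noise-free ranges
print(trilaterate(sensors, ranges))                # ~[2.0, 3.0]
```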

Relevance:

30.00%

Publisher:

Abstract:

After a proper medical history, growth analysis and physical examination of a short child, followed by radiological and laboratory screening, the clinician may decide to perform genetic testing. We propose several clinical algorithms that can be used to establish the diagnosis. GH1 and GHRHR should be tested in children with severe isolated growth hormone deficiency and a positive family history. Multiple pituitary dysfunction can be caused by defects in several genes, of which PROP1 and POU1F1 are the most common. GH resistance can be caused by genetic defects in GHR, STAT5B, IGF1 and IGFALS, each of which has its specific clinical and biochemical characteristics. IGF-I resistance is seen in heterozygous defects of IGF1R. If additional abnormalities are present besides short stature, these should be matched with known dysmorphic syndromes. If no obvious candidate gene can be determined, a whole-genome approach can be taken to check for deletions, duplications and/or uniparental disomies.
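
The decision flow stated above can be summarized schematically as below; this is only an illustration of the listed rules, not a substitute for the full clinical algorithms.

```python
def candidate_genes(phenotype):
    """Map a simplified phenotype (dict of booleans) to candidate genes,
    following the decision rules summarized in the abstract above."""
    if phenotype.get("severe_isolated_GHD") and phenotype.get("family_history"):
        return ["GH1", "GHRHR"]
    if phenotype.get("multiple_pituitary_dysfunction"):
        return ["PROP1", "POU1F1"]          # most common of several genes
    if phenotype.get("GH_resistance"):
        return ["GHR", "STAT5B", "IGF1", "IGFALS"]
    if phenotype.get("IGF1_resistance"):
        return ["IGF1R"]                    # heterozygous defects
    if phenotype.get("dysmorphic_features"):
        return ["match with known dysmorphic syndromes"]
    return ["whole-genome screen for deletions, duplications, uniparental disomy"]
```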

Relevance:

30.00%

Publisher:

Abstract:

We report the analysis of the S1 ← S0 rotational band contours of jet-cooled 5-methyl-2-hydroxypyrimidine (5M2HP), the enol form of deoxythymine. Unlike thymine, which exhibits a structureless spectrum, the vibronic spectrum of 5M2HP is well structured, allowing us to determine the rotational constants and the methyl group torsional barriers in the S0 and S1 states. The 0⁰₀, 6a¹₀, 6b¹₀, and 14¹₀ band contours were measured at 900 MHz (0.03 cm⁻¹) resolution using mass-specific two-color resonant two-photon ionization (2C-R2PI) spectroscopy. All four bands are polarized perpendicular to the pyrimidine plane (>90% c type), identifying the S1 ← S0 excitation of 5M2HP as a ¹nπ* transition. All contours exhibit two methyl rotor subbands that arise from the lowest 5-methyl torsional states 0A″ and 1E″. The S0 and S1 state torsional barriers were extracted from fits to the torsional subbands. The 3-fold barriers are V3″ = 13 cm⁻¹ and V3′ = SI cm⁻¹; the 6-fold barrier contributions V6″ and V6′ are in the range of 2-3 cm⁻¹ and are positive in both states. The changes of the A, B, and C rotational constants upon S1 ← S0 excitation were extracted from the contours and reflect an "anti-quinoidal" distortion. The 0⁰₀ contour can only be simulated if a 3 GHz Lorentzian line shape is included, which implies that the S1(¹nπ*) lifetime is ~55 ps. For the 6a¹₀ and 6b¹₀ bands, the Lorentzian component increases to 5.5 GHz, reflecting a lifetime decrease to ~30 ps. The short lifetimes are consistent with the absence of fluorescence from the ¹nπ* state. Combining these measurements with the previous observation of efficient intersystem crossing (ISC) from the S1 state to a long-lived T1(³nπ*) state that lies ~2200 cm⁻¹ below [S. Lobsiger et al., Phys. Chem. Chem. Phys. 2010, 12, 5032] implies that the broadening arises from fast intersystem crossing with k_ISC ≈ 2 × 10¹⁰ s⁻¹. Compared to 5-methylpyrimidine, the ISC rate is enhanced by a factor of at least 10,000 by the additional hydroxy group in position 2.
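
The quoted lifetimes are consistent with the standard relation between a homogeneous Lorentzian full width at half maximum Δν and the excited-state lifetime τ; the arithmetic below is a back-of-the-envelope check, not taken from the paper itself.

```latex
\tau \approx \frac{1}{2\pi\,\Delta\nu}, \qquad
\tau(3\ \mathrm{GHz}) \approx \frac{1}{2\pi \times 3\times 10^{9}\ \mathrm{s^{-1}}} \approx 53\ \mathrm{ps}, \qquad
\tau(5.5\ \mathrm{GHz}) \approx 29\ \mathrm{ps}.
```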

Relevance:

30.00%

Publisher:

Abstract:

With the advent of cheaper and faster DNA sequencing technologies, assembly methods have greatly changed. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing read lengths between 35 and 400 base pairs. Reconstructing an organism's genome from these millions of reads is a computationally expensive task. Our algorithm addresses this problem by organizing and indexing the reads using n-grams, which are short, fixed-length DNA sequences of length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was used to determine the most likely reads to join, through the development of a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16x to 20x coverage. We were able to successfully re-assemble entire genomes up to 100,000 nucleotides in length.
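
A minimal sketch of the n-gram indexing idea described above: hash every length-n substring of each read and use the index to shortlist read pairs that could overlap, instead of comparing all pairs. The probabilistic join metric itself is not reproduced here; scoring of candidate pairs is left as a comment.

```python
from collections import defaultdict

def build_ngram_index(reads, n=12):
    """Map each n-gram to the set of read ids containing it."""
    index = defaultdict(set)
    for rid, read in enumerate(reads):
        for i in range(len(read) - n + 1):
            index[read[i:i + n]].add(rid)
    return index

def candidate_joins(reads, n=12):
    """Yield pairs of reads sharing at least one n-gram (putative overlaps),
    avoiding the exhaustive all-pairs comparison."""
    index = build_ngram_index(reads, n)
    seen = set()
    for rid, read in enumerate(reads):
        partners = set()
        for i in range(len(read) - n + 1):
            partners |= index[read[i:i + n]]
        for other in partners:
            pair = (min(rid, other), max(rid, other))
            if other != rid and pair not in seen:
                seen.add(pair)
                yield pair  # score each pair with an overlap/probability model
```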

Relevance:

30.00%

Publisher:

Abstract:

In this work, electrophoretically mediated microanalysis (EMMA) is used in conjunction with short-end injection to improve the in-capillary Jaffé assay for creatinine. Key advances over prior work include (i) using simulation to ensure intimate overlap of the reagent plugs, (ii) using OH⁻ to drive the reaction, and (iii) using short-end injection to minimize analysis time and in-line product degradation. The potential-driven overlapping time of the EMMA approach, as well as the borate buffer background electrolyte (BGE) concentration and pH, are optimized for the short-end approach. The best conditions for short-end analyses would not have been predicted by the prior long-end work, owing to a complex interplay of separation time and product degradation rates. Raw peak areas and flow-adjusted peak areas for the Jaffé reaction product (at 505 nm) are used to assess the sensitivity of the short-end EMMA approach. Optimal overlap conditions depend heavily on local conductivity differences within the reagent zone(s), as these differences cause dramatic voltage field differences, which affect reagent overlap dynamics. Simul 5.0, a dynamic simulation program for capillary electrophoresis (CE) systems, is used to understand the ionic boundaries and profiles that give rise to the experimentally obtained data for EMMA analysis. Overall, fast migration of hydroxide ions from the picrate zone makes reagent overlap difficult. In addition, the challenges associated with the simultaneous overlapping of three reagent zones are considered, and experimental results validate the predictions made by the simulation. With one set of "optimized" conditions, including OH⁻ (253 mM) as the third reagent zone, the response was linear with creatinine concentration (R² = 0.998) and reproducible over the clinically relevant range (0.08 to 0.1 mM) of standard creatinine concentrations. An LOD (S/N = 3) of 0.02 mM and an LOQ (S/N = 10) of 0.08 mM were determined. A significant improvement (43%) in assay sensitivity was obtained compared to prior work that considered only two reagents in the overlap.
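
For illustration only, S/N-style detection and quantification limits of the kind reported above are commonly estimated from a calibration line and a baseline-noise estimate, as in the sketch below; the numbers are placeholders, not the data of this study.

```python
import numpy as np

# Hypothetical calibration: flow-adjusted peak area versus creatinine (mM).
conc = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
area = np.array([0.9, 2.1, 3.0, 4.1, 5.0])
slope, intercept = np.polyfit(conc, area, 1)

noise = 0.10                 # baseline noise (peak-area units), assumed
lod = 3 * noise / slope      # concentration giving roughly S/N = 3
loq = 10 * noise / slope     # concentration giving roughly S/N = 10
print(f"LOD ~ {lod:.3f} mM, LOQ ~ {loq:.3f} mM")
```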

Relevance:

30.00%

Publisher:

Abstract:

Treatment with growth hormone (GH) has become standard practice, either as replacement in GH-deficient children or as pharmacotherapy in a variety of disorders with short stature. However, even today, the reported adult heights achieved often remain below the normal range. In addition, the treatment is expensive and may be associated with long-term risks. Thus, a discussion of the factors relevant for achieving an optimal individual outcome in terms of growth, costs, and risks is required. In the present review, the heterogeneous approaches to treatment with GH are discussed, considering the parameters available for an evaluation of the short- and long-term outcomes at different stages of treatment. This discourse introduces the potential of the newly emerging prediction algorithms in comparison to other, more conventional approaches for planning and evaluating the response to GH. In rare disorders such as those presenting with short stature, treatment decisions cannot easily be deduced from personal experience. An interactive approach that draws on the experience derived from large cohorts for the evaluation of the individual patient and the required decision-making may facilitate the use of GH. Such an approach should also help avoid unnecessary long-term treatment in unresponsive individuals.

Relevance:

30.00%

Publisher:

Abstract:

Currently, observations of space debris are primarily performed with ground-based sensors. These sensors have a detection limit of some centimetres diameter for objects in Low Earth Orbit (LEO) and of about two decimetres diameter for objects in Geostationary Orbit (GEO). The few space-based debris observations stem mainly from in-situ measurements and from the analysis of returned spacecraft surfaces. Both provide information about mostly sub-millimetre-sized debris particles. As a consequence, the population of centimetre- and millimetre-sized debris objects remains poorly understood. The development, validation and improvement of debris reference models drive the need for measurements covering the whole diameter range. In 2003 the European Space Agency (ESA) initiated a study entitled "Space-Based Optical Observation of Space Debris". The first tasks of the study were to define user requirements and to develop an observation strategy for a space-based instrument capable of observing uncatalogued millimetre-sized debris objects. Only passive optical observations were considered, focussing on mission concepts for the LEO and GEO regions, respectively. Starting from the requirements and the observation strategy, an instrument system architecture and an associated operations concept have been elaborated. The instrument system architecture covers the telescope, camera and onboard processing electronics. The proposed telescope is a folded Schmidt design, characterised by a 20 cm aperture and a large field of view of 6°. The camera design is based on the use of either a frame-transfer charge-coupled device (CCD) or a cooled hybrid sensor with fast read-out. A four-megapixel sensor is foreseen. For the onboard processing, a scalable architecture has been selected. Performance simulations have been executed for the system as designed, focussing on the orbit determination of observed debris particles and on the analysis of the object detection algorithms. In this paper we present some of the main results of the study. A short overview of the user requirements and observation strategy is given. The architectural design of the instrument is discussed, and the main trade-offs are outlined. An insight into the results of the performance simulations is provided.
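
For orientation only (this figure is not from the study): assuming the four-megapixel sensor is square, i.e. roughly 2000 × 2000 pixels, the 6° field of view corresponds to a plate scale of about

```latex
\text{pixel scale} \approx \frac{6^{\circ}}{2000\ \text{px}}
                  = \frac{21600''}{2000\ \text{px}}
                  \approx 10.8''\ \text{per pixel}.
```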

Relevance:

30.00%

Publisher:

Abstract:

The purpose was to evaluate the relative glycosaminoglycan (GAG) content of repair tissue in patients after microfracturing (MFX) and matrix-associated autologous chondrocyte transplantation (MACT) of the knee joint with a dGEMRIC technique based on a newly developed short 3D-GRE sequence with two flip-angle excitation pulses. Twenty patients treated with MFX or MACT (ten in each group) were enrolled. For comparability, patients from each group were matched by age (MFX: 37.1 ± 16.3 years; MACT: 37.4 ± 8.2 years) and postoperative interval (MFX: 33.0 ± 17.3 months; MACT: 32.0 ± 17.2 months). The delta relaxation rate (ΔR1) for repair tissue and normal hyaline cartilage and the relative ΔR1 were calculated, and mean values were compared between both groups using an analysis of variance. The mean ΔR1 for MFX was 1.07 ± 0.34 versus 0.32 ± 0.20 at the intact control site, and for MACT 1.90 ± 0.49 compared to 0.87 ± 0.44, which resulted in a relative ΔR1 of 3.39 for MFX and 2.18 for MACT. The difference between the cartilage repair groups was statistically significant. The new dGEMRIC technique based on dual flip-angle excitation pulses showed higher GAG content in patients after MACT compared to MFX at the same postoperative interval and allowed the data acquisition time to be reduced to 4 min.
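
The relative ΔR1 appears to be the ratio of the repair-tissue value to the intact-control value; for the MACT group, for example,

```latex
\text{relative } \Delta R1 = \frac{\Delta R1_{\text{repair}}}{\Delta R1_{\text{control}}}
                           = \frac{1.90}{0.87} \approx 2.18 .
```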

Relevance:

30.00%

Publisher:

Abstract:

We present a possible pickup-ion (PUI) source for the ribbon observed by the Interstellar Boundary EXplorer (IBEX). We suggest that gyrating solar wind ions and PUIs in the ramp and in the near-downstream region of the termination shock (TS) could provide a significant source of energetic neutral atoms (ENAs) in the ribbon. A fraction of the solar wind and PUIs are reflected and energized during their first contact with the TS. Some of the solar wind may be reflected and propagate toward the Sun, but most of the solar wind ions form a gyrating beam-like distribution that persists until it is fully thermalized further downstream. Depending on the strength of the shock, these gyrating distributions can exist for many gyration periods until they are scattered and thermalized by wave-particle interactions at the TS and downstream in the heliosheath. During this time, ENAs can be produced by charge exchange of interstellar neutral atoms with the gyrating ions. In order to determine the flux of energetic ions, we estimate the solar wind flux at the TS using pressure estimates inferred from in situ measurements. Assuming an average path length in the radial direction of the order of a few AU before the distribution of gyrating ions is thermalized, one can explain a significant fraction of the intensity of ENAs in the ribbon observed by IBEX. With a localized source and such a short integration path, this model would also allow fast time variations of the ENA flux.

Relevance:

30.00%

Publisher:

Abstract:

Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods.
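
To make the kind of constraints involved concrete, the toy model below schedules three operations on a single unit with sequence-dependent changeovers and minimizes the makespan; it is only a schematic fragment of such an MILP formulation, far smaller than the models and data described above, and the processing and changeover times are invented. PuLP is used purely for illustration.

```python
from itertools import combinations
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

# Toy data: processing times and sequence-dependent changeover times (minutes).
jobs = ["A", "B", "C"]
proc = {"A": 20, "B": 35, "C": 15}
setup = {("A", "B"): 5, ("B", "A"): 10, ("A", "C"): 2, ("C", "A"): 8,
         ("B", "C"): 4, ("C", "B"): 6}
M = sum(proc.values()) + sum(setup.values())        # big-M constant

prob = LpProblem("single_unit_changeover_scheduling", LpMinimize)
start = {j: LpVariable(f"start_{j}", lowBound=0) for j in jobs}
cmax = LpVariable("makespan", lowBound=0)
prob += cmax                                        # objective: minimise makespan

for i, j in combinations(jobs, 2):
    y = LpVariable(f"y_{i}_{j}", cat=LpBinary)      # 1 if i precedes j
    prob += start[j] >= start[i] + proc[i] + setup[(i, j)] - M * (1 - y)
    prob += start[i] >= start[j] + proc[j] + setup[(j, i)] - M * y
for j in jobs:
    prob += cmax >= start[j] + proc[j]

prob.solve()
print({j: start[j].value() for j in jobs}, "makespan:", cmax.value())
```

In the hybrid method described above, MILP models of this general kind would be solved repeatedly on subsets of operations, with previously scheduled operations fixed, before the critical path-based improvement step is applied.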

Relevance:

30.00%

Publisher:

Abstract:

Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems, used to compute Monte Carlo estimators and to quantify uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to often be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch-sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided, which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and are complemented by numerical experiments.
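
For context, a standard identity that residual-based conditional simulation relies on expresses a conditional sample path in terms of the kriging mean m_n built from the n observations and an unconditional path Z̃; this is textbook material, not the paper's new update formulae.

```latex
Z_{\mathrm{cond}}(x) \;=\; m_n(x) + \bigl(\tilde Z(x) - \tilde m_n(x)\bigr),
```

where \tilde m_n is the kriging predictor built from the values of \tilde Z at the n observation locations; re-using the same \tilde Z while updating the kriging means as the q new observations arrive is the kind of saving that such update formulae exploit.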

Relevance:

30.00%

Publisher:

Abstract:

Academic and industrial research in the late 1990s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to continuously adjust the beam search space is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove to be superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML, or maximum parsimony (MP)) should be chosen for any particular data set. A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that automatically selects an appropriate method. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
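
A minimal, generic sketch of the flexible beam-search principle mentioned above (not the dissertation's algorithm): at every step the best beam_width partial solutions are kept, rather than a single greedy choice or the whole search space. The extend and score callbacks are hypothetical problem-specific stand-ins, e.g. attaching one more taxon to a partial topology and evaluating a distance criterion or a local reliability estimate.

```python
import heapq

def beam_search(initial, extend, score, beam_width=5, steps=10):
    """Generic beam search: at each step, expand every state in the beam and
    keep only the `beam_width` highest-scoring successors.

    extend(state) -> iterable of successor states (e.g. partial topologies
    with one more taxon attached); score(state) -> float, larger is better.
    """
    beam = [initial]
    for _ in range(steps):
        candidates = [s for state in beam for s in extend(state)]
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)
```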