931 results for large scale linear system


Relevance:

100.00%

Publisher:

Abstract:

Numerous efforts have been dedicated to the synthesis of large-volume methacrylate monoliths for large-scale biomolecule purification, but most were obstructed by the enormous release of exotherms during preparation, which introduces structural heterogeneity into the monolith pore system. A significant radial temperature gradient develops across the monolith thickness, reaching a terminal temperature that exceeds the maximum temperature allowable for the preparation of structurally homogeneous monoliths. The heat build-up comprises the heat associated with initiator decomposition and the heat released by free radical-monomer and monomer-monomer interactions. In the technique reported here, the heat resulting from initiator decomposition was expelled, along with some gaseous fumes, before polymerization was commenced in a gradual-addition fashion. Characteristics of an 80 mL monolith prepared using this technique were compared with those of a similar monolith synthesized in bulk polymerization mode. A close similarity in the radial temperature profiles was observed for the monolith synthesized via the heat-expulsion technique. A maximum radial temperature gradient of only 4.3°C was recorded at the center, and 2.1°C at the monolith periphery, for the combined heat-expulsion and gradual-addition technique. The comparable radial temperature distributions yielded identical pore size distributions at different radial points across the monolith thickness.

Abstract:

PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). The approach was motivated by the need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought, so that simulations can be set up easily, without the need to program. METHODS: In the dynamic agent composition approach, agents whose implementation has been broken into atomic units come together at runtime to form the complex-system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller creating ABMs. RESULTS: The paper describes dynamic agent composition and details its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network; illustrations of the implementation are given for that domain throughout the paper. The approach is, however, expected to benefit other problem domains as well, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users and developers as well as for agent-based modelling as a scientific approach. Developers can extend a model without needing to access or modify previously written code; they can develop groups of entities independently and add them to those already defined. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation, and verification and validation of models are facilitated by quickly setting up alternative simulations.
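As an illustration of the idea (a minimal sketch, not MODAM's actual API; all class and attribute names below are hypothetical), an agent can be assembled at runtime from atomic behaviour components, so the model is extended by writing new component classes rather than modifying existing code:

```python
# Hypothetical sketch of dynamic agent composition: agents are assembled at
# runtime from atomic components, so new behaviour is added by defining new
# component classes, never by editing previously written agent code.

class Component:
    """Atomic unit of behaviour attached to an agent at runtime."""
    def step(self, agent):
        pass

class Consumption(Component):
    def __init__(self, demand_kw):
        self.demand_kw = demand_kw
    def step(self, agent):
        agent.state["load_kw"] = agent.state.get("load_kw", 0.0) + self.demand_kw

class SolarGeneration(Component):
    def __init__(self, peak_kw):
        self.peak_kw = peak_kw
    def step(self, agent):
        agent.state["load_kw"] = agent.state.get("load_kw", 0.0) - self.peak_kw

class Agent:
    """An agent is just a named bag of components composed at runtime."""
    def __init__(self, name, components):
        self.name = name
        self.components = list(components)
        self.state = {}
    def step(self):
        for c in self.components:
            c.step(self)

# Mix and match components to form a household on the distribution network.
household = Agent("house_1", [Consumption(3.5), SolarGeneration(2.0)])
household.step()
print(household.state["load_kw"])  # net load: 3.5 - 2.0 = 1.5
```

A new entity type (say, battery storage) would then be one more `Component` subclass, combinable with the existing ones without touching them.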

Abstract:

There is an increasing need in biology and clinical medicine to robustly and reliably measure tens to hundreds of peptides and proteins in clinical and biological samples with high sensitivity, specificity, reproducibility and repeatability. Previously, we demonstrated that LC-MRM-MS with isotope dilution has suitable performance for quantitative measurement of small numbers of relatively abundant proteins in human plasma, and that the resulting assays can be transferred across laboratories while maintaining high reproducibility and quantitative precision. Here we significantly extend that earlier work, demonstrating that 11 laboratories using 14 LC-MS systems can develop, characterize with analytical figures of merit, and apply highly multiplexed MRM-MS assays targeting 125 peptides derived from 27 cancer-relevant proteins and 7 control proteins to precisely and reproducibly measure the analytes in human plasma. To ensure consistent generation of high-quality data, we incorporated a system suitability protocol (SSP) into our experimental design. The SSP enabled real-time monitoring of LC-MRM-MS performance during assay development and implementation, facilitating early detection and correction of chromatographic and instrumental problems. Low- to sub-nanogram/mL sensitivity for proteins in plasma was achieved by one-step immunoaffinity depletion of 14 abundant plasma proteins prior to analysis. Median intra- and inter-laboratory reproducibility was <20%, sufficient for most biological studies and for verification of candidate protein biomarkers. Digestion recovery of peptides was assessed, and quantitative accuracy was improved, using heavy-isotope-labeled versions of the proteins as internal standards. Using the highly multiplexed assay, participating laboratories were able to precisely and reproducibly determine the levels of a series of analytes in blinded samples used to simulate an inter-laboratory clinical study of patient samples.
Our study further establishes that LC-MRM-MS using stable isotope dilution, with appropriate attention to analytical validation and appropriate quality control measures, enables sensitive, specific, reproducible and quantitative measurement of proteins and peptides in complex biological matrices such as plasma.
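The reproducibility figure quoted above is a percent coefficient of variation (CV) across laboratories. As a minimal sketch of that metric (the measurements below are made-up numbers, not the study's data):

```python
# Hypothetical sketch of the inter-laboratory reproducibility metric:
# percent coefficient of variation (CV) of one peptide's measured
# concentration across laboratories.
import statistics

def percent_cv(values):
    """100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical measurements (ng/mL) of one peptide reported by five labs.
lab_values = [10.2, 9.8, 10.5, 9.6, 10.1]
cv = percent_cv(lab_values)
print(round(cv, 2))
assert cv < 20.0  # within the median reproducibility reported in the study
```

In practice this would be computed per peptide and per concentration level, and the median taken over all assays.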

Abstract:

In the mining optimisation literature, most researchers have focused on two strategic- and tactical-level open-pit mine optimisation problems, termed the ultimate pit limit (UPIT) problem and the constrained pit limit (CPIT) problem, respectively. However, many researchers note that the substantial numbers of variables and constraints in real-world instances (e.g., with 50-1000 thousand blocks) make the CPIT's mixed integer programming (MIP) model intractable in practice. It therefore becomes a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or on complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms, built on network flow graphs and conjunctive graph theory, are developed by exploiting problem properties. The performance of the proposed algorithms is validated on the large-scale benchmark UPIT and CPIT instance datasets of MineLib (2013). Compared against the best known results from MineLib, the proposed algorithms outperform the other CPIT solution approaches in the literature. The proposed graph-based algorithms lead to a more competent mine-scheduling optimisation expert system, because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
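The abstract does not spell out its algorithms, but the classical network-flow view of UPIT (Picard's reduction, a standard result rather than this paper's method) conveys why graph algorithms can replace a MIP optimiser here: blocks with positive value are connected from the source, blocks with negative value to the sink, precedence arcs get infinite capacity, and the source side of a minimum cut is an optimal pit. A toy sketch:

```python
# Sketch of the classical max-flow/min-cut formulation of UPIT (Picard's
# reduction), illustrating the network-flow idea; this is NOT the paper's
# specific algorithm. Positive-value blocks attach to the source, negative
# ones to the sink, and precedence arcs are uncapacitated.
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity graph."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, parent  # parent keys = source side of the min cut
        # augment along the path found
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += bottleneck
        flow += bottleneck

# Toy instance: block 'b' is worth +5, but mining it requires first removing
# block 'a', worth -3. Precedence arc b -> a carries infinite capacity.
INF = float("inf")
cap = {"s": {"b": 5}, "a": {"t": 3}, "b": {"a": INF}}
flow, parent = max_flow(cap, "s", "t")
pit = {u for u in parent if u != "s"}
print(pit)  # {'a', 'b'} (order may vary): mining both yields 5 - 3 = 2 > 0
```

The min-cut value (3) equals the total positive value (5) minus the optimal pit value (2), which is why the cut's source side is the optimal pit.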

Abstract:

* Plant response to drought is complex, and traits adapted to one type of drought can be disadvantageous under another. Understanding which type(s) of drought to target is of prime importance for crop improvement. * Modelling was used to quantify seasonal drought patterns for a check variety across the Australian wheatbelt, using 123 years of weather data for representative locations and managements. Two other genotypes were used to simulate the impact of maturity on drought pattern. * Four major environment types summarized the variability in drought pattern over time and space. Severe stress beginning before flowering was common (44% of occurrences), with (24%) or without (20%) relief during grain filling. Variability from year to year was high and differed among geographical regions. With few exceptions, all four environment types occurred in most seasons for each location, management system and genotype. * Applications of this environment characterization are proposed to help breeding and research focus on germplasm, traits and genes of interest for target environments. The method was applied at a continental scale to highly variable environments and could be extended to other crops, to other drought-prone regions around the world, and to quantifying potential changes in drought patterns under future climates.

Abstract:

During the past ten years, large-scale transcript analysis using microarrays has become a powerful tool to identify and predict functions for new genes. It allows simultaneous monitoring of the expression of thousands of genes and has become a routine tool in laboratories worldwide. Microarray analysis will, together with other functional genomics tools, take us closer to understanding the functions of all genes in the genomes of living organisms. Flower development is a genetically regulated process that has mostly been studied in the traditional model species Arabidopsis thaliana, Antirrhinum majus and Petunia hybrida. The molecular mechanisms behind flower development in these species are partly applicable to other plant systems. However, not all biological phenomena can be approached with just a few model systems; in order to understand and apply the knowledge to ecologically and economically important plants, other species also need to be studied. Sequencing of 17 000 ESTs from nine different cDNA libraries of the ornamental plant Gerbera hybrida made it possible to construct a cDNA microarray with 9000 probes, representing all the different ESTs in the database. Of the gerbera ESTs, 20% were unique to gerbera, while 373 were specific to the Asteraceae family of flowering plants. Gerbera has composite inflorescences with three types of flowers that differ from each other morphologically. The marginal ray flowers are large, often pigmented and female, while the central disc flowers are smaller, more radially symmetrical perfect flowers. The intermediate trans flowers are similar to ray flowers but smaller in size. This feature, together with the molecular tools available for gerbera, makes gerbera a unique system in comparison to the common model plants, which have only a single kind of flower in their inflorescences.
In the first part of this thesis, conditions for gerbera microarray analysis were optimised, including experimental design, sample preparation and hybridization, as well as data analysis and verification; in addition, flower- and flower-organ-specific genes were identified. After the reliability and reproducibility of the method were confirmed, the microarrays were used to investigate transcriptional differences between ray and disc flowers. This study revealed novel information about morphological development as well as the transcriptional regulation of the early stages of development in the various flower types of gerbera. The most interesting finding was the differential expression of MADS-box genes, suggesting the existence of flower-type-specific regulatory complexes in the specification of the different flower types. The gerbera microarray was further used to profile changes in expression during petal development. Gerbera ray-flower petals are large, which makes them an ideal model for studying organogenesis. Six stages were compared and analysed in detail. Expression profiles of genes related to cell structure and growth implied that during stage 2 cells divide, a process marked by expression of histones, cyclins and tubulins. Stage 4 was found to be a transition stage between cell division and expansion, and by stage 6 cells had stopped dividing and instead underwent expansion. Interestingly, at the last analysed stage, stage 9, when cells no longer grew, the highest number of upregulated genes was detected. The gerbera microarray is a fully functioning tool for large-scale studies of flower development, and correlation with real-time RT-PCR results shows that it is also highly sensitive and reliable. The gene expression data presented here will be a source for expression mining and marker-gene discovery in future studies performed in the Gerbera Laboratory.
The publicly available data will also serve the plant research community world-wide.

Abstract:

In the spectral stochastic finite element method for analyzing an uncertain system, the uncertainty is represented by a set of random variables, and a quantity of interest such as the system response is considered as a function of these random variables. Consequently, the underlying Galerkin projection yields a block system of deterministic equations in which the blocks are sparse but coupled. The solution of this algebraic system of equations rapidly becomes challenging as the size of the physical system and/or the level of uncertainty is increased. This paper addresses that challenge by presenting a preconditioned conjugate gradient method for such block systems, in which the preconditioning step is based on the dual-primal finite element tearing and interconnecting (FETI-DP) method equipped with a Krylov subspace reuse technique for accelerating the iterative solution of systems with multiple and repeated right-hand sides. Preliminary performance results on a Linux cluster suggest that the proposed solution method is numerically scalable and demonstrate its potential for making the uncertainty quantification of realistic systems tractable.
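The iteration at the heart of such a solver can be sketched in a few lines. The sketch below uses a simple Jacobi (diagonal) preconditioner as a stand-in for the paper's FETI-DP-based preconditioner, and a small dense SPD matrix in place of the sparse coupled block system:

```python
# Minimal preconditioned conjugate gradient (PCG) sketch. A Jacobi
# preconditioner stands in here for the paper's FETI-DP-based one, purely
# to illustrate the iteration structure.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv @ r                 # preconditioned residual
    p = z.copy()                  # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD system standing in for one coupled block.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
M_inv = np.diag(1.0 / np.diag(A))  # Jacobi preconditioner
x = pcg(A, b, M_inv)
print(np.allclose(A @ x, b))  # True
```

The point of a stronger (e.g. FETI-DP) preconditioner is to keep the iteration count bounded as the block system grows, which is what the paper's scalability results measure.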

Abstract:

Observational studies indicate that the convective activity of monsoon systems undergoes intraseasonal variations on multi-week time scales. The zone of maximum monsoon convection exhibits substantial transient behavior, with successive propagations from the North Indian Ocean to the heated continent; over South Asia the zone achieves its maximum intensity. These propagations may extend over 3000 km in latitude, and perhaps twice that distance in longitude, and remain coherent entities for periods greater than 2-3 weeks. Attempts to explain this phenomenon using simple ocean-atmosphere models of the monsoon system had concluded that the interactive ground hydrology so modifies the total heating of the atmosphere that a steady-state solution is not possible, thus promoting lateral propagation. That is, the ground hydrology forces the total heating of the atmosphere and the vertical velocity to be slightly out of phase, causing a migration of the convection towards the region of maximum heating. Whereas the lateral scale of the variations produced by the Webster (1983) model was essentially correct, they occurred at twice the frequency of the observed events and formed near the coastal margin rather than over the ocean. The model Webster (1983) used to pose the theories was deficient in a number of respects: both the ground moisture content and the thermal inertia of the model were severely underestimated, and the sea surface temperatures produced by the model between the equator and the model's land-sea boundary were far too cool. Both the atmosphere and the ocean model were therefore modified to include a better hydrological cycle and ocean structure. The convective events produced by the modified model possessed the observed frequency and were generated well south of the coastline. The improved simulation of monsoon variability allowed the hydrological-cycle feedback to be generalized.
It was found that monsoon variability is constrained to lie within the bounds of a positive gradient of a convective intensity potential (I). This function depends primarily on the surface temperature, the availability of moisture and the stability of the lower atmosphere, and varies very slowly on the time scale of months. The oscillations of the monsoon perturb the mean convective intensity potential, causing local enhancements of its gradient. These perturbations are caused by the hydrological feedbacks discussed above, or by the modification of the air-sea fluxes by variations of the low-level wind during convective events. The final result is a slow northward propagation of convection within an even slower convective regime. Although it is considered premature to use the model to conduct simulations of the African monsoon system, the ECMWF analyses show very similar behavior of the convective intensity potential, suggesting, at least, that the same processes control the low-frequency structure of the African monsoon. The implications of these hypotheses for numerical weather prediction of monsoon phenomena are discussed.

Abstract:

Critical applications like cyclone tracking and earthquake modeling require simultaneous high-performance simulation and online visualization for timely analysis. Faster simulations and simultaneous visualization enable scientists to provide real-time guidance to decision makers. In this work, we have developed an integrated user-driven and automated steering framework that simultaneously performs numerical simulation and efficient online remote visualization of critical weather applications in resource-constrained environments. It considers application dynamics, like the criticality of the application, and resource dynamics, like the storage space, network bandwidth and available number of processors, to adapt various application and resource parameters such as simulation resolution, simulation rate and the frequency of visualization. We formulate the problem of finding an optimal set of simulation parameters as a linear programming problem. This leads to a 30% higher simulation rate and 25-50% lower storage consumption than a naive greedy approach. The framework also gives the user control over application parameters such as the region of interest and the simulation resolution. We have also devised an adaptive algorithm to reduce the lag between simulation and visualization times. In experiments with different network bandwidths, we find that our adaptive algorithm is able to reduce the lag and to visualize the most representative frames.
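The shape of such a linear program can be sketched as follows. All coefficients below are made up for illustration (the paper's actual objective and constraints are not given in the abstract): choose a simulation rate r and visualization frequency f to maximize useful work subject to storage and bandwidth budgets.

```python
# Hypothetical sketch of the LP formulation: maximize r + 0.5*f, where r is
# the simulation rate and f the visualization frequency, subject to a
# shared storage budget and per-resource caps. Coefficients are invented.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective: maximize r + 0.5*f.
c = [-1.0, -0.5]
# storage: each simulation step writes 1 unit, each frame 2 units; budget 10
A_ub = [[1.0, 2.0]]
b_ub = [10.0]
bounds = [(0, 6.0),   # processor availability caps the simulation rate
          (0, 3.0)]   # network bandwidth caps the frame rate
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # -> rate 6.0, visualization frequency 2.0
```

The optimum saturates the processor cap first (r = 6) and spends the remaining storage budget on frames (f = 2), which is exactly the kind of trade-off a greedy heuristic can get wrong.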

Abstract:

Parabolized stability equation (PSE) models are being developed to predict the evolution of low-frequency, large-scale wavepacket structures and their radiated sound in high-speed turbulent round jets. Linear PSE wavepacket models were previously shown to be in reasonably good agreement with the amplitude envelope and phase measured using a microphone array placed just outside the jet shear layer [1, 2]. Here we show that they are also in very good agreement with hot-wire measurements at the jet centerline in the potential core, for a different set of experiments [3]. When used as a model source for an acoustic analogy, the predicted far-field noise radiation is in reasonably good agreement with microphone measurements for aft angles, where contributions from large-scale structures dominate the acoustic field. Nonlinear PSE is then employed to determine the relative importance of mode interactions on the wavepackets. A series of nonlinear computations with randomized initial conditions is used to obtain bounds for the evolution of the modes in the natural turbulent jet flow. It was found that nonlinearity has a very limited impact on the evolution of the wavepackets for St ≥ 0.3. Finally, the nonlinear mechanism for the generation of a low-frequency mode as the difference-frequency mode [4, 5] of two forced frequencies is investigated in the scope of the high-Reynolds-number jets considered in this paper.

Abstract:

Daily rainfall datasets for 10 years (1998-2007) from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) version 6 and the India Meteorological Department (IMD) gridded rain-gauge analysis have been compared over the Indian landmass, on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improves significantly (~0.9) when the study is confined to specific wet and dry spells, each of about 5-8 days. Wavelet analysis of intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contributions of the two major modes (30-50 days and 10-20 days) to range between ~30-40% and 5-10%, respectively, for the various years. Analysis of interannual variability shows that the satellite data underestimate seasonal rainfall by ~110 mm during the southwest monsoon and overestimate it by ~150 mm during the northeast monsoon season. At high spatio-temporal scales, viz. the 1° × 1° grid, TMPA data do not correspond to ground truth. We propose a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other. This is done by studying the contribution to total seasonal rainfall from different rainfall-rate windows (at 1 mm intervals) on different spatial scales (at the daily time scale). The compatibility scale is seen to be beyond a 5° × 5° average spatial scale over the Indian landmass. This will help decide the usability of TMPA products, if averaged at appropriate spatial scales, for specific process studies, e.g., at cloud, meso or synoptic scales.
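The proposed diagnostic, the contribution of each rainfall-rate window to the seasonal total, can be sketched for a single grid point as follows (the daily series below is invented for illustration; the study applies this per grid box and per spatial averaging scale):

```python
# Sketch of the rate-window diagnostic: the share of the seasonal rainfall
# total contributed by each 1 mm/day-wide rainfall-rate bin, for one grid
# point's daily series (illustrative data).
def rate_window_contributions(daily_mm, bin_width=1.0):
    """Map each rate bin (lower edge, mm/day) to its share of the total."""
    total = sum(daily_mm)
    contrib = {}
    for r in daily_mm:
        if r <= 0:
            continue
        lo = int(r // bin_width) * bin_width
        contrib[lo] = contrib.get(lo, 0.0) + r
    return {lo: s / total for lo, s in sorted(contrib.items())}

daily = [0.0, 0.5, 1.2, 3.7, 0.0, 3.1, 12.5]   # hypothetical mm/day
shares = rate_window_contributions(daily)
print(shares)
# the heaviest rate window dominates the total despite having a single day
```

Comparing these per-window shares between the TMPA and IMD datasets, at increasing spatial averaging scales, is how the compatibility scale is identified.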

Abstract:

EXTRACT (SEE PDF FOR FULL ABSTRACT): Paleoclimatic variations in western North America depend on a hierarchy of temporal and spatial controls that can be examined using a combination of modeling studies and data synthesis. ... The regional vegetation response to large-scale changes in the climate system of the last 21,000 years is used as a conceptual model to help explain earlier vegetation and climate at two localities.

Abstract:

There are over 600,000 bridges in the US, and not all of them can be inspected and maintained within the specified time frame. This is because manually inspecting bridges is a time-consuming and costly task, and some state Departments of Transportation (DOTs) cannot afford the essential costs and manpower. In this paper, a novel method that can detect large-scale bridge concrete columns is proposed, with the purpose of eventually creating an automated bridge condition assessment system. The method employs image-stitching techniques (feature detection and matching, image affine transformation, and blending) to combine images containing different segments of one column into a single image. Following that, bridge columns are detected by locating their boundaries and classifying the material within each boundary in the stitched image. Preliminary test results on 114 concrete bridge columns, stitched from 373 close-up, partial images of the columns, indicate that the method can correctly detect 89.7% of these elements, demonstrating the viability of this line of research.

Abstract:

Manually inspecting bridges is a time-consuming and costly task. There are over 600,000 bridges in the US, and not all of them can be inspected and maintained within the specified time frame, as some state DOTs cannot afford the essential costs and manpower. This paper presents a novel method that can detect bridge concrete columns from visual data, with the purpose of eventually creating an automated bridge condition assessment system. The method employs SIFT feature detection and matching to find overlapping areas among images. Affine transformation matrices are then calculated to combine images containing different segments of one column into a single image. Following that, the bridge columns are detected by identifying the boundaries in the stitched image and classifying the material within each boundary. Preliminary test results using real bridge images indicate that most columns in stitched images can be correctly detected, demonstrating the viability of this line of research.
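The affine-estimation step these two abstracts rely on can be illustrated in isolation. The sketch below fits a 2-D affine transform to point correspondences by least squares; the correspondences are synthetic stand-ins for matched SIFT keypoints, not output of the actual pipeline:

```python
# Minimal sketch of one stitching step: estimating the 2-D affine transform
# that maps one image's coordinates onto its neighbour's, from matched
# feature points (synthetic matches stand in for SIFT correspondences).
import numpy as np

def estimate_affine(src, dst):
    """Least-squares fit of the 2x3 affine matrix A with dst ~ A @ [x, y, 1]."""
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) homogeneous coords
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) solution
    return W.T                                     # (2, 3) affine matrix

# Synthetic correspondences: uniform scale 2 plus translation (10, 5).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = 2.0 * src + np.array([10.0, 5.0])
A = estimate_affine(src, dst)
print(np.round(A, 6))
# [[ 2.  0. 10.]
#  [ 0.  2.  5.]]
```

In a real pipeline the matches are noisy and contain outliers, so the fit would typically be wrapped in a robust estimator such as RANSAC before warping and blending the images.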

Abstract:

This paper presents a comparison between theoretical predictions and experimental results from a pin-on-disc test rig exploring friction-induced vibration. The model is based on a linear stability analysis of two systems coupled by sliding contact at a single point. Predictions are compared with a large volume of measured squeal initiations that have been post-processed to extract growth rates and frequencies at the onset of squeal. Initial tests reveal the importance of including both finite contact stiffness and a velocity-dependent dynamic model for friction, giving predictions that accounted for nearly all major clusters of squeal initiations from 0 to 5 kHz. However, a large number of initiations occurred at disc mode frequencies that were not predicted with the same parameters. These frequencies proved remarkably difficult to destabilise, requiring an implausibly high coefficient of friction. An attempt has been made to estimate the dynamic friction behaviour directly from the squeal initiation data, revealing complex-valued frequency-dependent parameters for a new model of linearised dynamic friction. These new parameters readily destabilised the disc modes and provided a consistent model that could account for virtually all initiations from 0 to 15 kHz. The results suggest that instability thresholds for a wide range of squeal-type behaviour can be predicted, but they highlight the central importance of a correct understanding and accurate description of dynamic friction at the sliding interface. © 2013 Elsevier Ltd. All rights reserved.
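The core of such a linear stability analysis can be shown on a much simpler system than the paper's coupled pin-on-disc model. In the single-degree-of-freedom sketch below (an illustration of the general idea, not the paper's model), a friction coefficient that falls with sliding speed contributes negative damping, and a positive real part of an eigenvalue is the predicted growth rate of squeal:

```python
# Single-degree-of-freedom illustration of friction-induced instability:
# for m*x'' + (c + N*mu')*x' + k*x = 0, a negative friction-velocity slope
# mu' can make the effective damping negative; the largest real part of the
# state-matrix eigenvalues is then the squeal growth rate.
import numpy as np

def growth_rate(m, c, k, normal_load, dmu_dv):
    """Largest real part of the eigenvalues of the state-space matrix."""
    c_eff = c + normal_load * dmu_dv
    A = np.array([[0.0, 1.0],
                  [-k / m, -c_eff / m]])
    return max(np.linalg.eigvals(A).real)

m, c, k, N = 1.0, 1.0, 1.0e4, 100.0
print(growth_rate(m, c, k, N, dmu_dv=+0.02) < 0)  # True: extra damping, stable
print(growth_rate(m, c, k, N, dmu_dv=-0.02) > 0)  # True: squeal grows
```

The paper's analysis generalizes this to two structures coupled at a sliding contact with finite contact stiffness, and its fitted dynamic-friction model plays the role of `dmu_dv` here, but as a complex-valued, frequency-dependent quantity.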