851 results for large-scale systems


Relevance:

100.00%

Publisher:

Abstract:

Experimental and theoretical studies have shown the importance of stochastic processes in genetic regulatory networks and cellular processes. Cellular networks and genetic circuits often involve small numbers of key proteins such as transcription factors and signaling proteins. In recent years stochastic models have been used successfully for studying noise in biological pathways, and stochastic modelling of biological systems has become a very important research field in computational biology. One of the challenging problems in this field is the reduction of the huge computing time of stochastic simulations. Based on the mitogen-activated protein kinase cascade that is activated by epidermal growth factor, this work gives a parallel implementation using OpenMP and parallelism across the simulations. Special attention is paid to the independence of the random numbers generated in parallel, which is a key criterion for the success of stochastic simulations. Numerical results indicate that parallel computers can be used as an efficient tool for simulating the dynamics of large-scale genetic regulatory networks and cellular processes.
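
The idea of "parallelism across the simulations" with independent random number streams can be sketched as follows. This is a minimal illustration using Python's multiprocessing as a stand-in for the paper's OpenMP implementation; the birth-death model is a toy stand-in for the EGF-activated MAPK cascade, and all rates and names are illustrative assumptions.

```python
# Independent stochastic simulation runs in parallel, each with its own
# RNG stream spawned from a common SeedSequence (the independence
# requirement the abstract highlights).
import numpy as np
from multiprocessing import Pool

def gillespie_birth_death(seed_seq, k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0):
    """One stochastic simulation algorithm (SSA) run with its own RNG stream."""
    rng = np.random.default_rng(seed_seq)
    t, x = 0.0, x0
    while t < t_end:
        rates = np.array([k_prod, k_deg * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)    # time to next reaction
        if rng.random() < rates[0] / total:  # choose which reaction fires
            x += 1                           # production
        else:
            x -= 1                           # degradation
    return x

if __name__ == "__main__":
    # SeedSequence.spawn yields statistically independent child streams.
    children = np.random.SeedSequence(12345).spawn(1000)
    with Pool() as pool:
        finals = pool.map(gillespie_birth_death, children)
    print(np.mean(finals), np.std(finals))  # ensemble statistics over runs
```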

Relevance:

100.00%

Publisher:

Abstract:

An automatic approach to road lane marking extraction from high-resolution aerial images is proposed, which can automatically detect road surfaces in rural areas based on hierarchical image analysis. The procedure is facilitated by road centrelines obtained from low-resolution images. The lane markings are then extracted on the generated road surfaces with 2D Gabor filters. The proposed method is applied to aerial images of the Bruce Highway around Gympie, Queensland. Evaluation of the generated road surfaces and lane markings on four representative test fields validates the proposed method.
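
The Gabor-filter step can be sketched as below. This is a minimal illustration of enhancing elongated bright markings with a bank of oriented 2D Gabor kernels; the kernel parameters and the input filename are assumptions for illustration, not the authors' values.

```python
# Lane-marking enhancement with a bank of oriented 2D Gabor filters.
import cv2
import numpy as np

def gabor_response(gray, n_orientations=8):
    """Max response over oriented Gabor kernels; elongated marks stand out."""
    out = np.zeros(gray.shape, dtype=np.float32)
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kern = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        out = np.maximum(out, resp)
    return out

img = cv2.imread("road_surface.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
resp = gabor_response(img)
# Simple threshold on the response map; a real pipeline would post-process.
markings = (resp > resp.mean() + 2 * resp.std()).astype(np.uint8) * 255
```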

Relevance:

100.00%

Publisher:

Abstract:

Traversability maps are a global spatial representation of the relative difficulty of driving through a local region. These maps support simple optimisation of robot paths and have been very popular in path planning techniques. Despite their popularity, the methods for generating global traversability maps have been limited to using a priori information. This paper explores the construction of large-scale traversability maps for a vehicle performing a repeated activity in a bounded working environment, such as a repeated delivery task. We evaluate the use of vehicle power consumption, longitudinal slip, lateral slip and vehicle orientation to classify traversability, and incorporate this into a map generated from sparse information.
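
One way to picture folding such per-cell driving measurements into a sparse map is sketched below. The feature weighting and the cell update are assumptions made for illustration, not the paper's classifier.

```python
# Incrementally build a sparse traversability map from driving measurements.
from collections import defaultdict

def traversability_cost(power_w, long_slip, lat_slip, roll_rad):
    """Higher power draw, slip, or roll -> harder to traverse (cost in [0, 1])."""
    return min(1.0, 0.3 * power_w / 500.0 + 0.3 * long_slip
                    + 0.2 * lat_slip + 0.2 * abs(roll_rad) / 0.5)

grid = defaultdict(lambda: (0.0, 0))  # cell -> (running mean cost, sample count)

def update(cell, cost):
    mean, n = grid[cell]
    grid[cell] = ((mean * n + cost) / (n + 1), n + 1)  # incremental mean

update((12, 7), traversability_cost(power_w=420, long_slip=0.15,
                                    lat_slip=0.05, roll_rad=0.1))
```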

Relevance:

100.00%

Publisher:

Abstract:

Project work can involve multiple people from varying disciplines coming together to solve problems as a group. Large-scale interactive displays present new opportunities to support such interactions with interactive and semantically enabled cooperative work tools such as intelligent mind maps. In this paper, we present a novel digital, touch-enabled mind-mapping tool as a first step towards achieving such a vision. This first prototype allows an evaluation of the benefits of a digital environment for a task that would otherwise be performed on paper or flat interactive surfaces. Observations and surveys of 12 participants in 3 groups allowed the formulation of several recommendations for further research into: new methods for capturing text input on touch screens; the inclusion of complex structures; multi-user environments and how users make the shift from single-user applications; and how best to navigate large screen real estate in a touch-enabled, co-present multi-user setting.

Relevance:

100.00%

Publisher:

Abstract:

Large-scale integration of non-inertial generators such as wind farms will create frequency stability issues due to reduced system inertia. Inertia-based frequency stability studies are important for predicting the performance of power systems with increased levels of renewables. This paper focuses on the impact of large-scale wind penetration on the frequency stability of the Australian power network. MATLAB Simulink is used to develop a frequency-based dynamic model utilizing the network data from a simplified 14-generator Australian power system. The loss of generation is modelled as the active power disturbance, and the minimum inertia required to maintain frequency stability is determined for the five-area power system.
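
The mechanism behind such studies is the aggregated swing equation: the initial rate of change of frequency (RoCoF) after a loss of generation scales inversely with system inertia. A minimal worked sketch follows; the inertia constant, system size and disturbance magnitude are illustrative assumptions, not values from the paper.

```python
# Frequency decline after a loss-of-generation event via the swing equation:
# df/dt = f0 * dP / (2 * H * S), before any governor response.
f0 = 50.0    # nominal frequency, Hz
H = 4.0      # aggregated inertia constant, s (assumed)
S = 20e9     # system MVA base, VA (assumed)
dP = -1.0e9  # loss of 1 GW of generation, W (assumed)

rocof = f0 * dP / (2 * H * S)
print(f"Initial RoCoF: {rocof:.3f} Hz/s")

# Halving the inertia doubles the RoCoF, which is why high penetration of
# non-inertial wind generation tightens the frequency-stability margin.
print(f"RoCoF at half inertia: {f0 * dP / (2 * (H / 2) * S):.3f} Hz/s")
```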

Relevance:

100.00%

Publisher:

Abstract:

Numerous efforts have been dedicated to the synthesis of large-volume methacrylate monoliths for large-scale biomolecule purification, but most have been obstructed by the enormous release of exotherms during preparation, which introduces structural heterogeneity into the monolith pore system. A significant radial temperature gradient develops across the monolith thickness, reaching a terminal temperature that exceeds the maximum temperature allowable for preparing structurally homogeneous monoliths. The enormous heat build-up is understood to comprise the heat associated with initiator decomposition and the heat released by free radical-monomer and monomer-monomer interactions. In this work, the heat resulting from initiator decomposition was expelled, along with some gaseous fumes, before commencing polymerization in a gradual-addition fashion. The characteristics of an 80 mL monolith prepared using this technique were compared with those of a similar monolith synthesized in bulk polymerization mode. A close similarity in the radial temperature profiles was observed for the monolith synthesized via the heat expulsion technique. A maximum radial temperature gradient of only 4.3°C was recorded at the centre, and 2.1°C at the monolith periphery, for the combined heat expulsion and gradual addition technique. The comparable radial temperature distributions gave rise to identical pore size distributions at different radial points across the monolith thickness.

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller creating ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models are also facilitated by the ability to quickly set up alternative simulations.
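
The composition idea can be sketched as follows: agents are assembled at runtime from atomic components rather than defined as monolithic classes, so new component types extend the model without touching existing code. The component names and interfaces here are illustrative assumptions, not MODAM's actual API.

```python
# Dynamic agent composition: an agent is a bag of atomic components
# assembled at runtime; its behaviour is the sum of its parts.
class Agent:
    def __init__(self, name, components):
        self.name = name
        self.components = components  # atomic units composed at runtime

    def step(self, t):
        # Net power at hour t is the sum of each component's contribution.
        return sum(c.power_kw(t) for c in self.components)

class SolarPanel:
    def __init__(self, kw_peak): self.kw_peak = kw_peak
    def power_kw(self, t):
        return self.kw_peak if 9 <= t % 24 <= 15 else 0.0  # crude daylight model

class HouseholdLoad:
    def __init__(self, base_kw): self.base_kw = base_kw
    def power_kw(self, t):
        return -self.base_kw  # constant draw from the network

# Mix-and-match components to form agents without modifying existing code:
house = Agent("house_42", [HouseholdLoad(base_kw=1.2), SolarPanel(kw_peak=5.0)])
print([house.step(t) for t in (3, 12)])  # net kW at night vs midday
```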

Relevance:

100.00%

Publisher:

Abstract:

There is an increasing need in biology and clinical medicine to robustly and reliably measure tens to hundreds of peptides and proteins in clinical and biological samples with high sensitivity, specificity, reproducibility and repeatability. Previously, we demonstrated that LC-MRM-MS with isotope dilution has suitable performance for quantitative measurements of small numbers of relatively abundant proteins in human plasma, and that the resulting assays can be transferred across laboratories while maintaining high reproducibility and quantitative precision. Here we significantly extend that earlier work, demonstrating that 11 laboratories using 14 LC-MS systems can develop highly multiplexed MRM-MS assays targeting 125 peptides derived from 27 cancer-relevant proteins and 7 control proteins, determine their analytical figures of merit, and apply them to precisely and reproducibly measure the analytes in human plasma. To ensure consistent generation of high-quality data, we incorporated a system suitability protocol (SSP) into our experimental design. The SSP enabled real-time monitoring of LC-MRM-MS performance during assay development and implementation, facilitating early detection and correction of chromatographic and instrumental problems. Low to sub-nanogram/mL sensitivity for proteins in plasma was achieved by one-step immunoaffinity depletion of 14 abundant plasma proteins prior to analysis. Median intra- and inter-laboratory reproducibility was <20%, sufficient for most biological studies and for candidate protein biomarker verification. Digestion recovery of peptides was assessed, and quantitative accuracy was improved using heavy-isotope-labeled versions of the proteins as internal standards. Using the highly multiplexed assay, participating laboratories were able to precisely and reproducibly determine the levels of a series of analytes in blinded samples used to simulate an inter-laboratory clinical study of patient samples. Our study further establishes that LC-MRM-MS using stable isotope dilution, with appropriate attention to analytical validation and appropriate quality control measures, enables sensitive, specific, reproducible and quantitative measurements of proteins and peptides in complex biological matrices such as plasma.
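
The reproducibility figure quoted above is a coefficient of variation (CV). A minimal sketch of computing intra- and inter-laboratory CV for one peptide follows; the data values are made up for illustration.

```python
# Intra- and inter-laboratory coefficient of variation for one analyte.
import numpy as np

# Replicate measurements (ng/mL) of one peptide: rows are labs.
measurements = np.array([
    [10.2,  9.8, 10.5],  # lab A
    [11.0, 10.7, 10.9],  # lab B
    [ 9.5,  9.9,  9.7],  # lab C
])

# Within each lab: spread of replicates relative to that lab's mean.
intra_cv = measurements.std(axis=1, ddof=1) / measurements.mean(axis=1) * 100
# Across labs: spread of lab means relative to the grand mean.
inter_cv = measurements.mean(axis=1).std(ddof=1) / measurements.mean() * 100
print(f"intra-lab CV per lab (%): {np.round(intra_cv, 1)}")
print(f"inter-lab CV (%): {inter_cv:.1f}")
```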

Relevance:

100.00%

Publisher:

Abstract:

Public buildings and large infrastructure are typically monitored by tens or hundreds of cameras, all capturing different physical spaces and observing different types of interactions and behaviours. However, to date, in large part due to limited data availability, crowd monitoring and operational surveillance research has focused on single-camera scenarios, which are not representative of real-world applications. In this paper we present a new, publicly available database for large-scale crowd surveillance. Footage from 12 cameras for a full work day, covering the main floor of a busy university campus building, including an internal and external foyer, elevator foyers, and the main external approach, is provided, alongside annotations for crowd counting (single- or multi-camera) and pedestrian flow analysis for 10 and 6 sites respectively. We describe how this large dataset can be used to perform distributed monitoring of building utilisation, and demonstrate the potential of this dataset for understanding and learning the relationship between different areas of a building.

Relevance:

100.00%

Publisher:

Abstract:

Moreton Island and several other large siliceous sand dune islands and mainland barrier deposits in SE Queensland represent the distal, onshore component of an extensive Quaternary continental shelf sediment system. This sediment has been transported up to 1000 km along the coast and shelf of SE Australia over multiple glacioeustatic sea-level cycles. Stratigraphic relationships and a preliminary Optically Stimulated Luminescence (OSL) chronology for Moreton Island indicate a middle Pleistocene age for the large majority of the deposit. Dune units exposed in the centre of the island and on the east coast have OSL ages that indicate deposition occurred between approximately 540 ka and 350 ka BP, and at around 96±10 ka BP. Much of the southern half of the island has a veneer of much younger sediment, with OSL ages of 0.90±0.11 ka, 1.28±0.16 ka, 5.75±0.53 ka and <0.45 ka BP. The younger deposits were partially derived from the reworking of the upper leached zone of the much older dunes. A large parabolic dune at the northern end of the island, with an OSL age of 9.90±1.0 ka BP, and palaeosol exposures that extend below present sea level suggest the Pleistocene dunes were sourced from shorelines positioned several to tens of metres lower than, and up to a few kilometres seaward of, the present shoreline. Given the lower gradient of the inner shelf a few kilometres seaward of the island, it seems likely that periods of intermediate sea level (e.g. ~20 m below present) produced strongly positive onshore sediment budgets and the mobilisation of dunes inland to form much of what now comprises Moreton Island. The new OSL ages, together with the comprehensive OSL chronology for the Cooloola deposit 100 km north of Moreton Island, indicate that the bulk of the coastal dune deposits in SE Queensland were emplaced between approximately 540 ka BP and the Last Interglacial. This chronostratigraphic information improves our fundamental understanding of long-term sediment transport and accumulation in large-scale continental shelf sediment systems.

Relevance:

100.00%

Publisher:

Abstract:

In the mining optimisation literature, most researchers have focused on two strategic- and tactical-level open-pit mine optimisation problems, termed the ultimate pit limit (UPIT) and constrained pit limit (CPIT) problems respectively. However, many researchers indicate that the substantial numbers of variables and constraints in real-world instances (e.g., with 50-1000 thousand blocks) make the CPIT's mixed integer programming (MIP) model intractable in practice. It thus becomes a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or on complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms based on network flow graphs and conjunctive graph theory are developed by taking advantage of problem properties. The performance of the proposed algorithms is validated on the recent large-scale benchmark UPIT and CPIT instance datasets of MineLib (2013). In comparison to the best known results from MineLib, the proposed algorithms are shown to outperform the other CPIT solution approaches in the literature. The proposed graph-based algorithms lead to a more capable mine scheduling optimisation expert system, because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
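
The network-flow view of UPIT can be sketched concretely: the ultimate pit is a maximum-weight closure of the block-precedence graph, solvable as a minimum s-t cut (Picard's reduction). The tiny block model below is illustrative only; the paper's own algorithms are more elaborate.

```python
# UPIT as max-weight closure via min-cut on a tiny block model.
import networkx as nx

values = {"a": 5.0, "b": -1.0, "c": -1.0}  # block economic values
precedence = [("a", "b"), ("a", "c")]       # mining a requires removing b and c

G = nx.DiGraph()
for blk, v in values.items():
    if v > 0:
        G.add_edge("s", blk, capacity=v)    # positive blocks hang off the source
    else:
        G.add_edge(blk, "t", capacity=-v)   # negative blocks feed the sink
for u, v in precedence:
    G.add_edge(u, v, capacity=float("inf"))  # closure (precedence) arcs

cut_value, (s_side, _) = nx.minimum_cut(G, "s", "t")
pit = s_side - {"s"}                        # source side of the cut = optimal pit
profit = sum(values[b] for b in pit)
print(pit, profit)                          # {'a', 'b', 'c'}, 3.0
```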

Relevance:

100.00%

Publisher:

Abstract:

During the past ten years, large-scale transcript analysis using microarrays has become a powerful tool to identify and predict functions for new genes. It allows simultaneous monitoring of the expression of thousands of genes and has become a routinely used tool in laboratories worldwide. Microarray analysis will, together with other functional genomics tools, take us closer to understanding the functions of all genes in the genomes of living organisms.

Flower development is a genetically regulated process which has mostly been studied in the traditional model species Arabidopsis thaliana, Antirrhinum majus and Petunia hybrida. The molecular mechanisms behind flower development in these species are partly applicable to other plant systems. However, not all biological phenomena can be approached with just a few model systems. In order to understand and apply the knowledge to ecologically and economically important plants, other species also need to be studied. Sequencing of 17 000 ESTs from nine different cDNA libraries of the ornamental plant Gerbera hybrida made it possible to construct a cDNA microarray with 9000 probes. The probes of the microarray represent all the different ESTs in the database. Of the gerbera ESTs, 20% were unique to gerbera, while 373 were specific to the Asteraceae family of flowering plants.

Gerbera has composite inflorescences with three different types of flowers that vary from each other morphologically. The marginal ray flowers are large, often pigmented and female, while the central disc flowers are smaller, more radially symmetrical perfect flowers. The intermediate trans flowers are similar to ray flowers but smaller in size. This feature, together with the molecular tools applied to gerbera, makes gerbera a unique system in comparison to the common model plants, which have only a single kind of flower in their inflorescence.

In the first part of this thesis, conditions for gerbera microarray analysis were optimised, including experimental design, sample preparation and hybridization, as well as data analysis and verification. In this first study, flower- and flower organ-specific genes were also identified. After the reliability and reproducibility of the method were confirmed, the microarrays were used to investigate transcriptional differences between ray and disc flowers. This study revealed novel information about morphological development as well as the transcriptional regulation of the early stages of development in the various flower types of gerbera. The most interesting finding was the differential expression of MADS-box genes, suggesting the existence of flower type-specific regulatory complexes in the specification of the different flower types.

The gerbera microarray was further used to profile changes in expression during petal development. Gerbera ray flower petals are large, which makes them an ideal model to study organogenesis. Six different stages were compared and specifically analysed. Expression profiles of genes related to cell structure and growth implied that during stage 2 cells divide, a process marked by the expression of histones, cyclins and tubulins. Stage 4 was found to be a transition stage between cell division and expansion, and by stage 6 cells had stopped dividing and instead underwent expansion. Interestingly, at the last analysed stage, stage 9, when cells no longer grew, the highest number of upregulated genes was detected.

The gerbera microarray is a fully functioning tool for large-scale studies of flower development, and correlation with real-time RT-PCR results shows that it is also highly sensitive and reliable. The gene expression data presented here will be a source for gene expression mining and marker gene discovery in future studies performed in the Gerbera Laboratory. The publicly available data will also serve the plant research community worldwide.

Relevance:

100.00%

Publisher:

Abstract:

In the spectral stochastic finite element method for analyzing an uncertain system, the uncertainty is represented by a set of random variables, and a quantity of interest such as the system response is considered as a function of these random variables. Consequently, the underlying Galerkin projection yields a block system of deterministic equations where the blocks are sparse but coupled. The solution of this algebraic system of equations rapidly becomes challenging when the size of the physical system and/or the level of uncertainty is increased. This paper addresses this challenge by presenting a preconditioned conjugate gradient method for such block systems, where the preconditioning step is based on the dual-primal finite element tearing and interconnecting (FETI-DP) method equipped with a Krylov subspace reuse technique for accelerating the iterative solution of systems with multiple and repeated right-hand sides. Preliminary performance results on a Linux cluster suggest that the proposed solution method is numerically scalable, and demonstrate its potential for making the uncertainty quantification of realistic systems tractable.
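
For reference, the preconditioned conjugate gradient iteration the paper builds on is sketched below, with a simple Jacobi (diagonal) preconditioner standing in for the FETI-DP preconditioner described above.

```python
# Preconditioned conjugate gradient (PCG) for a symmetric positive
# definite system A x = b; M_inv applies the preconditioner.
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    z = M_inv(r)               # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # new search direction
        rz = rz_new
    return x

# Tiny SPD test system; the Jacobi preconditioner divides by diag(A).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))
print(x, A @ x - b)  # residual ~ 0
```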

Relevance:

100.00%

Publisher:

Abstract:

Twitter’s hashtag functionality is now used for a very wide variety of purposes, from covering crises and other breaking news events, through gathering an instant community around shared media texts (such as sporting events and TV broadcasts), to signalling emotive states from amusement to despair. These divergent uses of the hashtag are increasingly recognised in the literature, with attention paid especially to the ability of hashtags to facilitate the creation of ad hoc or hashtag publics. A more comprehensive understanding of these different uses of hashtags has yet to be developed, however. Previous research has explored the potential for a systematic analysis of the quantitative metrics that can be generated from processing a series of hashtag datasets. Such research found, for example, that crisis-related hashtags exhibited a significantly larger incidence of retweets and of tweets containing URLs than hashtags relating to televised events, and on this basis hypothesised that the information-seeking and -sharing behaviours of Twitter users in these different contexts were substantially divergent. This article updates that study and its methodology by examining the communicative metrics of a considerably larger and more diverse set of hashtag datasets, compiled over the past five years. This provides an opportunity both to confirm earlier findings and to explore whether hashtag use practices have shifted as Twitter’s userbase has developed further; it also enables the identification of further hashtag types beyond the “crisis” and “mainstream media event” types outlined to date. The article also explores the presence of such patterns beyond recognised hashtags, by incorporating an analysis of a number of keyword-based datasets. This large-scale, comparative approach contributes towards the establishment of a more comprehensive typology of hashtags and their publics, and the metrics it describes will also be able to be used to classify new hashtags emerging in the future. In turn, this may enable researchers to develop systems for automatically distinguishing newly trending topics into a number of event types, which may be useful, for example, for the automatic detection of acute crises and other breaking news events.
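
The kind of communicative metrics discussed here, such as the share of retweets and of URL-bearing tweets in a hashtag dataset, can be sketched as below. The input format (one dict per tweet) is an assumption for illustration, not the authors' actual data pipeline.

```python
# Per-dataset communicative metrics: % retweets and % tweets with URLs.
def hashtag_metrics(tweets):
    """tweets: iterable of {'text': str, 'is_retweet': bool} dicts."""
    n = rt = url = 0
    for t in tweets:
        n += 1
        rt += t["is_retweet"]
        url += ("http://" in t["text"]) or ("https://" in t["text"])
    return {"pct_retweets": 100 * rt / n, "pct_with_urls": 100 * url / n}

sample = [
    {"text": "RT @abc Flood warning https://example.org", "is_retweet": True},
    {"text": "Stay safe everyone", "is_retweet": False},
]
print(hashtag_metrics(sample))  # crisis hashtags tend to score high on both
```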

Relevance:

100.00%

Publisher:

Abstract:

Observational studies indicate that the convective activity of monsoon systems undergoes intraseasonal variations on multi-week time scales. The zone of maximum monsoon convection exhibits substantial transient behavior, with successive propagations from the North Indian Ocean to the heated continent. Over South Asia the zone achieves its maximum intensity. These propagations may extend over 3000 km in latitude, and perhaps twice that distance in longitude, and remain coherent entities for periods greater than 2-3 weeks.

Attempts to explain this phenomenon using simple ocean-atmosphere models of the monsoon system had concluded that the interactive ground hydrology so modifies the total heating of the atmosphere that a steady-state solution is not possible, thus promoting lateral propagation. That is, the ground hydrology forces the total heating of the atmosphere and the vertical velocity to be slightly out of phase, causing a migration of the convection towards the region of maximum heating. Whereas the lateral scale of the variations produced by the Webster (1983) model was essentially correct, they occurred at twice the frequency of the observed events and formed near the coastal margin rather than over the ocean.

Webster's (1983) model used to pose the theories was deficient in a number of aspects. In particular, both the ground moisture content and the thermal inertia of the model were severely underestimated. At the same time, the sea surface temperatures produced by the model between the equator and the model's land-sea boundary were far too cool. Both the atmosphere and the ocean model were therefore modified to include a better hydrological cycle and ocean structure. The convective events produced by the modified model possessed the observed frequency and were generated well south of the coastline.

The improved simulation of monsoon variability allowed the hydrological cycle feedback to be generalized. It was found that monsoon variability is constrained to lie within the bounds of a positive gradient of a convective intensity potential (I). This function depends primarily on the surface temperature, the availability of moisture and the stability of the lower atmosphere, which vary very slowly on the time scale of months. The oscillations of the monsoon perturb the mean convective intensity potential, causing local enhancements of the gradient. These perturbations are caused by the hydrological feedbacks discussed above, or by the modification of the air-sea fluxes caused by variations of the low-level wind during convective events. The final result is the slow northward propagation of convection within an even slower convective regime.

Although it is considered premature to use the model to conduct simulations of the African monsoon system, the ECMWF analyses show very similar behavior in the convective intensity potential there, suggesting, at least, that the same processes control the low-frequency structure of the African monsoon. The implications of these hypotheses for numerical weather prediction of monsoon phenomena are discussed.