851 results for large-scale systems


Relevance: 100.00%

Abstract:

Distributed systems are widely used for solving large-scale and data-intensive computing problems, including all-to-all comparison (ATAC) problems. However, when used for ATAC problems, existing computational frameworks such as Hadoop focus on load balancing for allocating comparison tasks, without careful consideration of data distribution and storage usage. While Hadoop-based solutions provide users with simplicity of implementation, their inherent MapReduce computing pattern does not match the ATAC pattern. This leads to load imbalances and poor data locality when Hadoop's data distribution strategy is used for ATAC problems. Here we present a data distribution strategy which considers data locality, load balancing and storage savings for ATAC computing problems in homogeneous distributed systems. A simulated annealing algorithm is developed for data distribution and task scheduling. Experimental results show a significant performance improvement for our approach over Hadoop-based solutions.
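The abstract names a simulated annealing algorithm for joint data distribution and task scheduling but gives no details. The following is a purely illustrative sketch, not the authors' implementation: it assigns each comparison task to a node and derives the data placement from that assignment, penalising load imbalance and replicated storage. The cost weights, neighbourhood move and cooling schedule are assumptions chosen only to make the example self-contained.

```python
# Illustrative sketch (not the paper's algorithm): simulated annealing that
# assigns each comparison task (i, j) to a node; both input items are then
# stored on that node, so locality follows from the assignment. The cost
# penalises load imbalance and total replicated storage.
import itertools
import math
import random

def anneal(num_items, num_nodes, iters=20000, t0=1.0, alpha=0.9995, seed=0):
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(num_items), 2))

    # State: for each comparison pair, the node that will execute its task.
    assign = {p: rng.randrange(num_nodes) for p in pairs}

    def cost(a):
        load = [0] * num_nodes
        stored = [set() for _ in range(num_nodes)]
        for (i, j), n in a.items():
            load[n] += 1
            stored[n].update((i, j))        # both items must reside on node n
        imbalance = max(load) - min(load)
        storage = sum(len(s) for s in stored)
        return imbalance + 0.1 * storage    # weights are arbitrary for the sketch

    current, best = dict(assign), dict(assign)
    c_cur = c_best = cost(current)
    t = t0
    for _ in range(iters):
        p = rng.choice(pairs)
        cand = dict(current)
        cand[p] = rng.randrange(num_nodes)  # move one task to another node
        c_cand = cost(cand)
        if c_cand < c_cur or rng.random() < math.exp((c_cur - c_cand) / max(t, 1e-9)):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = dict(current), c_cur
        t *= alpha
    return best, c_best

if __name__ == "__main__":
    placement, score = anneal(num_items=12, num_nodes=4)
    print("best cost:", score)
```

In a formulation of this kind, lowering the storage term pushes tasks that share a data item onto the same node, which is what produces data locality alongside load balancing.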

Relevance: 100.00%

Abstract:

The requirement for distributed computing of all-to-all comparison (ATAC) problems in heterogeneous systems is increasingly important in various domains. Though Hadoop-based solutions are widely used, they are inefficient for the ATAC pattern, which is fundamentally different from the MapReduce pattern for which Hadoop is designed. They exhibit poor data locality and unbalanced allocation of comparison tasks, particularly in heterogeneous systems. This results in massive data movement at runtime and ineffective utilization of computing resources, significantly affecting overall computing performance. To address these problems, a scalable and efficient data and task distribution strategy is presented in this paper for processing large-scale ATAC problems in heterogeneous systems. It not only saves storage space but also achieves load balancing and good data locality for all comparison tasks. Experiments on bioinformatics examples show that about 89% of the ideal performance capacity of the multiple machines is achieved using the approach presented in this paper.

Relevance: 100.00%

Abstract:

Cooperative Intelligent Transportation Systems (C-ITS) allow in-vehicle systems, and ultimately the driver, to enhance their awareness of their surroundings by enabling communication between vehicles and road infrastructure. C-ITS are widely considered the next major step in driving assistance systems, aiming at increasing safety, comfort and mobility for drivers. However, any communicating system is subject to security threats. A key component for providing secure communications at a large scale is a Public Key Infrastructure (PKI). Due to the safety-critical nature of Vehicle-to-Vehicle (V2V) communications, a C-ITS PKI has functional, performance and scalability requirements that differ from those of traditional non-automotive environments. This paper identifies and defines the key functional and security requirements for C-ITS PKI systems and analyses proposed C-ITS PKI standards against these requirements. In particular, the proposed US and European C-ITS PKI systems are identified as being too complex and not scalable. The paper also highlights various privacy, security and scalability concerns that should be considered for a secure C-ITS PKI solution in the Australian transport landscape.

Relevance: 100.00%

Abstract:

Point sources of wastewater pollution, including effluent from municipal sewage treatment plants and intensive livestock and processing industries, can contribute significantly to the degradation of receiving waters (Chambers et al. 1997; Productivity Commission 2004). This has led to increasingly stringent local wastewater discharge quotas (particularly regarding Nitrogen, Phosphorus and suspended solids), and many municipal authorities and industry managers are now faced with upgrading their existing treatment facilities in order to comply. However, with high construction, energy and maintenance expenses and increasing labour costs, traditional wastewater treatment systems are becoming an escalating financial burden for the communities and industries that operate them. This report was generated, in the first instance, for the Burdekin Shire Council to provide information on design aspects and parameters critical for developing duckweed-based wastewater treatment (DWT) in the Burdekin region. However, the information will be relevant to a range of wastewater sources throughout Queensland. This information has been collated from published literature and both overseas and local studies of pilot and full-scale DWT systems. This report also considers options to generate revenue from duckweed production (a significant feature of DWT), and provides specifications and component cost information (current at the time of publication) for a large-scale demonstration of an integrated DWT and fish production system.

Relevance: 100.00%

Abstract:

This joint DPI/Burdekin Shire Council project assessed the efficacy of a pilot-scale biological remediation system to recover Nitrogen (N) and Phosphorus (P) nutrients from secondary treated municipal wastewater at the Ayr Sewage Treatment Plant. Additionally, this study considered potential commercial uses for by-products from the treatment system. Knowledge gained from this study can provide directions for implementing a larger-scale final effluent treatment protocol on site at the Ayr plant. Trials were conducted over 10 months and assessed nutrient removal from duckweed-based treatments and an algae/fish treatment, both as sequential and as stand-alone treatment systems. A 42.3% reduction in Total N was found through the sequential treatment system (duckweed followed by algae/fish treatment) after 6.6 days Effluent Retention Time (E.R.T.). However, duckweed treatment was responsible for the majority of this nutrient recovery (7.8 times more effective than the algae/fish treatment). Likewise, Total P reduction (15.75% after 6.6 days E.R.T.) was twice as great in the duckweed treatment. A phytoplankton bloom, which developed in the algae/fish tanks, reduced nutrient recovery in this treatment.

A second trial tested whether the addition of fish enhanced duckweed treatment by evaluating systems with and without fish. After four weeks' operation, low dissolved oxygen (DO) under the duckweed blanket caused fish mortalities. Decomposition of these fish led to an additional organic load, and this was reflected in a breakdown of nitrogen species that showed an increase in organic nitrogen. However, Dissolved Inorganic Nitrogen (DIN: ammonia, nitrite and nitrate) removal was similar between treatments with and without fish (57% and 59% removal of incoming DIN, respectively).

Overall, three effluent residence times were evaluated using duckweed-based treatments: 3.5 days, 5.5 days and 10.4 days, giving Total N removal of 37.5%, 55.7% and 70.3%, respectively. The 10.4-day E.R.T. trial, however, was evaluated by sequential nutrient removal through the duckweed-minus-fish treatment followed by the duckweed-plus-fish treatment; therefore, the 70.3% Total N removal was lower than could have been achieved at this retention time, due to the abovementioned fish mortalities. Phosphorus removal from duckweed treatments was greatest after 10.4 days E.R.T. (13.6%). Plant uptake was considered the most important mechanism for this P removal, since there was no clay substrate in the plastic tanks that could have contributed to P adsorption as part of the natural phosphorus cycle.

Duckweed inhibited phytoplankton production (thereby reducing total suspended solids) and maintained pH close to neutral. DO beneath the duckweed blanket fell to below 1 ppm; however, this did not limit plant production. If fish are to be used as part of the duckweed treatment, air-uplifts can be installed that maintain DO levels without disturbing surface waters. Duckweed grown in the treatments doubled its biomass on average every 5.7 days. On a per-surface-area basis, 1.23 kg/m² was harvested weekly. The moisture content of duckweed was 92%, equating to a total dry-weight harvest of 0.098 kg/m²/week. Nutrient analysis of dried duckweed gave an N content of 6.67% and a P content of 1.27%. According to semi-quantitative analyses, harvested duckweed contained no residual elements from the effluent stream at levels greater than ANZECC toxicant guidelines proposed for aquaculture.
In addition, jade perch, a local aquaculture species, actively consumed and gained weight on harvested duckweed, suggesting potential for large-scale fish production using by-products from the effluent treatment process. This suggests that a duckweed-based system may be one viable option for tertiary treatment of Ayr municipal wastewater. The tertiary detention lagoon proposed by the Burdekin Shire Council, consisting of six bays of approximately 290 × 35 metres (× 1.5 metres deep), would be suitable for duckweed culture with minor modification to facilitate the efficient distribution of duckweed plants across the entire available growing surface (such as floating containment grids). The effluent residence time resulting from this proposed configuration (~30 days) should be adequate to recover most effluent nutrients (certainly N), based on the current trial. Duckweed harvest techniques on this scale, however, need further investigation. Based on duckweed production in the current trial (1.23 kg/m²/week), a weekly harvest of approximately 75,000 kg (wet weight) could be expected from the proposed lagoon configuration under full duckweed production. A benefit of the proposed multi-bay lagoon is that full lagoon production of duckweed may not be needed to restore effluent to a desirable standard under the present nutrient load, and duckweed treatment could be restricted to certain bays. Restored effluent could be released without risk of contaminating the receiving waterway with duckweed by evacuating water through an internal standpipe located mid-way in the water column.
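As a quick back-of-envelope check of the figures quoted above (bay dimensions, wet yield and moisture content are taken directly from this summary), the lagoon area, the expected weekly wet harvest and the dry-weight yield can be reproduced as follows:

```python
# Back-of-envelope check of the harvest figures quoted in the summary.
bays = 6
bay_length_m = 290.0
bay_width_m = 35.0
wet_yield_kg_per_m2_per_week = 1.23
moisture_fraction = 0.92

lagoon_area_m2 = bays * bay_length_m * bay_width_m            # 60,900 m^2
weekly_wet_harvest_kg = lagoon_area_m2 * wet_yield_kg_per_m2_per_week
weekly_dry_yield_kg_per_m2 = wet_yield_kg_per_m2_per_week * (1 - moisture_fraction)

print(f"lagoon surface area: {lagoon_area_m2:,.0f} m^2")
print(f"weekly wet harvest:  {weekly_wet_harvest_kg:,.0f} kg (approx. 75,000 kg)")
print(f"dry yield:           {weekly_dry_yield_kg_per_m2:.3f} kg/m^2/week (approx. 0.098)")
```

The computed values (60,900 m² of surface, roughly 74,900 kg wet harvest per week, and 0.098 kg/m²/week dry) match the figures reported in the trial summary.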

Relevance: 100.00%

Abstract:

The purpose of this study was to extend understanding of how large firms pursuing sustained and profitable growth manage organisational renewal. A multiple-case study was conducted in 27 North American and European wood-industry companies, of which 11 were chosen for closer study. The study combined the organisational-capabilities approach to strategic management with corporate-entrepreneurship thinking. It charted the further development of an identification and classification system for capabilities comprising three dimensions: (i) the dynamism between firm-specific and industry-significant capabilities, (ii) hierarchies of capabilities and capability portfolios, and (iii) their internal structure. Capability building was analysed in the context of the organisational design, the technological systems and the type of resource-bundling process (creating new vs. entrenching existing capabilities). The thesis describes the current capability portfolios and the organisational changes in the case companies. It also clarifies the mechanisms through which companies can influence the balance between knowledge search and the efficiency of knowledge transfer and integration in their daily business activities, and consequently the diversity of their capability portfolio and the breadth and novelty of their product/service range. The largest wood-industry companies of today must develop a seemingly dual strategic focus: they have to combine leading-edge, innovative solutions with cost-efficient, large-scale production. The use of modern technology in production was no longer a primary source of competitiveness in the case companies, but rather belonged to the portfolio of basic capabilities. Knowledge and information management had become an industry imperative, on a par with cost effectiveness. Yet, during the period of this research, the case companies were better at supporting growth in the volume of existing activities than growth through new economic activities. Customer-driven, incremental innovation was preferred over firm-driven innovation through experimentation. The three main constraints on organisational renewal were the lack of slack resources, the aim for lean, centralised designs, and the inward-bound communication climate.

Relevance: 100.00%

Abstract:

Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. Currently, it is witnessing another major paradigm shift by leaning towards holistic systems-based approaches rather than the reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combinations, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion on levels of abstraction of biological systems and describes different modeling methodologies that are available for this purpose. The review then focuses on how such modeling and simulations can be applied for drug target discovery. Finally, it discusses methods for studying other important issues such as understanding targetability, identifying target combinations and predicting drug resistance, and considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches and perspectives for future development will also be obtained. Take home message: Systems thinking has now come of age, enabling a 'bird's eye view' of the biological systems under study, while at the same time allowing us to 'zoom in', where necessary, for a detailed description of individual components. A number of different methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.

Relevance: 100.00%

Abstract:

Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance is degraded even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed along with an algorithmic implementation to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus improving the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
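To make the idea of data-aware scheduling concrete, the following is a minimal greedy sketch for heterogeneous nodes. It is not the constrained-optimization formulation or metaheuristic strategy of the paper: each comparison task is simply placed on the node that would finish it earliest given its speed, with ties broken towards nodes that already hold its inputs, so every task runs against locally stored data. The node speeds and the tie-breaking rule are assumptions for illustration only.

```python
# Illustrative greedy sketch (not the paper's algorithm): data-aware placement
# of all-to-all comparison tasks on heterogeneous nodes. Placing a task on a
# node also places both of its input items there, so no remote data is needed
# at runtime.
import itertools

def greedy_schedule(num_items, node_speeds):
    pairs = list(itertools.combinations(range(num_items), 2))
    finish = [0.0] * len(node_speeds)          # predicted busy time per node
    stored = [set() for _ in node_speeds]      # data items placed on each node
    plan = {}

    for i, j in pairs:
        # Pick the node that would finish this task earliest (load balancing,
        # weighted by node speed); break exact ties in favour of nodes that
        # already hold the inputs (storage saving).
        def score(n):
            new_items = len({i, j} - stored[n])
            return (finish[n] + 1.0 / node_speeds[n], new_items)
        n_best = min(range(len(node_speeds)), key=score)
        plan[(i, j)] = n_best
        finish[n_best] += 1.0 / node_speeds[n_best]
        stored[n_best].update((i, j))
    return plan, finish, stored

if __name__ == "__main__":
    plan, finish, stored = greedy_schedule(num_items=10, node_speeds=[1.0, 2.0, 4.0])
    print("predicted finish times:", finish)
    print("items stored per node: ", [len(s) for s in stored])
```

Faster nodes accumulate proportionally more tasks, which is the essence of load balancing in a heterogeneous cluster; the stored-item sets show how much replication a given schedule implies.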

Relevance: 100.00%

Abstract:

We have derived explicitly the large-scale distribution of the quantum Ohmic resistance of a disordered one-dimensional conductor. We show that in the thermodynamic limit this distribution is characterized by two independent parameters for strong disorder, leading to a two-parameter scaling theory of localization. Only in the limit of weak disorder do we recover single-parameter scaling, consistent with existing theoretical treatments.

Relevance: 100.00%

Abstract:

Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In conventional LES, only large-scale eddies are explicitly resolved and the effects of unresolved, small or subgrid-scale (SGS) eddies on the large-scale eddies are modeled; the SGS turbulent flow field itself is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), amongst others.

One contemporary method to include the effects of SGS eddies on inertial particle motion is to introduce a stochastic differential equation (SDE), that is, a Langevin stochastic equation to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin equation model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, i.e. the inertial particle–SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term, which controls the intensity of the random force received by an inertial particle (denoted by $C_0$, see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this carries the essential nonlinearity in the statistical modeling of particle motion. $\delta T_{Lp}$ and $C_0$ may depend on the filter width and particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ is modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007).

In this work, we shall study the subtle and non-monotonic dependence of $\delta T_{Lp}$ on the filter width and particle Stokes number using a flow field obtained from Direct Numerical Simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as particle kinetic energy. As a first step, we consider particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect the gravitational settling effect; the one-way coupling assumption is only valid for low particle mass loading.
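To illustrate the Langevin-type SGS model discussed above, the following one-dimensional sketch integrates an Ornstein-Uhlenbeck equation for the SGS velocity seen by a particle with an Euler-Maruyama scheme and couples it to a Stokes-drag particle equation. The values of $\delta T_{Lp}$, the SGS velocity r.m.s. and the particle response time are arbitrary assumptions, and this is a generic schematic rather than the closure proposed in the paper.

```python
# Schematic 1-D illustration (not the paper's closure): Euler-Maruyama
# integration of a Langevin (Ornstein-Uhlenbeck) model for the SGS fluid
# velocity seen by an inertial particle, coupled to Stokes drag. The
# parameter values below are assumptions chosen only to make the script run.
import numpy as np

rng = np.random.default_rng(0)

dt = 1e-3          # time step
n_steps = 20000
dT_Lp = 0.05       # assumed particle-SGS eddy interaction timescale
sigma_sgs = 0.3    # assumed SGS velocity r.m.s.
tau_p = 0.02       # assumed particle response time
u_filtered = 0.0   # resolved (filtered) fluid velocity at the particle, constant here

u_sgs = 0.0        # SGS velocity seen by the particle
v_p = 0.0          # particle velocity
history = np.empty(n_steps)

for k in range(n_steps):
    # Ornstein-Uhlenbeck update: relaxation over dT_Lp plus white-noise forcing
    dW = rng.normal(0.0, np.sqrt(dt))
    u_sgs += -u_sgs / dT_Lp * dt + np.sqrt(2.0 * sigma_sgs**2 / dT_Lp) * dW
    # Particle momentum equation with Stokes drag on the total "seen" velocity
    v_p += (u_filtered + u_sgs - v_p) / tau_p * dt
    history[k] = v_p

print("particle velocity variance:", history[n_steps // 2:].var())
```

In this schematic, shrinking the assumed $\delta T_{Lp}$ or increasing the particle response time reduces the particle velocity variance, which is the kind of single-particle statistic the proposed closure is validated against.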

Relevance: 100.00%

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of centralized and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
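As a purely illustrative example of the kind of requirement involved (the actual signal names and specifications are not given in the abstract), a rule such as "whenever generator 1 fails, the affected bus must eventually be reconnected to a healthy source" could be written in LTL, using $\square$ for "always" and $\lozenge$ for "eventually":

```latex
% Hypothetical requirement; predicate names are illustrative, not from the thesis.
% (\square and \lozenge require the amssymb package.)
\[
  \square\bigl(\mathit{fail\_gen}_1 \;\rightarrow\;
      \lozenge\,(\mathit{close\_contactor}_{1,2} \wedge \mathit{powered\_bus}_1)\bigr)
\]
```

A reactive synthesis tool then searches for a controller strategy that satisfies the conjunction of all such formulas against every allowed environment behaviour, which is what makes the resulting controllers correct by construction.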

The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.