906 results for Complex systems prediction


Relevance:

40.00%

Publisher:

Abstract:

Complex event processing (CEP) has emerged over the last ten years. CEP systems excel at processing large amounts of data and responding in a timely fashion. While CEP applications are growing fast, performance management in this area has not gained much attention, even though meeting the promised level of service is critical for both system designers and users. In this paper, we present a benchmark for complex event processing systems: CEPBen. The CEPBen benchmark is designed to evaluate CEP functional behaviours, i.e., filtering, transformation and event pattern detection, and provides a novel methodology for evaluating the performance of CEP systems. A performance study obtained by running CEPBen on the Esper CEP engine is described and discussed. The results of the performance tests demonstrate the influence of CEP functional behaviours on system performance. © 2014 Springer International Publishing Switzerland.
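
A minimal, self-contained Python sketch of the three functional behaviours named above (filtering, transformation, event-pattern detection); this is illustrative only and does not reflect CEPBen's workloads or Esper's API, and the Tick event type, fields and thresholds are hypothetical:

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Tick:                      # hypothetical event type, not part of CEPBen
    symbol: str
    price: float

def filter_events(events: Iterable[Tick], threshold: float) -> Iterator[Tick]:
    # Filtering: pass through only events that satisfy a predicate.
    return (e for e in events if e.price > threshold)

def transform_events(events: Iterable[Tick]) -> Iterator[dict]:
    # Transformation: derive a new event from each input event.
    return ({"symbol": e.symbol, "price_cents": round(e.price * 100)} for e in events)

def detect_rises(events: Iterable[Tick], factor: float = 1.1) -> Iterator[tuple]:
    # Event-pattern detection: emit (a, b) when b follows a on the same symbol
    # with a price increase of at least `factor`.
    last = {}
    for e in events:
        prev = last.get(e.symbol)
        if prev is not None and e.price >= prev.price * factor:
            yield (prev, e)
        last[e.symbol] = e

ticks = [Tick("ACME", 10.0), Tick("ACME", 11.5), Tick("XYZ", 5.0)]
print(list(filter_events(ticks, 9.0)))    # the two ACME ticks
print(list(detect_rises(ticks)))          # [(ACME@10.0, ACME@11.5)]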

Relevance:

40.00%

Publisher:

Abstract:

Since wind at the earth's surface has an intrinsically complex and stochastic nature, accurate wind power forecasts are necessary for the safe and economic use of wind energy. In this paper, we investigated a combination of numeric and probabilistic models: a Gaussian process (GP) combined with a numerical weather prediction (NWP) model was applied to wind-power forecasting up to one day ahead. First, the wind-speed data from the NWP model were corrected by a GP; then, since there is always a defined limit on the power generated by a wind turbine due to its control strategy, wind power forecasts were produced by modelling the relationship between the corrected wind speed and the power output using a censored GP. To validate the proposed approach, three real-world datasets were used for model training and testing. The empirical results were compared with several classical wind forecast models; based on the mean absolute error (MAE), the proposed model provides around 9% to 14% improvement in forecasting accuracy over an artificial neural network (ANN) model, and nearly 17% improvement on a third dataset from a newly built wind farm for which there is a limited amount of training data. © 2013 IEEE.
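
A rough sketch of the two-stage idea under stated assumptions: scikit-learn's standard GaussianProcessRegressor stands in for the paper's GP models, the censored GP is approximated by simply clipping predictions at rated power, and all data below are synthetic placeholders rather than the paper's datasets:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-ins: NWP-forecast speed, measured speed, measured power (MW).
nwp_speed = rng.uniform(0.0, 20.0, 200)
meas_speed = nwp_speed + rng.normal(0.0, 1.0, 200)          # NWP error (synthetic)
rated_power = 2.0                                            # turbine limit -> censoring
power = np.clip(0.01 * meas_speed ** 3, 0.0, rated_power)    # toy power curve

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)

# Step 1: correct the NWP wind speed with a GP.
gp_speed = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp_speed.fit(nwp_speed.reshape(-1, 1), meas_speed)

# Step 2: map corrected speed to power; clipping at rated power stands in for
# the censored GP used in the paper.
gp_power = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp_power.fit(meas_speed.reshape(-1, 1), power)

nwp_new = np.array([[8.0], [15.0]])                          # day-ahead NWP forecasts
speed_hat = gp_speed.predict(nwp_new).reshape(-1, 1)
power_hat = np.clip(gp_power.predict(speed_hat), 0.0, rated_power)
print(power_hat)                                             # forecast power, capped at 2 MW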

Relevance:

40.00%

Publisher:

Abstract:

This work reports on new software for solving linear systems involving affine-linear dependencies between complex-valued interval parameters. We discuss the implementation of a parametric residual iteration for linear interval systems via advanced communication between the Mathematica system and the C-XSC library, which supports rigorous complex interval arithmetic. An example of an AC electrical circuit illustrates the use of the presented software.
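
For orientation, a much-simplified Python sketch of a residual-type iteration for an interval linear system, written with midpoint-radius intervals; unlike the software described here it handles only real, non-parametric intervals and ignores directed rounding and inclusion checks, so it is illustrative rather than rigorously verified:

import numpy as np

def enclose_linear_system(Am, Ar, bm, br, iters=15):
    # Enclose the solution set of [A] x = [b], intervals given as midpoints (Am, bm)
    # and radii (Ar, br).  Illustrative only: no directed rounding, no inclusion test.
    R = np.linalg.inv(Am)                       # approximate inverse as preconditioner
    x0 = R @ bm                                 # approximate midpoint solution
    # z = R * ([b] - [A] x0) in midpoint-radius form
    zm = R @ (bm - Am @ x0)
    zr = np.abs(R) @ (br + Ar @ np.abs(x0))
    # C = I - R * [A]
    Cm = np.eye(len(bm)) - R @ Am
    Cr = np.abs(R) @ Ar
    # residual iteration on the error enclosure: [e] <- z + C * [e]
    em, er = zm.copy(), zr.copy()
    for _ in range(iters):
        em, er = (zm + Cm @ em,
                  zr + np.abs(Cm) @ er + Cr @ (np.abs(em) + er))
    return x0 + em - er, x0 + em + er           # lower and upper bound vectors

Am = np.array([[4.0, 1.0], [1.0, 3.0]]); Ar = 0.01 * np.ones((2, 2))
bm = np.array([1.0, 2.0]);               br = 0.01 * np.ones(2)
lo, hi = enclose_linear_system(Am, Ar, bm, br)
print(lo, hi)                                   # tight bounds around [1/11, 7/11]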

Relevance:

40.00%

Publisher:

Abstract:

Society depends on complex IT systems created by integrating and orchestrating independently managed systems. The enormous increase in their scale and complexity over the past decade means that new software-engineering techniques are needed to help us cope with their inherent complexity. The key characteristic of these systems is that they are assembled from other systems that are independently controlled and managed. While there is increasing awareness of the related issues in the software-engineering community, the most relevant background work comes from systems engineering. The interacting trading algorithms that led to the Flash Crash are an example of a coalition of systems, each serving the purposes of its owner and cooperating only because it has to. The owners of the individual systems were competing finance companies that were often mutually hostile. Each system jealously guarded its own information and could change without consulting any other system.

Relevance:

40.00%

Publisher:

Abstract:

Bio-systems are inherently complex information processing systems. Furthermore, the physiological complexities of biological systems limit the formation of hypotheses about behavior and the ability to test them. More importantly, the identification and classification of mutations in patients are central topics in today's cancer research. Next generation sequencing (NGS) technologies can provide genome-wide coverage at single-nucleotide resolution and at reasonable speed and cost. The unprecedented molecular characterization provided by NGS offers the potential for an individualized approach to treatment. These advances in cancer genomics have enabled scientists to interrogate cancer-specific genomic variants and compare them with the normal variants in the same patient. Analysis of these data provides a catalog of somatic variants, present in the tumor genome but not in the normal tissue DNA. In this dissertation, we present a new computational framework for the problem of predicting the number of mutations on a chromosome for a given patient, which is a fundamental problem in clinical and research settings. We begin this dissertation with the development of a framework capable of utilizing published data from a longitudinal study of patients with acute myeloid leukemia (AML), whose DNA from both normal and malignant tissues was subjected to NGS analysis at various points in time. We tested the framework by processing the sequencing data available at the time of cancer diagnosis, predicting the genomic regions expected to be mutated at the time of relapse, and later comparing our predictions with the regions that actually showed mutations at relapse. We demonstrate that this coupling of the algorithm pipeline can drastically improve the ability to identify a reliable molecular signature. Arguably, the most important result of our research is its superior performance compared to other methods such as radial basis function networks, sequential minimal optimization, and Gaussian processes. In the final part of this dissertation, we present a detailed significance, stability and statistical analysis of our model, and a performance comparison of the results is presented. This work lays a good foundation for future research on other types of cancer.
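
A loose, hypothetical sketch of the kind of regression comparison mentioned above, using scikit-learn models as rough analogues of the named baselines and synthetic per-region features in place of the (unspecified) genomic features used in the dissertation:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic stand-ins: per-region features at diagnosis vs. mutation count at relapse.
X = rng.normal(size=(120, 6))                                 # hypothetical genomic features
y = np.maximum(0.0, 5.0 + X @ rng.normal(size=6) + rng.normal(0.0, 1.0, 120)).round()

models = {
    "Gaussian process": GaussianProcessRegressor(),
    "SVR (SMO-style baseline)": SVR(),
    "MLP (stand-in for an RBF network)": MLPRegressor(hidden_layer_sizes=(32,),
                                                      max_iter=2000, random_state=0),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:34s} MAE = {mae:.2f}")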

Relevance:

40.00%

Publisher:

Abstract:

“Availability” is the terminology used in asset-intensive industries such as petrochemicals and hydrocarbon processing to describe the readiness of equipment, systems or plants to perform their designed functions. It is a measure of a facility’s capability to meet targeted production in a safe working environment. Availability is also vital because it encompasses reliability and maintainability, allowing engineers to manage and operate facilities by focusing on one performance indicator. These benefits make availability a highly desirable area of interest and research for both industry and academia. In this dissertation, new models, approaches and algorithms have been explored to estimate and manage the availability of complex hydrocarbon processing systems. The risk of equipment failure and its effect on availability is vital in the hydrocarbon industry and is also explored in this research. The importance of availability has encouraged companies to invest effort and resources in developing novel techniques for system availability enhancement. Most of the work in this area focuses on individual equipment rather than facility- or system-level availability assessment and management. This research focuses on developing new systematic methods to estimate system availability. The main focus areas of this research are availability estimation and management through physical asset management, risk-based availability estimation strategies, availability and safety using a failure assessment framework, and availability enhancement using early equipment fault detection and maintenance scheduling optimization.
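
As textbook background for the availability notion used throughout (not the dissertation's models), a small Python sketch of steady-state availability, A = MTBF / (MTBF + MTTR), and its combination for equipment in series, with made-up equipment figures:

def availability(mtbf_hours, mttr_hours):
    # Steady-state availability of one item: uptime / (uptime + downtime).
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(items):
    # A series system is up only when every item is up, so availabilities multiply.
    a = 1.0
    for mtbf, mttr in items:
        a *= availability(mtbf, mttr)
    return a

# Hypothetical hydrocarbon-processing train: (MTBF, MTTR) in hours for three items.
train = [(4000.0, 24.0), (2500.0, 72.0), (8000.0, 12.0)]
print(f"system availability = {series_availability(train):.4f}")   # about 0.965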

Relevance:

40.00%

Publisher:

Abstract:

Lactic acid bacteria exopolysaccharides (LAB-EPS), in particular those formed from sucrose, have the potential to improve food and beverage rheology and enhance their sensory properties, potentially replacing or reducing the expensive hydrocolloids currently used as improvers in the food and beverage industries. The addition of sucrose not only enables EPS formation but also affects organic acid formation, thus influencing the sensory properties of the resulting food and beverage products. In the first part of the study, the organoleptic modulation of barley-malt-derived wort fermented using in situ produced bacterial polysaccharides was investigated. Weissella cibaria MG1 was capable of producing exopolysaccharides during sucrose-supplemented barley-malt wort fermentation. Even though the strain dominated the (sucrose-supplemented) wort fermentation, it was found to produce EPS (14.4 g l-1) with lower efficiency than in SucMRS (34.6 g l-1). A higher maltose concentration in the wort led to increased formation of oligosaccharides (OS) at the expense of EPS. Additionally, small amounts of organic acids were formed and ethanol remained below 0.5% (v/v). Worts fermented with W. cibaria MG1 and supplemented with 5 or 10% sucrose displayed shear-thinning behaviour, indicating the formation of polymers. This work showed how a novel and nutritious LAB-fermented wort-based beverage, with prospects for further development, can be formulated using tailored microbial cultures. In the next step, the ability of exopolysaccharide-producing Weissella cibaria MG1 to improve the rheological properties of fermented plant-based milk substitutes made from soy and quinoa was evaluated. W. cibaria MG1 grew well in soy milk, exceeding a cell count of 8 log cfu/g within 6 h of fermentation. The presence of W. cibaria MG1 led to a decrease in gelation and fermentation time. EPS isolated from soy yoghurts supplemented with sucrose were higher in molecular weight (1.1 x 10^8 g/mol vs 6.6 x 10^7 g/mol) and resulted in reduced gel stiffness (190 ± 2.89 Pa vs 244 ± 15.9 Pa). The soy yoghurts showed a typical biopolymer gel structure, and the network changed to larger pores and less cross-linking in the presence of sucrose and with increasing molecular weight of the EPS. An in situ investigation of EPS-producing Weissella cibaria MG1 in quinoa-based milk was also performed. The production of quinoa milk, starting from wholemeal quinoa flour, was optimised to maximise EPS production. To this end, enzymatic destructuring of the protein and carbohydrate components of quinoa milk was successfully achieved by applying alpha-amylase and protease treatments. Wholemeal quinoa milk fermented with Weissella cibaria MG1 showed high viable cell counts (>10^9 cfu/mL), a pH of 5.16, and significantly higher water holding capacity (WHC, 100%), viscosity (>0.5 Pa s) and exopolysaccharide (EPS) content (40 mg/L) than the chemically acidified control. A high EPS (dextran) concentration in quinoa milk caused earlier aggregation, because the EPS occupies more space and the chenopodin proteins are forced to interact with each other. Direct observation of the microstructure of fermented quinoa milk indicated that the EPS-protein network structures could improve the texture of fermented quinoa milk. Overall, Weissella cibaria MG1 showed favourable technological properties and great potential for further application in the development of high-viscosity fermented quinoa milk.
The last part of the study investigated the ex situ application of LAB-EPS (dextran), compared with other hydrocolloids, as a novel food ingredient to compensate for low protein content in biscuit and wholemeal wheat flours. Three hydrocolloids, xanthan gum, dextran and hydroxypropyl methylcellulose (HPMC), were incorporated into bread recipes based on high-protein flours, low-protein flours and coarse wholemeal flour. Hydrocolloid levels of 0–5% (flour basis) were used in the bread recipes to test water absorption. The quality parameters of the dough (farinograph, extensograph, rheofermentometer) and bread (specific volume, crumb structure and staling profile) were determined. The results showed that xanthan had a negative impact on dough and bread quality characteristics. HPMC and dextran generally improved dough and bread quality and showed dosage dependence. The volume of low-protein flour breads was significantly improved by the incorporation of 0.5% of the latter two hydrocolloids. However, dextran outperformed HPMC regarding initial bread hardness and staling shelf life, regardless of the flour used in the formulation.

Relevance:

40.00%

Publisher:

Abstract:

The Data Processing Department of the ISHC has developed coding forms for the data to be entered into the program. The Highway Planning and Programming Department and the Design Department are responsible for coding and submitting the necessary data forms to Data Processing for noise prediction on the highway sections.

Relevance:

40.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego-network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
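
To illustrate the neighborhood-centric (ego-network / subgraph) view that NSCALE advocates, a small single-machine Python sketch using networkx; this is not NSCALE's API and deliberately ignores the distribution and resource issues the system actually addresses:

import networkx as nx

# Toy graph; in NSCALE's setting this would be a large, distributed graph.
G = nx.karate_club_graph()

def ego_density(graph, node, radius=2):
    # Extract the k-hop neighborhood around `node` and analyse it as one unit,
    # instead of programming against the state of a single vertex.
    sub = nx.ego_graph(graph, node, radius=radius)
    return nx.density(sub)

# One subgraph-level task per node (ego-network analysis, motif counting, etc.).
scores = {v: ego_density(G, v) for v in G.nodes}
print(sorted(scores, key=scores.get, reverse=True)[:3])   # densest 2-hop neighborhoods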

Relevance:

40.00%

Publisher:

Abstract:

The work presented herein covers a broad range of research topics and so, in the interest of clarity, has been presented in a portfolio format. Accordingly, each chapter consists of its own introductory material prior to the presentation of the key results garnered; this is then followed by a short discussion of their significance. In the first chapter, a methodology to facilitate the resolution and qualitative assessment of very large inorganic polyoxometalates was designed and implemented employing ion-mobility mass spectrometry. Furthermore, the potential of this technique for ‘mapping’ the conformational space occupied by this class of materials was demonstrated. These claims are then substantiated by the development of a tuneable, polyoxometalate-based calibration protocol that provided the necessary platform for quantitative assessments of similarly large, but unknown, polyoxometalate species. In addition, whilst addressing a major limitation of travelling-wave ion mobility, this result also highlighted the potential of the technique for solution-phase cluster discovery. The second chapter reports on the application of a biophotovoltaic electrochemical cell for characterising the electrogenic activity inherent to a number of mutant Synechocystis strains. The intention was to determine the key components in the photosynthetic electron transport chain responsible for extracellular electron transfer, which would help to address the significant lack of mechanistic understanding in this field. Finally, in the third chapter, the design and fabrication of a low-cost, highly modular, continuous cell culture system is presented. To demonstrate the advantages and suitability of this platform for experimental-evolution investigations, an exploration of the photophysiological response to gradual iron limitation, in both the ancestral wild type and a randomly generated mutant library population, was undertaken. Furthermore, coupling random mutagenesis to continuous culture in this way is shown to constitute a novel source of genetic variation that is open to further investigation.

Relevance:

40.00%

Publisher:

Abstract:

Part 7: Cyber-Physical Systems