959 results for Feature scale simulation


Relevance:

30.00%

Publisher:

Abstract:

Complete biological nutrient removal (BNR) in a single-tank, sequencing batch reactor (SBR) process is demonstrated here at full scale on a typical domestic wastewater. The unique feature of the UniFed process is the introduction of the influent into the settled sludge blanket during the settling and decant periods of the SBR operation. This achieves suitable conditions for denitrification and anaerobic phosphate release, which is critical to successful biological phosphorus removal. It also achieves a selector effect, which helps generate a compact, well-settling biomass in the reactor. The results of this demonstration show that it is possible to achieve well over 90% removal of COD, nitrogen and phosphorus in such a process. Effluent quality achieved over a six-month operating period directly after commissioning was: 29 mg/l COD, 0.5 mg/l NH4-N, 1.5 mg/l NOx-N and 1.5 mg/l PO4-P (50th percentiles of daily samples). During an 8-day intensive sampling period, the effluent BOD5 was
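Since the overlap of feeding with the settle and decant phases is the paper's key idea, a compact schematic may help. This is a hedged sketch of the phase sequence implied by the abstract; phase names, the placement of the aerobic phase, and all details beyond the feed overlap are assumptions, not the plant's actual control program:

```python
# Illustrative only: phase structure of one UniFed SBR cycle as described
# above. The defining feature is that influent feeding overlaps the settle
# and decant phases, releasing it into the settled sludge blanket.
unifed_cycle = [
    # (phase,  aerated, influent_fed, note)
    ("settle", False,   True,  "feed enters sludge blanket: denitrification + anaerobic P release"),
    ("decant", False,   True,  "clarified supernatant withdrawn while feeding continues"),
    ("react",  True,    False, "aerobic phase (assumed): nitrification and phosphate uptake"),
]

for phase, aerated, fed, note in unifed_cycle:
    print(f"{phase:7s} aerated={aerated!s:5s} influent={fed!s:5s}  {note}")
```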

Relevance:

30.00%

Publisher:

Abstract:

For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the outputs of two independent implementations in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
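As a point of reference, here is a minimal sketch of the residual-generation step under stated assumptions: the function names and the toy model are hypothetical, and the paper's observer-based structuring of the residuals is not reproduced.

```python
import numpy as np

def back_to_back_residuals(impl_a, impl_b, inputs):
    """Run two independent implementations of the same model on identical
    inputs and return the residual sequence (difference of outputs). In the
    fault-free case the residuals vanish up to numerical tolerance; a coding
    error in either implementation leaves a structured, nonzero signature."""
    ya = np.asarray([impl_a(u) for u in inputs])
    yb = np.asarray([impl_b(u) for u in inputs])
    return ya - yb

# Hypothetical usage: two implementations of the same toy model,
# one with a deliberately seeded sign error in the constant term.
f_ref  = lambda u: 0.9 * u + 1.0
f_test = lambda u: 0.9 * u - 1.0   # seeded coding error
r = back_to_back_residuals(f_ref, f_test, np.linspace(0.0, 1.0, 5))
print(r)   # constant 2.0 residual: the error's fixed signature
```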

Relevance:

30.00%

Publisher:

Abstract:

In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
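The abstract does not give the classification rule itself; as an illustration of the geometric idea, here is a minimal numpy sketch. The thresholds, the exact 'definite'/'possible'/'impossible' criteria, and the matrix shapes are assumptions, not the authors' algorithm.

```python
import numpy as np

def classify_candidate(F, R, tol=1e-8):
    """One plausible realization of the subspace test: project the observed
    residuals R (one column per sample) onto the column space of a candidate
    error's feature matrix F, and classify the candidate by how much of the
    residual energy that subspace explains."""
    total = np.linalg.norm(R)
    if total < tol:
        return "impossible"                 # no residual, hence no error signature
    coeffs, *_ = np.linalg.lstsq(F, R, rcond=None)
    explained = F @ coeffs                  # component of R inside span(F)
    leftover = np.linalg.norm(R - explained)
    if leftover < tol * total:
        return "definite"                   # span(F) accounts for all of R
    if np.linalg.norm(explained) > tol * total:
        return "possible"                   # partial overlap: resolve by subset testing
    return "impossible"                     # R is orthogonal to span(F)
```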

Relevance:

30.00%

Publisher:

Abstract:

One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the costal cartilage's tubular structure to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure, which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse-field level sets. We have tested the proposed algorithm on six non-contrast CT datasets from PE patients. Good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75 ± 0.04 and an average mean surface distance of 1.69 ± 0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to an extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
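Two of the pipeline's building blocks are simple enough to sketch. Below is a hedged example using scikit-image's Frangi filter (the abstract names multi-scale vesselness filtering but not the specific filter, so Frangi is an assumption, and the function names are illustrative), together with the Dice metric used in the evaluation:

```python
import numpy as np
from skimage.filters import frangi   # multi-scale vesselness (Frangi-style)

def cartilage_vesselness(ct_volume, sigmas=(1, 2, 3)):
    """Enhance tubular structures at several scales. frangi() accepts n-D
    arrays in recent scikit-image releases; bright tubular structures are
    enhanced with black_ridges=False."""
    return frangi(ct_volume, sigmas=sigmas, black_ridges=False)

def dice(seg, ref):
    """Dice overlap between binary masks: the metric behind the reported
    0.75 +/- 0.04 agreement with manual contouring."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
```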

Relevance:

30.00%

Publisher:

Abstract:

Sustainable development concerns have made renewable energy sources increasingly used for distributed electricity generation. However, this is mainly due to incentives or mandatory targets set by energy policies, as in the European Union. Assuring a sustainable future requires distributed generation to be able to participate in competitive electricity markets. To gain more negotiation power in the market and to take advantage of economies of scale, distributed generators can be aggregated, giving rise to a new concept: the Virtual Power Producer (VPP). VPPs are multi-technology and multi-site heterogeneous entities that should adopt organization and management methodologies so that they can make distributed generation a truly profitable activity, able to participate in the market. This paper presents ViProd, a simulation tool that allows simulating VPP operation in the context of MASCEM, a multi-agent based electricity market simulator.

Relevance:

30.00%

Publisher:

Abstract:

Master's degree in Chemical Engineering, branch of energy optimization in the chemical industry.

Relevance:

30.00%

Publisher:

Abstract:

In spite of the significant amount of scientific work on Wireless Sensor Networks (WSNs), there is a clear lack of effective, feasible and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster abstract outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON relies on a hierarchical network architecture together with integrated middleware and command and control mechanisms. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific applications' requirements. The EMMON WSN architecture has been validated through extensive simulation and experimental evaluation, including through a 300+ node test-bed, the largest WSN test-bed in Europe to date.

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. It provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to maintain as much flexibility as possible while meeting specific applications' requirements. EMMON has been validated through extensive analytical, simulation and experimental evaluations, including through a 300+ node test-bed, the largest single-site WSN test-bed in Europe.

Relevance:

30.00%

Publisher:

Abstract:

Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to support the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing tools. This toolset has been paramount in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific applications' requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
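To make the hierarchical idea concrete, here is a toy sketch of tiered aggregation. The class names, the two tiers, and the aggregation scheme are illustrative assumptions, not EMMON's actual middleware:

```python
import random
from statistics import mean

class SensorNode:
    """Leaf tier: produces periodic readings (stubbed with random data)."""
    def __init__(self, node_id):
        self.node_id = node_id
    def sample(self):
        return random.gauss(20.0, 1.0)   # stand-in for a real sensor reading

class ClusterHead:
    """Intermediate tier: aggregates its cluster before forwarding upward,
    trading per-node detail for reduced traffic toward command and control."""
    def __init__(self, nodes):
        self.nodes = nodes
    def report(self):
        readings = [n.sample() for n in self.nodes]
        return {"count": len(readings), "mean": round(mean(readings), 2),
                "min": round(min(readings), 2), "max": round(max(readings), 2)}

# The command-and-control tier polls cluster heads, not individual nodes.
heads = [ClusterHead([SensorNode(i) for i in range(j, j + 10)])
         for j in range(0, 30, 10)]
print([h.report() for h in heads])
```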

Relevance:

30.00%

Publisher:

Abstract:

The contribution of the evapotranspiration from a certain region to the precipitation over the same area is referred to as water recycling. In this paper, we explore the spatiotemporal links between the recycling mechanism and the Iberian rainfall regime. We use an 18-year (1990-2007) Weather Research and Forecasting (WRF) simulation at 9 km resolution to compute local and regional recycling ratios over Iberia, at the monthly scale, through both an analytical and a numerical recycling model. In contrast to coastal areas, the interior of Iberia experiences a relative maximum of precipitation in spring, suggesting a prominent role of land-atmosphere interactions on the inland precipitation regime during this period of the year. Local recycling ratios are highest in spring and early summer, coinciding with those areas where this spring peak of rainfall represents the absolute maximum in the annual cycle. This confirms that recycling processes are crucial to explain the Iberian spring precipitation, particularly over the eastern and northeastern sectors. Average monthly recycling values range from 0.04 in December to 0.14 in June according to the numerical model, and from 0.03 in December to 0.07 in May according to the analytical procedure. Our analysis shows that the highest values of recycling are limited by the coexistence of two necessary mechanisms: (1) the availability of sufficient soil moisture and (2) the occurrence of appropriate synoptic configurations favoring the development of convective regimes. The analyzed surplus of rainfall in spring has a critical impact on agriculture over large semiarid regions of the interior of Iberia.
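The abstract does not reproduce either recycling model. As a reference point, here is a hedged sketch of a standard Budyko/Schär-type bulk recycling formula (the paper's analytical model may differ), which produces monthly ratios of the same magnitude as those reported (0.03-0.14):

```python
def bulk_recycling_ratio(E, A, F_in):
    """Budyko/Schaer-type bulk recycling ratio: the fraction of precipitation
    over a region that originated as evapotranspiration from that same
    region, assuming a well-mixed atmospheric column.

    E    -- mean evapotranspiration flux over the region [kg m^-2 s^-1]
    A    -- region area [m^2]
    F_in -- horizontal moisture inflow across the boundary [kg s^-1]
    """
    local = E * A                    # locally evaporated moisture supply
    return local / (local + F_in)   # locally sourced share of total supply

# Hypothetical magnitudes for orientation only (not the paper's data):
# ~2.6 mm/day evapotranspiration over an Iberia-sized area.
print(round(bulk_recycling_ratio(E=3e-5, A=5.8e11, F_in=2e8), 3))
```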

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear Dynamics, Vol. 29

Relevance:

30.00%

Publisher:

Abstract:

In Proceedings of ECCTD '01 - European Conference on Circuit Theory and Design, Espoo, Finland, August 2001.

Relevance:

30.00%

Publisher:

Abstract:

1st European IAHR Congress, 4-6 May, Edinburgh, Scotland.

Relevance:

30.00%

Publisher:

Abstract:

Feature discretization (FD) techniques often yield adequate and compact representations of the data, suitable for machine learning and pattern recognition problems. These representations usually decrease the training time and yield higher classification accuracy, while allowing humans to better understand and visualize the data, as compared to the use of the original features. This paper proposes two new FD techniques. The first is based on the well-known Linde-Buzo-Gray quantization algorithm, coupled with a relevance criterion, and is able to perform unsupervised, supervised, or semi-supervised discretization. The second technique works in supervised mode, being based on the maximization of the mutual information between each discrete feature and the class label. Our experimental results on standard benchmark datasets show that these techniques scale up to high-dimensional data, attaining in many cases better accuracy than existing unsupervised and supervised FD approaches, while using fewer discretization intervals.
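To illustrate the objective behind the second technique, here is a hedged sketch under stated assumptions: candidate boundaries are taken at quantiles and only the number of intervals is searched, whereas the paper optimizes the boundaries themselves. The function name and the toy data are illustrative.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_discretize(x, y, max_bins=16):
    """Supervised FD sketch: among quantile binnings with 2..max_bins
    intervals, keep the one whose discrete codes maximize the mutual
    information with the class label y."""
    best_codes, best_mi = None, -np.inf
    for k in range(2, max_bins + 1):
        edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
        codes = np.digitize(x, edges)          # discrete feature values
        mi = mutual_info_score(y, codes)
        if mi > best_mi:
            best_codes, best_mi = codes, mi
    return best_codes, best_mi

# Hypothetical usage on a toy two-class feature:
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
y = np.array([0] * 200 + [1] * 200)
codes, mi = mi_discretize(x, y)
print(f"best MI: {mi:.3f} with {codes.max() + 1} intervals")
```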