964 results for real-scale battery


Relevance:

30.00%

Publisher:

Abstract:

This work aimed to evaluate aspects of the construction and operation of full-scale best management practices and their effects on the infiltration loading rate. Two systems were studied: i) a filter-swale-trench (FST) and ii) an infiltration well (IW). In these units, field tests and full-scale measurements were carried out in order to assess soil permeability. In addition, fine-particle transport and the permeability of the geotextile blanket were determined before and after operation. The results showed that soil was transported into the FST and IW systems despite the installed protections, and that this material was responsible for reducing both the geotextile permeability (reductions of 30 to 90% for the FST and 40 to 70% for the IW) and the full-scale infiltration loading rates (ranging from 4.7 × 10⁻⁶ to 10⁻⁵ m·s⁻¹).
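For readers unfamiliar with the quantity being reported, the infiltration loading rate is simply the infiltrated volume per unit infiltration area per unit time. A minimal sketch of that conversion, using made-up measurement values rather than the study's data:

```python
# Hypothetical illustration: convert a measured infiltrated volume into an
# infiltration loading rate (m/s), the quantity reported in the abstract.

def infiltration_loading_rate(volume_m3: float, area_m2: float, duration_s: float) -> float:
    """Infiltrated volume per unit area per unit time, in m/s."""
    return volume_m3 / (area_m2 * duration_s)

# Example with made-up numbers: 0.5 m3 infiltrated over 10 m2 in 2 hours.
rate = infiltration_loading_rate(0.5, 10.0, 2 * 3600)
print(f"{rate:.2e} m/s")  # ~6.9e-06 m/s, within the 4.7e-6 to 1e-5 m/s range cited above
```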

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to investigate the performance of an experimental rainwater treatment system for non-potable uses. By dispensing with the first-flush discharge, the system was expected to control the quality of the captured rainwater while minimizing the rainwater bypass caused by the first-flush strategy. A full-scale direct filtration unit was operated, and a solution of natural corn starch was used as the primary coagulant. The color, turbidity, and coliform removal efficiencies of the unit were analyzed as a function of the filtration loads, and the net water production was estimated. The results showed turbidity removal of up to 70.8% and color removal of up to 61.0%. Backwashing of the filtering system was completed in 3 minutes at a rate of 1,440 m³/m²·day, with treated-water consumption of 0.5% to 2.2% of the potentially harvestable volume.
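The backwash water demand follows directly from the rate and duration quoted above. A minimal sketch of the arithmetic, with the filter area entered as a hypothetical value:

```python
# Illustrative arithmetic for the backwash figures quoted in the abstract.
# The filter area below is a hypothetical value, not taken from the study.

backwash_rate_m_per_day = 1440.0          # 1,440 m3/m2/day = 1 m3/m2 per minute
backwash_duration_min = 3.0
filter_area_m2 = 0.20                     # hypothetical filter surface area

water_column_m = backwash_rate_m_per_day / (24 * 60) * backwash_duration_min
backwash_volume_m3 = water_column_m * filter_area_m2

print(f"{water_column_m:.1f} m of water applied per backwash")   # 3.0 m
print(f"{backwash_volume_m3:.2f} m3 per backwash event")         # 0.60 m3 for a 0.20 m2 filter
```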

Relevance:

30.00%

Publisher:

Abstract:

India has a third of the world’s tuberculosis cases. Large-scale expansion of a national program in 1998 has allowed for population-based analyses of data from tuberculosis registries. We assessed seasonal trends using quarterly reports from districts with stable tuberculosis control programs (population 115 million). In northern India, tuberculosis diagnoses peaked between April and June and reached a nadir between October and December, whereas no seasonality was reported in the south. Overall, rates of new smear-positive tuberculosis cases were 57 per 100,000 population in peak seasons versus 46 per 100,000 in trough seasons. An artifact of general health-seeking behavior was ruled out. Seasonality was highest in paediatric cases, suggesting variation in recent transmission.

Relevance:

30.00%

Publisher:

Abstract:

Many recent survival studies propose modeling data with a cure fraction, i.e., data in which part of the population is not susceptible to the event of interest. This event may occur more than once for the same individual (a recurrent event). We then have a scenario of recurrent-event data in the presence of a cure fraction, which may appear in areas such as oncology, finance, and industry, among others. This paper proposes a multiple-time-scale survival model to analyze recurrent events with a cure fraction. The objective is to analyze, in terms of covariates and censoring, the efficiency of interventions intended to prevent the studied event from happening again. All estimates were obtained using a sampling-based approach, which allows prior information to be incorporated with lower computational effort. Simulations based on a clinical scenario were carried out in order to observe some frequentist properties of the estimation procedure for small and moderate sample sizes. An application to a well-known set of real mammary tumor data is provided.
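As a rough illustration of the data structure described in this abstract (not the paper's actual model), here is a minimal simulation sketch of recurrent-event data with a cure fraction: a cured subject never experiences the event, while a susceptible one accumulates events from a simple homogeneous process until censoring. All names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_subject(cure_prob=0.3, event_rate=0.5, censor_time=10.0):
    """Simulate one subject's recurrent-event times under a cure fraction.

    A cured subject (probability `cure_prob`) has no events; a susceptible
    subject accumulates events from a homogeneous Poisson process with
    intensity `event_rate` until administrative censoring at `censor_time`.
    """
    if rng.random() < cure_prob:
        return []                                # cured: not susceptible to the event
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / event_rate)   # gap time to the next event
        if t >= censor_time:
            break
        times.append(t)
    return times

sample = [simulate_subject() for _ in range(500)]
no_event_fraction = sum(len(s) == 0 for s in sample) / len(sample)
print(f"Proportion with no observed events: {no_event_fraction:.2f}")
```

Subjects with zero observed events mix the truly cured with the susceptible-but-censored, which is exactly why dedicated cure-fraction models are needed.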

Relevance:

30.00%

Publisher:

Abstract:

Executive dysfunction is reported in juvenile myoclonic epilepsy (JME). However, the batteries employed in previous studies included no more than three tests of executive function. In this study, we aimed to assess executive and attentional functions in JME using a comprehensive battery of eight tests (encompassing fifteen subtests). We also evaluated neuropsychological profiles using a clinical criterion of severity and correlated these findings with clinical epilepsy variables and the presence of psychiatric disorders. We prospectively evaluated 42 patients with JME and a matched control group with the Digit Span tests (forward and backward), Stroop Color-Word Test, Trail Making Test, Wisconsin Card Sorting Test, Matching Familiar Figures Test, and Word Fluency Test. We estimated IQ with the Matrix Reasoning and Vocabulary subtests of the Wechsler Abbreviated Intelligence Scale. The patients with JME showed specific deficits in working memory, inhibitory control, concept formation, goal maintenance, mental flexibility, and verbal fluency. We observed attentional deficits in processes such as alertness and attention span and in those requiring sustained and divided attention. We found that 83.33% of the patients had moderate or severe executive dysfunction. In addition, attentional and executive impairment was correlated with a higher frequency of seizures and the presence of psychiatric disorders. Furthermore, executive dysfunction correlated with a longer duration of epilepsy. Our findings indicate the need for comprehensive neuropsychological batteries in patients with JME, in order to provide a more extensive evaluation of attentional and executive functions and to reveal relevant deficits that have otherwise been overlooked.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To present data on the performance of healthy subjects in the Frontal Assessment Battery (FAB), correlating performance with gender, age, education, and scores on the Mini-Mental State Examination (MMSE). Methods: Two hundred and seventy-five healthy individuals with a mean age of 66.4 ± 10.6 years were evaluated. Mean total FAB scores were established according to educational level. Results: Mean total FAB scores according to educational level were 10.9 ± 2.3 for one to three years of education; 12.8 ± 2.7 for four to seven years; 13.8 ± 2.2 for eight to 11 years; and 15.3 ± 2.3 for 12 or more years. Total FAB scores correlated significantly with education (r=0.47; p<0.0001) and MMSE scores (r=0.39; p<0.0001). No correlation emerged between FAB scores and age or gender. Conclusion: In this group of healthy subjects, the Brazilian version of the FAB proved to be influenced by education level, but not by age or gender.

Relevance:

30.00%

Publisher:

Abstract:

Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) one is the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; (ii) the other is a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Further challenges remain, namely designing SR plans for larger systems that are as good as those for relatively smaller ones, and plans for multiple faults that are as good as those for a single fault. In order to tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN, and a new heuristic. This heuristic focuses the application of the NDE operators on network zones in an alarm condition, taking technical constraints into account. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3,860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
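For background on the NSGA-II component mentioned above, here is a minimal sketch of non-dominated sorting, the Pareto-ranking step at the heart of NSGA-II. It is illustrative only and does not include NDE, the subpopulation tables of MEAN, or the new heuristic:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Split solutions (lists of objective values) into successive Pareto fronts."""
    fronts, remaining = [], list(range(len(objectives)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy example: objectives could be (number of switching operations, load not restored).
points = [(3, 0.10), (5, 0.02), (4, 0.08), (6, 0.02), (3, 0.12)]
print(non_dominated_sort(points))   # [[0, 1, 2], [3, 4]]
```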

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an interpretation of a classic optical flow method by Nagel and Enkelmann as a tensor-driven anisotropic diffusion approach in digital image analysis. We introduce an improvement into the model formulation, and we establish well-posedness results for the resulting system of parabolic partial differential equations. Our method avoids linearizations in the optical flow constraint, and it can recover displacement fields which are far beyond the typical one-pixel limits that are characteristic for many differential methods for optical flow recovery. A robust numerical scheme is presented in detail. We avoid convergence to irrelevant local minima by embedding our method into a linear scale-space framework and using a focusing strategy from coarse to fine scales. The high accuracy of the proposed method is demonstrated by means of a synthetic and a real-world image sequence.
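For context, a commonly cited form of the Nagel-Enkelmann-type functional behind this class of methods minimizes, for a displacement field h between frames I_1 and I_2, with regularization weight α and contrast parameter λ (a sketch of the general formulation, not necessarily the exact modified model of the paper):

```latex
E(h) = \int_{\Omega} \bigl( I_1(\mathbf{x}) - I_2(\mathbf{x} + h(\mathbf{x})) \bigr)^{2} \, d\mathbf{x}
     + \alpha \int_{\Omega} \operatorname{trace}\!\bigl( (\nabla h)^{\top} D(\nabla I_1)\, (\nabla h) \bigr) d\mathbf{x},
\qquad
D(\nabla I_1) = \frac{1}{\lvert \nabla I_1 \rvert^{2} + 2\lambda^{2}}
                \Bigl( \nabla I_1^{\perp} (\nabla I_1^{\perp})^{\top} + \lambda^{2}\, \mathrm{Id} \Bigr).
```

Here the diffusion tensor is built from the image gradient rotated by 90 degrees, so the regularizer smooths the flow along image edges but not across them; keeping the non-linearized data term is what the abstract refers to as avoiding linearizations in the optical flow constraint.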

Relevance:

30.00%

Publisher:

Abstract:

Cost, performance, and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter and at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) the identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) the elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) the implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of these constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and it is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
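As a rough illustration of the kind of reasoning behind constituent (iii), the sketch below maps code regions placed at given addresses onto cache sets and counts the sets shared by two functions that alternate at run time; shared sets are potential sources of conflict misses and hence of jitter. All names, sizes, and cache parameters are hypothetical, not taken from the thesis.

```python
# Hypothetical sketch: estimate instruction-cache set conflicts for a code layout.
# Cache parameters and the function list are illustrative only.

LINE_SIZE = 64          # bytes per cache line
NUM_SETS = 256          # direct-mapped I-cache with 256 sets (16 KiB)

def cache_sets(start_addr: int, size: int) -> set[int]:
    """Cache sets touched by a contiguous code region [start_addr, start_addr + size)."""
    first = start_addr // LINE_SIZE
    last = (start_addr + size - 1) // LINE_SIZE
    return {line % NUM_SETS for line in range(first, last + 1)}

def conflicts(layout: dict[str, tuple[int, int]], pairs) -> dict[tuple[str, str], int]:
    """Number of shared cache sets for each pair of functions that alternate at run time."""
    return {(a, b): len(cache_sets(*layout[a]) & cache_sets(*layout[b])) for a, b in pairs}

# layout: function name -> (start address, size in bytes)
layout = {"sensor_read": (0x0000, 512), "control_law": (0x4000, 1024)}
print(conflicts(layout, [("sensor_read", "control_law")]))   # 8 shared sets
```

A layout optimiser would relocate one of the two regions so that their set ranges no longer overlap, removing that source of jitter without touching the code itself.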

Relevance:

30.00%

Publisher:

Abstract:

Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance for reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical test cases to assess the reliability and accuracy of the different model versions. The most effective versions were then applied to real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
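For reference, a sketch of the diffusive (zero-inertia) shallow water formulation that models of this class typically solve, with Manning friction closing the momentum balance (generic notation, not necessarily that of the thesis):

```latex
\frac{\partial h}{\partial t} + \frac{\partial q_x}{\partial x} + \frac{\partial q_y}{\partial y} = 0,
\qquad
\mathbf{q} = -\,\frac{h^{5/3}}{n\,\lvert \nabla H \rvert^{1/2}}\, \nabla H,
\qquad H = z_b + h,
```

where h is the water depth, H the free-surface elevation over the bed elevation z_b, q the unit-width discharge, and n the Manning roughness coefficient. Neglecting the inertial terms is what keeps schemes of this kind simple and fast enough for large-scale, high-detail applications.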

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. The limitations of the standard approaches for large events emerge in this chapter: the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis, a larger number of earthquakes is analyzed, including small, moderate, and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation of the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first is a threshold-based method that uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
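For context, real-time magnitude estimation in Early Warning commonly relies on an empirical scaling between magnitude and the peak displacement P_d measured in the first few seconds of the P wave, of the general (illustrative) form:

```latex
\log_{10} P_d = A + B\,M + C \log_{10} R,
```

where R is the hypocentral distance and A, B, C are regionally calibrated regression coefficients. For very large events such as Tohoku-Oki, the few-second window saturates because the rupture is still growing, which is the limitation that an evolutionary, progressively expanding estimate is designed to overcome.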

Relevance:

30.00%

Publisher:

Abstract:

The energy released during a seismic crisis in volcanic areas is strictly related to the physical processes within the volcanic structure. In particular, Long Period (LP) seismicity, which seems to be related to the oscillation of a fluid-filled crack (Chouet, 1996; Chouet, 2003; McNutt, 2005), can precede or accompany an eruption. The present doctoral thesis is focused on the study of the LP seismicity recorded at the Campi Flegrei volcano (Campania, Italy) during the October 2006 crisis. Campi Flegrei is an active caldera; the combination of an active magmatic system and a densely populated area makes Campi Flegrei a critical volcano. The source dynamics of LP seismicity are thought to be very different from those of other kinds of seismicity (tectonic or volcano-tectonic): LP events are characterized by a time-sustained source and a low frequency content. These features imply that the duration magnitude, which is commonly used for VT events and sometimes for LPs as well, is not suited to LP magnitude evaluation. The main goal of this doctoral work was to develop a method for determining the magnitude of LP seismicity; it is based on comparing the energy of a VT event and of an LP event, linking energy to the VT moment magnitude, so that the magnitude assigned to an LP event is the moment magnitude of a VT event with the same energy. We applied this method to the LP data set recorded at the Campi Flegrei caldera in 2006, to an LP data set of Colima volcano recorded in 2005-2006, and to an event recorded at Etna volcano. By testing the method on a large number of waveforms recorded at different volcanoes, we verified its ease of application and, consequently, its usefulness in the routine and quasi-real-time work of a volcanological observatory.
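As a sketch of the kind of relations an energy-equivalence approach can rest on (illustrative; not necessarily the calibration adopted in the thesis), the radiated energy can be estimated from the integral of the squared ground velocity and converted to an equivalent magnitude through the classical Gutenberg-Richter energy-magnitude relation:

```latex
E \propto \int v^{2}(t)\, dt,
\qquad
\log_{10} E\,[\mathrm{J}] = 1.5\, M + 4.8
\quad\Longrightarrow\quad
M = \frac{\log_{10} E - 4.8}{1.5},
```

so that an LP event is assigned the moment magnitude of a VT event radiating the same energy.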

Relevance:

30.00%

Publisher:

Abstract:

The wide diffusion of cheap, small, and portable sensors, integrated in an unprecedentedly large variety of devices, and the availability of almost ubiquitous Internet connectivity make it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly analyzed in a timely manner, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains such as entertainment, health care, and energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers, and we present Quasit, its prototype implementation, which offers a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
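As a rough sketch of what per-stream QoS annotations of this kind can look like at the API level (all names and fields below are hypothetical and are not the Quasit API):

```python
from dataclasses import dataclass
from enum import Enum

class DeliveryGuarantee(Enum):
    BEST_EFFORT = "best_effort"       # tuples may be dropped under load
    AT_LEAST_ONCE = "at_least_once"   # tuples are replayed after failures

@dataclass(frozen=True)
class StreamQoS:
    """Hypothetical per-stream quality-of-service contract."""
    max_latency_ms: int               # end-to-end processing deadline
    delivery: DeliveryGuarantee
    replication_factor: int = 1       # >1 enables (partial) fault tolerance

def pick_qos(domain: str) -> StreamQoS:
    """Different pervasive data flows tolerate different quality levels."""
    if domain == "health-care":
        return StreamQoS(max_latency_ms=100,
                         delivery=DeliveryGuarantee.AT_LEAST_ONCE,
                         replication_factor=2)
    return StreamQoS(max_latency_ms=2000, delivery=DeliveryGuarantee.BEST_EFFORT)

print(pick_qos("health-care"))
print(pick_qos("entertainment"))
```

Weaker guarantees such as best-effort delivery are precisely the class of relaxations that partial fault-tolerance policies can exploit to trade replication cost against quality when application semantics allow it.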