900 results for Multi-scheme ensemble prediction system
Abstract:
Zero-valent iron nanoparticles (nZVI) are considered very promising for the remediation of contaminated soils and groundwater. However, an important issue related to their limited mobility remains unsolved. Direct current can be used to enhance nanoparticle transport, based on the same principles as electrokinetic remediation. In this work, a generalized physicochemical model was developed and solved numerically to describe nZVI transport through porous media under an electric field and with different electrolytes (of different ionic strengths). The model consists of the coupled Nernst–Planck system of equations, which accounts for the mass balance of ionic species in a fluid medium when both diffusion and electromigration of the ions are considered. The diffusion and electrophoretic transport of the negatively charged nZVI particles were also included in the system, as was the contribution of electroosmotic flow to the overall mass transport in all cases. The nZVI effective mobility values in the porous medium are very low (10⁻⁷–10⁻⁴ cm² V⁻¹ s⁻¹), due to the counterbalance between the positive electroosmotic flow and the electrophoretic transport of the negatively charged nanoparticles. The higher the nZVI concentration in the matrix, the greater the aggregation; therefore, low-concentration nZVI suspensions must be used for successful field application.
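For reference, the Nernst–Planck mass balance underlying models of this kind takes the standard form below (a generic sketch; the paper's exact boundary conditions and electroosmotic coupling are not reproduced here):

```latex
\frac{\partial c_i}{\partial t} = -\nabla \cdot \mathbf{J}_i,
\qquad
\mathbf{J}_i =
\underbrace{-D_i \nabla c_i}_{\text{diffusion}}
\;\underbrace{-\,\frac{z_i F D_i}{RT}\, c_i \nabla \phi}_{\text{electromigration}}
\;+\;\underbrace{c_i\, \mathbf{v}_{\mathrm{eo}}}_{\text{electroosmotic advection}}
```

where c_i, D_i and z_i are the concentration, diffusivity and charge number of species i, φ is the electric potential, and v_eo is the electroosmotic velocity.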
Abstract:
The need for more efficient illumination systems has led to the proliferation of Solid-State Lighting (SSL) systems, which offer optimized power consumption. SSL systems are built from LED devices, which are intrinsically fast and permit very fast light modulation. This, along with the congestion of the radio-frequency spectrum, has paved the way for the emergence of Visible Light Communication (VLC) systems. VLC uses free space to convey information by means of light modulation. Nevertheless, as VLC systems proliferate and cost competitiveness ensues, two important aspects must be considered. State-of-the-art VLC implementations use power-demanding power amplifiers (PAs), so it is important to investigate whether regular, existing Switched-Mode Power Supply (SMPS) circuits can be adapted for VLC use. A 28 W buck regulator was implemented using an off-the-shelf LED driver integrated circuit, with both series and parallel dimming techniques. Results show that optical clock frequencies up to 500 kHz are achievable without any major modification beyond adequate component sizing. The use of an LED as a sensor was also investigated, from a short-range, low-data-rate perspective. Results show successful communication in an LED-to-LED configuration, with enhanced range when LED strings are used as sensors. Moreover, LEDs present spectrally selective sensitivity, which makes them good candidates for a multi-colour LED-to-LED system, such as in RGB displays and lamps. Ultimately, the present work shows evidence that LEDs can be used as dual-purpose devices, enabling not only illumination but also bi-directional data communication.
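As a rough illustration of the signalling such a link involves, the sketch below simulates on-off keying (OOK) of an LED and threshold detection at a receiving LED; the noise level and threshold are illustrative assumptions, not values from the work:

```python
import random

def transmit_ook(bits, high=1.0, low=0.0):
    """Map a bit sequence to optical intensity levels (on-off keying)."""
    return [high if b else low for b in bits]

def receive_ook(samples, threshold=0.5, noise=0.1):
    """Recover bits from noisy photocurrent samples via thresholding."""
    return [1 if s + random.gauss(0, noise) > threshold else 0 for s in samples]

bits = [random.randint(0, 1) for _ in range(1000)]
recovered = receive_ook(transmit_ook(bits))
errors = sum(b != r for b, r in zip(bits, recovered))
print(f"bit error rate: {errors / len(bits):.3f}")
```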
Abstract:
An energy harvesting system requires an energy storage device to store the energy retrieved from the surrounding environment. This can be either a rechargeable battery or a supercapacitor. Due to their limited lifetime, rechargeable batteries need to be replaced periodically. A supercapacitor, which ideally has a limitless number of charge/discharge cycles, can therefore be used to store the energy; however, a voltage regulator is required to obtain a constant output voltage as the supercapacitor discharges. This can be implemented with a switched-capacitor (SC) DC-DC converter, which allows complete integration in CMOS technology, although several topologies are required to obtain high efficiency. This thesis presents a complete analysis of four different topologies in order to derive expressions that allow one to design the converter and determine the optimum input voltage range for each topology. To better understand the parasitic effects, the implementation of the capacitors and the non-ideal behaviour of the switches in 130 nm technology were carefully studied. With these two analyses, a multi-ratio SC DC-DC converter was designed with an output power of 2 mW, a maximum efficiency of 77%, and a maximum steady-state output ripple of 23 mV, for an input voltage range of 2.3 V down to 0.85 V. The proposed converter has four operating states that implement the conversion ratios 1/2, 2/3, 1/1 and 3/2, and its clock frequency is automatically adjusted to produce a stable output voltage of 1 V. These features are implemented through two distinct controller circuits that use asynchronous state machines (ASMs) to dynamically adjust the clock frequency and to select the active state of the converter. All the theoretical expressions, as well as the behaviour of the whole system, were verified using electrical simulations.
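A minimal sketch of the ratio-selection logic such a multi-ratio converter implies is given below; the selection rule (smallest ideal ratio whose unloaded output still covers the 1 V target) is an assumption for illustration, not the thesis's controller:

```python
from fractions import Fraction

RATIOS = [Fraction(1, 2), Fraction(2, 3), Fraction(1, 1), Fraction(3, 2)]
V_OUT = 1.0  # regulated output target (V)

def select_ratio(v_in: float) -> Fraction:
    """Pick the smallest conversion ratio whose ideal output covers the target."""
    for r in RATIOS:  # ordered from smallest to largest
        if float(r) * v_in >= V_OUT:
            return r
    return RATIOS[-1]  # fall back to the highest step-up ratio

for v in (2.3, 1.6, 1.1, 0.85):
    print(f"V_in = {v:.2f} V -> ratio {select_ratio(v)}")
# 2.3 V -> 1/2, 1.6 V -> 2/3, 1.1 V -> 1, 0.85 V -> 3/2
```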
Abstract:
This Work Project analyzes the evolution of the progressivity of the Portuguese personal income tax system over the period 2005–2013. It presents the first computation of cardinal progressivity measures using administrative tax data for Portugal. We compute several progressivity indices and find that progressivity varied only modestly from 2005 to 2012, whereas from 2012 to 2013 there was a relatively stronger decrease, excluding the impact of the income tax surcharge of 2012 and 2013. When the surcharge is included, the progressivity of 2012 and 2013 decreases considerably. Analyzing the effective average tax rates of the top income percentiles, we find that these rates increased over the period 2010–2013, suggesting that an analysis of effective tax rates is insufficient to assess the progressivity of the tax scheme as a whole.
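One widely used cardinal progressivity index is the Kakwani index: the concentration coefficient of tax payments minus the Gini of pre-tax income. The sketch below computes it on toy data and is not tied to the paper's administrative dataset:

```python
import numpy as np

def concentration(values, rank_by):
    """Concentration coefficient of `values` when units are ranked by `rank_by`.
    With rank_by == values this reduces to the Gini coefficient."""
    order = np.argsort(rank_by)
    v = np.asarray(values, dtype=float)[order]
    n = len(v)
    cum = np.cumsum(v) / v.sum()
    # trapezoidal area under the concentration (Lorenz-type) curve
    area = (np.concatenate(([0.0], cum[:-1])) + cum).sum() / (2 * n)
    return 1 - 2 * area

def kakwani(gross_income, tax):
    return concentration(tax, gross_income) - concentration(gross_income, gross_income)

income = np.array([10_000, 20_000, 40_000, 80_000, 160_000])
tax = np.array([500, 2_000, 6_000, 16_000, 40_000])  # progressive schedule
print(f"Kakwani index: {kakwani(income, tax):.3f}")   # > 0 => progressive
```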
Abstract:
Information systems are widespread and used by anyone with a computing device, as well as by corporations and governments. Security leaks are often introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether programs protect the confidentiality of the information they manipulate. We also implemented a prototype typechecker, available at http://ctp.di.fct.unl.pt/DIFTprototype/.
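The flavour of value-dependent security levels can be conveyed with a runtime check; note that the thesis's contribution is a static type system, so this dynamic approximation with hypothetical names is only illustrative:

```python
from dataclasses import dataclass

LEVELS = {"public": 0, "confidential": 1}  # illustrative two-point lattice

@dataclass
class Record:
    owner: str
    body: str

    def body_level(self) -> int:
        # The security level of `body` depends on a runtime value (`owner`).
        return LEVELS["public" if self.owner == "guest" else "confidential"]

def flow_allowed(source_level: int, sink_level: int) -> bool:
    """Information may only flow upwards in the lattice (no leaks)."""
    return source_level <= sink_level

rec = Record(owner="alice", body="salary: 4200")
print(flow_allowed(rec.body_level(), LEVELS["public"]))  # False: would leak
```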
Abstract:
Hospitals nowadays collect vast amounts of data related to patient records. These data hold valuable knowledge that can be used to improve hospital decision making. Data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes the implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013 and related to inpatient hospitalization, were collected from a Portuguese hospital. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which presents a high coefficient of determination (0.81). This model was then opened up using a sensitivity analysis procedure that revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized, and the associated medical specialty. The extracted knowledge confirmed that the obtained predictive model is credible and has potential value for supporting the decisions of hospital managers.
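A minimal sketch of the modeling step follows, using scikit-learn's RandomForestRegressor on synthetic stand-ins for the hospitalization indicators (the real study used 14 inputs within the CRISP-DM pipeline described above):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins: age, episode type (coded), medical specialty (coded)
X = np.column_stack([
    rng.integers(0, 100, n),   # age
    rng.integers(0, 4, n),     # episode type
    rng.integers(0, 20, n),    # medical specialty
])
y = 2 + 0.05 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 1, n)  # length of stay (days)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {r2_score(y_te, model.predict(X_te)):.2f}")
```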
Abstract:
Customer lifetime value (LTV) enables using client characteristics, such as recency, frequency and monetary (RFM) value, to describe the value of a client through time in terms of profitability. We present the concept of LTV applied to telemarketing for improving the return on investment, using a recent (from 2008 to 2013) and real case study of bank campaigns to sell long-term deposits. The goal was to benefit from past contact history to extract additional knowledge. A total of twelve LTV input variables were tested, under a forward selection method and using a realistic rolling-windows scheme, highlighting the validity of five new LTV features. The results achieved by our LTV data-driven approach using neural networks allowed an improvement of up to 4 percentage points in the cumulative Lift curve for targeting deposit subscribers when compared with a baseline model (with no history data). Explanatory knowledge was also extracted from the proposed model, revealing two highly relevant LTV features: the result of the previous campaign to sell the same product and the frequency of past client successes. The obtained results are particularly valuable for contact center companies, which can improve predictive performance without even having to ask for more information from the companies they serve.
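The cumulative Lift metric mentioned above can be computed as sketched below: rank clients by predicted score and measure the share of all subscribers captured in the top percentile, relative to random targeting (toy data; names and distributions are assumptions):

```python
import numpy as np

def cumulative_lift(y_true, y_score, percentile=0.1):
    """Share of all positives captured in the top `percentile` of scores,
    divided by `percentile` (random targeting gives lift 1.0)."""
    order = np.argsort(y_score)[::-1]
    k = max(1, int(len(y_true) * percentile))
    captured = np.asarray(y_true)[order][:k].sum() / np.asarray(y_true).sum()
    return captured / percentile

# Toy example: scores mildly informative about subscription
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.1, 10_000)
scores = y * rng.normal(0.6, 0.3, y.size) + (1 - y) * rng.normal(0.4, 0.3, y.size)
print(f"lift@10%: {cumulative_lift(y, scores, 0.1):.2f}")
```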
Abstract:
Traffic Engineering (TE) approaches are increasingly important in network management, allowing optimized configuration and resource allocation. In link-state routing, setting appropriate link weights is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work evaluates three distinct EAs, one single-objective and two multi-objective, on two tasks related to weight-setting optimization towards optimal intra-domain routing, given the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system: the first considers changes in the traffic demand matrices and the second the possibility of link failures. The methods thus need to optimize simultaneously for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective problem, the use of multi-objective EAs such as SPEA2 and NSGA-II came naturally, and these were compared with a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well to harder instances and thus presenting itself as the most promising option for TE in these scenarios.
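At the core of multi-objective EAs such as NSGA-II is Pareto dominance over the two congestion objectives (normal state, altered state). A minimal non-dominated filter, sketched under that bi-objective framing:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better
    in at least one (both objectives are congestion measures to minimize)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (congestion_normal, congestion_after_failure) for candidate weight settings
candidates = [(1.2, 3.0), (1.5, 2.1), (2.0, 2.0), (1.3, 2.8), (1.2, 3.5)]
print(non_dominated(candidates))
# [(1.2, 3.0), (1.5, 2.1), (2.0, 2.0), (1.3, 2.8)] -- (1.2, 3.5) is dominated
```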
Abstract:
Earthworks aim at levelling the ground surface at a target construction area and precede any kind of structural construction (e.g., road and railway construction). They comprise sequential tasks, such as excavation, transportation, spreading and compaction, and rely heavily on mechanical equipment and repetitive processes. In this context, it is essential to optimize the usage of all available resources under two key criteria: the cost and duration of earthwork projects. In this paper, we present an integrated system that uses two artificial-intelligence-based techniques: data mining and evolutionary multi-objective optimization. The former is used to build data-driven models capable of providing realistic estimates of resource productivity, while the latter is used to optimize resource allocation considering the two main earthwork objectives (duration and cost). Experiments held using real-world data from a construction site have shown that the proposed system is competitive when compared with current manual earthwork design.
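The coupling between the two modules can be sketched as follows: a productivity estimate (which the real system would obtain from the data-mining models; here a constant stand-in) feeds the evaluation of each candidate allocation against the two objectives:

```python
def evaluate_allocation(volume_m3, trucks, cost_per_truck_hour,
                        predicted_productivity_m3h):
    """Return (duration, cost) for a haulage allocation.
    `predicted_productivity_m3h` stands in for the data-mining model output."""
    duration_h = volume_m3 / (trucks * predicted_productivity_m3h)
    cost = duration_h * trucks * cost_per_truck_hour
    return duration_h, cost

# More trucks: shorter duration, (here) equal cost -- the duration/cost
# trade-off emerges once fixed costs, interference between machines and
# heterogeneous fleets enter the model.
for trucks in (2, 4, 8):
    d, c = evaluate_allocation(10_000, trucks, 80.0, 50.0)
    print(f"{trucks} trucks: {d:.0f} h, {c:,.0f} EUR")
```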
Abstract:
"Lecture notes in computer science series, ISSN 0302-9743, vol. 9273"
Abstract:
The RMR system is still widely applied in rock mechanics engineering. It is based on the evaluation of six weights that are summed to obtain a final rating. Obtaining the final rating requires a considerable amount of information about the rock mass, which can be difficult to obtain accurately in some projects or project stages. In 2007, an alternative classification scheme based on the RMR, the Hierarchical Rock Mass Rating (HRMR), was presented. Because it follows a decision-tree approach, its main feature is that the classification adapts to the level of knowledge available about the rock mass. However, the HRMR was only valid for hard granite rock masses with low degrees of fracturing. In this work, the database was enlarged with approximately 40% more cases covering other types of granite rock masses, including weathered granites, and the system was updated on the basis of this increased database. The rock formations in the north of Portugal, including the city of Porto, are predominantly granitic. Some years ago, a light rail infrastructure was built in the city of Porto and the surrounding municipalities, which involved considerable challenges due to the high heterogeneity of the granite formations and the difficulties involved in their geomechanical characterization. This work is also intended to contribute to improving the characterization of these formations, with special emphasis on the weathered horizons; a specific subsystem applicable to the weathered formations was developed. The results of the validation of these systems are presented and show acceptable performance in identifying the correct class using less information than the RMR system requires.
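For context, the final RMR rating is simply the sum of the six parameter weights, mapped to one of five classes. A sketch using the conventional class boundaries (an assumption for illustration; the updated HRMR thresholds may differ):

```python
RMR_CLASSES = [  # (lower bound, class, description) -- conventional boundaries
    (81, "I", "very good rock"),
    (61, "II", "good rock"),
    (41, "III", "fair rock"),
    (21, "IV", "poor rock"),
    (0, "V", "very poor rock"),
]

def rmr_class(weights):
    """Sum the six RMR parameter weights and map the total to a class."""
    assert len(weights) == 6, "RMR uses six parameter ratings"
    total = sum(weights)
    for lower, cls, desc in RMR_CLASSES:
        if total >= lower:
            return total, cls, desc

# strength, RQD, spacing, discontinuity condition, groundwater, orientation adj.
print(rmr_class([12, 17, 15, 20, 10, -5]))  # (69, 'II', 'good rock')
```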
Abstract:
In this paper, we propose an extension of the firefly algorithm (FA) to multi-objective optimization. FA is a swarm intelligence optimization algorithm, inspired by the flashing behavior of fireflies at night, that is capable of computing global solutions to continuous optimization problems. Our proposal relies on a fitness assignment scheme that gives lower fitness values to the positions of fireflies corresponding to non-dominated points with a smaller aggregate distance of the objective function values to their minima. Furthermore, FA randomness is based on the spread metric, so as to reduce the gaps between consecutive non-dominated solutions. Results from preliminary computational experiments show that our proposal yields a dense and well-distributed approximate Pareto front with a large number of points.
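The underlying single-objective FA movement rule, onto which the proposed fitness assignment is grafted, is sketched below (standard form of the update; parameter values are illustrative):

```python
import math
import random

def move_firefly(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward brighter firefly j (standard FA update):
    x_i <- x_i + beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*(rand - 0.5)."""
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(x_i, x_j)]

print(move_firefly([0.0, 0.0], [1.0, 1.0]))
```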
Abstract:
The normalized differential cross section for top-quark pair production in association with at least one jet is studied as a function of the inverse of the invariant mass of the tt̄+1-jet system. This distribution can be used for a precise determination of the top-quark mass, since gluon radiation depends on the mass of the quarks. The experimental analysis is based on proton–proton collision data collected by the ATLAS detector at the LHC at a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.6 fb⁻¹. The selected events were identified using the lepton+jets top-quark-pair decay channel, where lepton refers to either an electron or a muon. The observed distribution is compared to a theoretical prediction at next-to-leading-order accuracy in quantum chromodynamics using the pole-mass scheme. With this method, the measured value of the top-quark pole mass is m_t^pole = 173.7 ± 1.5 (stat.) ± 1.4 (syst.) +1.0/−0.5 (theory) GeV. This result represents the most precise measurement of the top-quark pole mass to date.
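The "inverse of the invariant mass" observable is conventionally defined as the dimensionless variable below (following the proposal this kind of analysis builds on; m₀ denotes an arbitrary constant of the order of the top-quark mass):

```latex
\mathcal{R}\!\left(m_t^{\mathrm{pole}}, \rho_s\right) =
\frac{1}{\sigma_{t\bar{t}+1\text{-jet}}}
\frac{\mathrm{d}\sigma_{t\bar{t}+1\text{-jet}}}{\mathrm{d}\rho_s},
\qquad
\rho_s = \frac{2 m_0}{\sqrt{s_{t\bar{t}+1\text{-jet}}}}
```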
Abstract:
Integrated master's dissertation in Industrial Electronics and Computers Engineering
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management