870 results for Consumption Predicting Model
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative, and geospatial variables that differ in scale, weight, and type. Though many of these variables are recognized by specialists in security studies, controversy remains with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression produced the most accurate results by accommodating non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
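Since the dissertation's key methodological finding is that geographically weighted regression (GWR) best captures non-stationary coefficients, a minimal sketch of a GWR fit is shown below, assuming the open-source mgwr package; the coordinates, covariates, and data are synthetic stand-ins, not the French case-study variables.

```python
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))    # locations (hypothetical centroids)
X = rng.normal(size=(n, 2))                  # e.g. two socio-economic covariates
b1 = 0.5 + coords[:, 0] / 100                # effect of X[:, 0] drifts eastwards
y = (1.0 + b1 * X[:, 0] - 0.3 * X[:, 1]
     + rng.normal(scale=0.2, size=n)).reshape(-1, 1)

bw = Sel_BW(coords, y, X).search()           # data-driven kernel bandwidth
results = GWR(coords, y, X, bw).fit()        # one weighted regression per site
print("bandwidth:", bw)
print(results.params[:3])                    # local (non-stationary) coefficients
```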
Abstract:
Alachlor has been a commonly applied herbicide and is a substance of ecotoxicological concern. The present study aims to identify molecular biomarkers in the eukaryotic model Saccharomyces cerevisiae that can be used to predict potential cytotoxic effects of alachlor, while providing new mechanistic clues with possible relevance for experimentally less accessible eukaryotes. It focuses on genome-wide expression profiling of a yeast population in response to two exposure scenarios exerting slight to moderate effects at the phenotypic level. In particular, 100 and 264 genes, respectively, were found to be differentially expressed after a 2-h exposure of yeast cells to the lowest-observed-effect concentration (110 mg/L) and the 20% inhibitory concentration (200 mg/L) of alachlor, in comparison with cells not exposed to the herbicide. The datasets of alachlor-responsive genes showed functional enrichment in diverse metabolic, transmembrane transport, cell defense, and detoxification categories. In general, the modifications in transcript levels of selected candidate biomarkers, assessed by quantitative reverse transcriptase polymerase chain reaction, confirmed the microarray data and varied consistently with the growth-inhibitory effects of alachlor. Approximately 16% of the proteins encoded by the alachlor-differentially expressed genes were found to share significant homology with proteins from ecologically relevant eukaryotic species. The biological relevance of these results is discussed in relation to new insights into the potential adverse effects of alachlor on the health of organisms in ecosystems, particularly in worst-case situations such as accidental spills or careless storage, usage, and disposal.
Abstract:
The present study is based upon a multidimensional model of successful aging. It aims to identify subgroups of centenarians sharing commonalities in successful aging profiles, and to determine the role of sociodemographic factors and psychological, social, and economic resources in successful aging.
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific piece of software is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution, then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power-predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place in the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
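The average-case energy model lends itself to a compact illustration. Below is a minimal sketch: the closed form for the expected number of comparisons of insertion sort on a random permutation is a standard result, while the two energy coefficients are placeholders, not the LEON3 parameters measured in the thesis.

```python
from math import fsum

def avg_comparisons(n: int) -> float:
    """Expected comparisons of insertion sort on a random n-permutation:
    (n^2 + 3n)/4 - H_n, where H_n is the n-th harmonic number."""
    harmonic = fsum(1.0 / i for i in range(1, n + 1))
    return (n * n + 3 * n) / 4.0 - harmonic

def predicted_energy(n: int, e_base: float, e_per_cmp: float) -> float:
    """Energy model E(n) = E_base + e_cmp * C_avg(n); the coefficients are
    fitted per processor from power measurements (hypothetical values here)."""
    return e_base + e_per_cmp * avg_comparisons(n)

print(predicted_energy(1000, e_base=5e-6, e_per_cmp=2e-9))  # joules, illustrative
```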
Abstract:
In this thesis, a machine learning approach was used to develop a predictive model for residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB™ computational environment, supplemented with the Statistics and Machine Learning Toolbox™ from MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. As a result, the Gaussian Process Regression algorithm was found to provide the best results and was used to create the predictive model. The model was compiled into a stand-alone application with a graphical user interface using the MATLAB Compiler™.
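The thesis builds its model in MATLAB; a roughly equivalent Gaussian Process Regression sketch in Python with scikit-learn is shown below. The feature names and data are hypothetical stand-ins for the Akzo Nobel process variables.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(1)
# Hypothetical process features: reactor temperature, feed ratio, flow rate.
X = rng.uniform([320, 0.8, 1.0], [360, 1.2, 3.0], size=(150, 3))
y = 0.02 * X[:, 0] - 4.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 150)

kernel = ConstantKernel() * RBF(length_scale=[10.0, 0.1, 0.5]) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gpr.predict(X[:3], return_std=True)  # prediction with uncertainty
print(mean, std)  # residual methanol estimate plus a confidence band
```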
Abstract:
Cardiovascular disease is one of the leading causes of death around the world. Resting heart rate has been shown to be a strong and independent risk marker for adverse cardiovascular events and mortality, and yet its role as a predictor of risk is somewhat overlooked in clinical practice. With the aim of highlighting its prognostic value, the role of resting heart rate as a risk marker for death and other adverse outcomes was further examined in a number of different patient populations. A systematic review of studies that previously assessed the prognostic value of resting heart rate for mortality and other adverse cardiovascular outcomes was presented. New analyses of nine clinical trials were carried out. Both the original Cox model and the extended Cox model that allows for analysis of time-dependent covariates were used to evaluate and compare the predictive value of baseline and time-updated heart rate measurements for adverse outcomes in the CAPRICORN, EUROPA, PROSPER, PERFORM, BEAUTIFUL and SHIFT populations. Pooled individual-patient meta-analyses of the CAPRICORN, EPHESUS, OPTIMAAL and VALIANT trials, and of the BEAUTIFUL and SHIFT trials, were also performed. The discrimination and calibration of the models applied were evaluated using Harrell's C-statistic and likelihood ratio tests, respectively. Finally, following on from the systematic review, meta-analyses of the relation between baseline and time-updated heart rate and the risk of death from any cause and from cardiovascular causes were conducted. Both elevated baseline and elevated time-updated resting heart rates were found to be associated with an increased risk of mortality and other adverse cardiovascular events in all of the populations analysed. In some cases, elevated time-updated heart rate was associated with risk of events where baseline heart rate was not. Time-updated heart rate also contributed additional information about the risk of certain events beyond knowledge of baseline heart rate or previous heart rate measurements. The addition of resting heart rate to the models where it was found to be associated with risk of outcome improved both discrimination and calibration, and in general, the models including time-updated heart rate along with baseline or the previous heart rate measurement had the highest, and similar, C-statistics, and thus the greatest discriminative ability. The meta-analyses demonstrated that a 5 bpm higher baseline heart rate was associated with a 7.9% and an 8.0% increase in the risk of all-cause and cardiovascular death, respectively (both p < 0.001). Additionally, a 5 bpm higher time-updated heart rate (adjusted for baseline heart rate in eight of the ten studies included in the analyses) was associated with a 12.8% (p < 0.001) and a 10.9% (p < 0.001) increase in the risk of all-cause and cardiovascular death, respectively. These findings may motivate health care professionals to routinely assess resting heart rate in order to identify individuals at a higher risk of adverse events. The fact that the addition of time-updated resting heart rate improved the discrimination and calibration of models for certain outcomes, even if only modestly, strengthens the case for adding it to traditional risk models. The findings are of particular importance, and have the greatest implications, for the clinical management of patients with pre-existing disease.
An elevated or increasing heart rate over time could be used as a tool, potentially alongside other established risk scores, to help doctors identify patient deterioration or those at higher risk, who might benefit from more intensive monitoring or treatment re-evaluation. Further exploration of the role of continuous recording of resting heart rate, say, when patients are at home, would be informative. In addition, investigation into the cost-effectiveness and optimal frequency of resting heart rate measurement is required. One of the most vital areas for future research is the establishment of an objective cut-off value defining a high resting heart rate.
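A minimal sketch of the time-updated covariate analysis described above, using an extended Cox model as implemented by the lifelines package's CoxTimeVaryingFitter; the long-format data frame below is synthetic, and the trial data are of course not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(2)
rows = []
for pid in range(200):
    hr = rng.normal(70, 10)                 # baseline resting heart rate (bpm)
    start = 0.0
    for _ in range(int(rng.integers(1, 4))):
        stop = start + rng.uniform(6, 18)   # months until next measurement
        # Event probability grows with the current (time-updated) heart rate.
        event = rng.random() < 0.15 * np.exp(0.03 * (hr - 70))
        rows.append((pid, start, stop, hr, int(event)))
        if event:
            break
        start = stop
        hr += rng.normal(0, 5)              # heart rate drifts between visits

df = pd.DataFrame(rows, columns=["id", "start", "stop", "hr", "event"])
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # log-hazard per 1 bpm; multiply by 5 for a 5 bpm contrast
```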
Abstract:
This paper proposes a novel demand response model using a fuzzy subtractive clustering approach. The model provides support for domestic consumers' decisions on controllable load management, considering consumers' consumption needs and the appropriate load shaping or rescheduling needed to achieve possible economic benefits. The model, based on the fuzzy subtractive clustering method, considers clusters of domestic consumption covering an adequate consumption range. An analysis of different scenarios is presented, considering available electric power and electric energy prices. Simulation results are presented and conclusions of the proposed demand response model are discussed.
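The clustering step at the core of the proposal can be illustrated compactly. Below is a minimal sketch of subtractive clustering (in the sense of Chiu, 1994) applied to hypothetical normalised consumption profiles; the parameter values are illustrative, not those of the paper.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb_factor=1.5, eps=0.15):
    """Return cluster centres chosen by potential reduction; X is normalised."""
    alpha = 4.0 / ra**2
    beta = 4.0 / (rb_factor * ra)**2
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)   # pairwise squared dists
    potential = np.exp(-alpha * d2).sum(axis=1)          # "mountain" potentials
    first_peak = potential.max()
    centres = []
    while potential.max() > eps * first_peak:
        c = potential.argmax()
        centres.append(X[c])
        # Subtract the selected centre's influence to reveal the next peak.
        potential -= potential[c] * np.exp(-beta * d2[c])
    return np.array(centres)

rng = np.random.default_rng(3)
# Hypothetical normalised consumption profiles around three typical patterns.
X = np.vstack([rng.normal(m, 0.05, size=(40, 2)) for m in (0.2, 0.5, 0.8)])
print(subtractive_clustering(X))   # one centre per consumption cluster
```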
Abstract:
We present a multiscale model bridging length and time scales from molecular to continuum levels, with the objective of predicting the yield behavior of amorphous glassy polyethylene (PE). Constitutive parameters are obtained from molecular dynamics (MD) simulations, decreasing the requirement for ad hoc experiments. Consequently, we achieve: (1) the identification of multisurface yield functions; (2) the upscaling of the high strain rates involved in MD simulations to the continuum via quasi-static simulations, where validation demonstrates that the entire multisurface yield functions can be scaled to quasi-static rates at which the yield stresses can be predicted by a proposed scaling law; (3) a hierarchical multiscale model constructed to predict the temperature- and strain-rate-dependent yield strength of the PE.
Abstract:
In this article we use an autoregressive fractionally integrated moving average approach to measure the degree of fractional integration of aggregate world CO2 emissions and its five components: coal, oil, gas, cement, and gas flaring. We find that all variables are stationary and mean-reverting, but exhibit long-term memory. Our results suggest that coal and oil combustion emissions have the weakest degree of long-range dependence, while emissions from gas and gas flaring have the strongest. With evidence of long memory, we conclude that transitory policy shocks are likely to have long-lasting, but not permanent, effects. Accordingly, permanent effects on CO2 emissions require a more permanent policy stance. In this context, if one were to rely only on testing for stationarity and non-stationarity, one would likely conclude in favour of non-stationarity, and therefore that even transitory policy shocks would have permanent effects.
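As an illustration of measuring fractional integration, a minimal log-periodogram (Geweke and Porter-Hudak) estimator of the order d is sketched below. This is a standard complement to ARFIMA estimation, not the article's exact procedure, and the input series is synthetic white noise (true d = 0) rather than the emissions data.

```python
import numpy as np

def gph_estimate(x, m=None):
    """GPH estimator of d from the first m Fourier frequencies (m ~ sqrt(n))."""
    n = len(x)
    m = m or int(np.sqrt(n))
    periodogram = np.abs(np.fft.fft(x - x.mean()))**2 / (2 * np.pi * n)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    regressor = -np.log(4 * np.sin(freqs / 2)**2)
    # log I(w_j) = c + d * regressor + error, so the slope estimates d;
    # 0 < d < 0.5 indicates a stationary, mean-reverting, long-memory series.
    slope, _ = np.polyfit(regressor, np.log(periodogram[1:m + 1]), 1)
    return slope

rng = np.random.default_rng(4)
print(gph_estimate(rng.normal(size=2000)))  # close to 0 for white noise
```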
Abstract:
This PhD thesis reports the main activities carried out during the three-year “Mechanics and advanced engineering sciences” course at the Department of Industrial Engineering of the University of Bologna. The research project title is “Development and analysis of high efficiency combustion systems for internal combustion engines”, and the central topic is knock, one of the main challenges for boosted gasoline engines. Through experimental campaigns, modelling activity, and test bench validation, four different aspects have been addressed to tackle the issue. The main path goes towards the definition and calibration of a knock-induced damage model, to be implemented in the on-board control strategy but also usable for engine calibration and potentially during engine design. The capabilities of the ionization current signal have been investigated to fully replace the pressure sensor, in order to develop a robust on-board closed-loop combustion control strategy in both knock-free and knock-limited conditions. Water injection is a powerful solution to mitigate knock intensity and exhaust temperature, improving fuel consumption; its capabilities have been modelled and validated at the test bench. Finally, an empirical model is proposed to predict the engine knock response as a function of several operating-condition and control parameters, including the injected water quantity.
Abstract:
This work summarizes a wide variety of research activities carried out with the main objective of increasing the efficiency and reducing the fuel consumption of Gasoline Direct Injection engines, especially under high loads. For this purpose, two main innovative technologies have been studied, Water Injection and Low-Pressure Exhaust Gas Recirculation, which help to reduce the temperature of the gases inside the combustion chamber and thus mitigate knock, one of the main limiting factors for the efficiency of modern downsized engines that operate at high specific power. A prototype Port Water Injection system was developed, and extensive experimental work was carried out, initially to identify the benefits and limitations of this technology. This led to the subsequent development and testing of a combustion controller, implemented in a Rapid Control Prototyping environment, capable of managing water injection to achieve knock mitigation and a more efficient combustion phase. Regarding Low-Pressure Exhaust Gas Recirculation, a commercial engine already equipped with this technology was used to carry out experimental work in a similar fashion to that on water injection. Another prototype water injection system was mounted on this second engine so that both technologies could be tested, first separately, to compare them under equal conditions, and then together, in search of a possible synergy. Additionally, based on experimental data from several engines tested during this study, including both GDI and GCI engines, a real-time model (or virtual sensor) for the estimation of the maximum in-cylinder pressure has been developed and validated. This parameter is of vital importance for determining the rate at which damage occurs in engine components, and therefore for extracting the maximum performance without inducing permanent damage.
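The thesis does not publish the structure of its virtual sensor, so the sketch below only illustrates the general idea under stated assumptions: regress the maximum in-cylinder pressure on operating parameters available in real time. All feature names, coefficients, and data here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n = 500
rpm = rng.uniform(1000, 6000, n)        # engine speed
load = rng.uniform(2, 20, n)            # load, e.g. BMEP [bar]
spark_adv = rng.uniform(-5, 30, n)      # spark advance [deg BTDC]
water = rng.uniform(0, 0.4, n)          # injected water/fuel ratio
X = np.column_stack([rpm, load, spark_adv, water])
# Synthetic target: peak pressure rises with load and advance, water tempers it.
p_max = 20 + 3.2 * load + 0.6 * spark_adv - 15 * water + rng.normal(0, 2, n)

sensor = Ridge(alpha=1.0).fit(X, p_max)     # lightweight real-time estimator
print(sensor.predict(X[:3]))                # estimated peak pressure [bar]
```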
Abstract:
This study analyses the calibration process of a newly developed high-performance plug-in hybrid electric passenger car powertrain. The complexity of modern powertrains and increasingly restrictive regulations on pollutant emissions are the primary challenges in calibrating a vehicle's powertrain. In addition, OEM managers need to know as early as possible whether the vehicle under development will meet its target technical features (emissions included). This leads to the need for advanced calibration methodologies that keep the development of the powertrain robust, time-effective, and cost-effective. The suggested solution is virtual calibration, which allows the control functions of a powertrain to be tuned before the hardware is built. The aim of this study is to calibrate the hybrid control unit functions virtually in order to optimize pollutant emissions and fuel consumption. Starting from the model of the conventional vehicle, the powertrain is hybridized and integrated with emissions and aftertreatment models. After validation, the hybrid control unit strategies are optimized using the Model-in-the-Loop testing methodology. The calibration activities will proceed with the implementation of a Hardware-in-the-Loop environment, which will allow the Engine and Transmission control units to be tested and calibrated effectively, and in a time- and cost-saving manner.
Abstract:
Waste prevention (WP) is a strategy that helps societies and individuals strive for sufficiency in resource consumption within planetary boundaries, alongside sustainable and equitable well-being, and decouple the concepts of well-being and life satisfaction from materialism. Within this dissertation, several instruments to promote WP are analysed from two perspectives: first, that of policymakers at different governance levels, and second, that of businesses in the electrical and electronic equipment (EEE) sector. At the national level, the role of WP programmes and market-based instruments (extended producer responsibility, pay-as-you-throw schemes, deposit-refund systems, environmental taxes) in boosting the prevention of municipal solid waste is investigated. Then, focusing on the Emilia-Romagna Region (Italy), the performance of the waste management system is assessed over a long period, including several years before and after an institutional reform of the waste management governance regime. The impact on waste generation and WP of centralising both planning and economic regulation of waste services at the regional level is analysed. Finally, to support regional decision-makers in prioritising publicly funded WP projects, a framework for sustainability assessment, evaluation of success, and prioritisation of WP measures was applied to several projects implemented by Municipalities in the Region. To help close the research gap between engineering and business, WP strategies are discussed as drivers for business model (BM) innovation in the EEE sector. First, an innovative approach to a digital tracking solution for professional EEE management is analysed, and new BMs that facilitate repair, reuse, remanufacturing, and recycling are created and discussed. Second, the impact of BMs based on servitisation and producer ownership on the extension of equipment lifetime is analysed through a review of real cases of organizations in the EEE sector applying result- and use-oriented BMs.
Abstract:
Cancer is a challenging disease that involves multiple types of biological interactions on different time and space scales. Computational modelling often faces problems that, at the current level of technology, are impracticable to represent in a single space-time continuum. To handle this sort of problem, complex orchestrations of multiscale models are frequently employed. PRIMAGE is a large EU project that aims to support personalized childhood cancer diagnosis and prognosis by predicting the growth of the solid tumour using multiscale in-silico technologies. The project proposes an open cloud-based platform to support decision making in the clinical management of paediatric cancers. The orchestration of predictive models is in general complex and requires a software framework that supports and facilitates the task. The present work proposes the development of an updated framework, referred to herein as VPH-HFv3, as part of the PRIMAGE project. This framework, a complete rewrite with respect to the previous versions, aims to orchestrate several models, which are in concurrent development, using an architecture that is as simple as possible, easy to maintain, and highly reusable. This sort of problem generally entails unfeasible execution times. To overcome this, a strategy was developed combining particularisation, which maps the upper-scale model results onto a smaller number of representative states, and homogenisation, which performs the inverse mapping; the accuracy of this approach was analysed.
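A minimal sketch of the particularisation/homogenisation idea under stated assumptions: cluster many fine-scale states into a few representatives, run the expensive lower-scale model only on those, and map the results back to every cell. The "expensive model" below is a placeholder, not a PRIMAGE component.

```python
import numpy as np
from sklearn.cluster import KMeans

def expensive_model(state):
    """Placeholder for a costly lower-scale simulation of one state."""
    return state.sum() ** 0.5

rng = np.random.default_rng(6)
fine_states = rng.uniform(size=(10_000, 4))    # upper-scale model output per cell

k = 32                                         # particularisation: 10,000 -> 32
km = KMeans(n_clusters=k, n_init=10).fit(fine_states)
rep_results = np.array([expensive_model(c) for c in km.cluster_centers_])

homogenised = rep_results[km.labels_]          # inverse map back to all cells
print(homogenised.shape)                       # one result per fine-scale cell
```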