917 results for C51 - Model Construction and Estimation
Abstract:
This thesis focuses on energy efficiency in wireless networks from both the transmission and the information diffusion points of view. On one hand, communication efficiency is investigated, with the aim of reducing consumption during transmissions; on the other, the energy efficiency of the procedures required to distribute information among wireless nodes in complex networks is taken into account. Concerning energy efficient communications, an innovative transmission scheme that reuses signals of opportunity is introduced; signals of this kind have not previously been studied in the literature for communication purposes. The aim is to provide a way of transmitting information with energy consumption close to zero. On the theoretical side, starting from a general communication channel model subject to a limited input amplitude, the theme of low power transmission signals is tackled from the perspective of stating sufficient conditions for the capacity-achieving input distribution to be discrete. Finally, the focus shifts to the design of energy efficient algorithms for the diffusion of information, in particular to solving an estimation problem distributed over a wireless sensor network. The proposed solutions are analyzed in depth, both to ensure their energy efficiency and to guarantee their robustness against losses during the diffusion of information (and, more generally, against truncation of the information diffusion).
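As a loose illustration of the distributed estimation setting described in this abstract, the sketch below runs consensus averaging over a lossy wireless network. The ring topology, step size and loss probability are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

# Hypothetical setup: each sensor holds a noisy scalar measurement and the
# network must agree on their average (a basic distributed estimation task).
rng = np.random.default_rng(0)
n = 10                                       # number of sensor nodes
x = rng.normal(loc=5.0, scale=1.0, size=n)   # local measurements
target = x.mean()                            # value consensus should reach

# Ring topology (illustrative): A[i, j] = 1 if nodes i and j can communicate.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

eps = 0.3        # consensus step size, below 1 / (max node degree)
p_drop = 0.2     # probability that a link's message is lost in a round

for _ in range(200):
    # Random link failures model lossy wireless exchanges.
    alive = (rng.random((n, n)) > p_drop) * A
    alive = np.minimum(alive, alive.T)       # keep only bidirectional successes
    # Standard consensus update: each node moves toward its neighbours.
    x = x + eps * (alive @ x - alive.sum(axis=1) * x)

print("node estimates:", np.round(x, 3))
print("sample average:", round(target, 3))
```

Because the surviving communication mask is kept symmetric, every update preserves the network sum, so the nodes still converge to the sample average of their measurements even though a fraction of the messages is lost each round.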
Abstract:
We introduce a version of operational set theory, OST−, without a choice operation, which has machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke–Platek set theory KP with infinity are pairwise Π1 equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways, and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definition in KP, the lower bound is obtained by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, although the interpretability is between classical theories, we make “a detour via intuitionistic theories”. The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new way of constructing models for classical theories and could be called the third kind of model construction ever used that is non-trivial at the level of logical connectives, after generic extension à la Cohen and Krivine's classical realisability models.
Abstract:
We partially solve a long-standing problem in the proof theory of explicit mathematics, and in proof theory in general. Namely, we give a lower bound for Feferman's system T0 of explicit mathematics (but only when formulated over classical logic) with a concrete interpretation of the subsystem Σ¹₂-AC + (BI) of second order arithmetic inside T0. Whereas a lower bound proof in the sense of proof-theoretic reducibility or of ordinal analysis was already given in the 1980s, the lower bound in the sense of interpretability given here is new. We apply the new interpretation method developed by the author and Zumbrunnen (2015), which can be seen as the third kind of model construction method for classical theories, after Cohen's forcing and Krivine's classical realizability. It gives an interpretation between classical theories by composing interpretations between intuitionistic theories.
Abstract:
The construction industry, one of the most important in the development of a country, generates unavoidable impacts on the environment. The social demand for greater respect for the environment is loud and widespread, so the construction industry needs to reduce the impact it produces. Proper waste management is not enough; a further step must be taken in environmental management, introducing new measures for prevention at source, such as good practices promoting recycling. Following the amendment of the legal framework applicable to Construction and Demolition Waste (C&D waste), important developments have been incorporated into European and international laws aiming to promote a culture of reuse and recycling. This change of mindset, which is progressively taking place in society, allows C&D waste to be considered no longer as an unusable waste but as a reusable material. The main objective of the work presented in this paper is to enhance C&D waste management systems through the development of preventive measures during the construction process. These measures concern all the agents intervening in the construction process, since only the personal implication of all of them can ensure efficient management of the C&D waste generated. Finally, a model based on preventive measures achieves organizational cohesion between the different stages of the construction process, as well as promoting the conservation of raw materials through reuse and waste minimization, all in order to achieve a C&D waste management system whose primary goal is zero waste generation.
Abstract:
The difficulty of dealing with construction and demolition waste (CDW) on construction sites is not new and continues to be a significant environmental problem. Currently, CDW collection in Spain is done in a decentralized manner by each sub-contracted company, making it necessary to implement effective waste management measures that ensure correct management and minimization. In recent years several measures have been launched to improve and encourage the reuse and recycling of CDW. A widespread solution for CDW recovery is to use it as a landscaping aggregate or for road bases and sub-bases. However, measures encouraging on-site prevention still need to be enhanced. This paper studies the major work stage generating CDW and analyses the categories of CDW produced during its execution. To this end, several real building sites have been analysed in order to estimate the quantity of CDW generated. The results of this study show that masonry works are a significant contributor to CDW generation on building construction sites in Spain. Finally, a Best Practices Manual (BPM) is proposed, containing several strategies for masonry works aimed not only at CDW prevention but also at improving its management and minimization. The use of this BPM, together with the Study and Plan of CDW management required by law, promotes the environmental management of the company, favouring the cohesion of the construction process organization at all stages, establishing responsibilities in the field of waste and providing greater control over the process. Keywords: construction and demolition waste, management, masonry works, good practice measures, prevention.
Abstract:
Current EU Directives require the Member States to ensure that, by 2020, 70% of Construction and Demolition (C&D) waste is recovered instead of landfilled. While some countries have largely achieved this target, others still have a long way to go. To better understand the differences arising from local disparities, six factors related to technical, economic, legislative and environmental aspects have been identified as crucial influences on the market share of C&D waste recycling solutions. These factors can identify the causes that limit the recycling rate of a given region. Moreover, progress towards efficient waste management can vary through the improvement of a single factor. This study provides the background for further fine-tuning the factors and combining them into a mathematical model for assessing the market share of C&D recycling solutions.
Abstract:
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. Imposing Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak overall.
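As a point of reference for the methodology, the sketch below fits and forecasts a small VAR by OLS using statsmodels on simulated data; the seven-variable FX system and the Bayesian (prior-based) estimation variants of the study are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulated stand-in data (not real FX rates): random-walk levels, differenced
# to returns, mimicking the study's stationary specification on a small scale.
rng = np.random.default_rng(1)
levels = np.cumsum(rng.normal(size=(500, 3)), axis=0)
returns = np.diff(levels, axis=0)

model = VAR(returns)
results = model.fit(2)                       # OLS estimation with 2 lags
fcast = results.forecast(returns[-results.k_ar:], steps=10)
print(fcast.shape)                           # (10, 3): 10-step-ahead forecasts
```

A Bayesian variant would additionally shrink the lag coefficients toward a prior (for example a random-walk prior), which is one mechanism consistent with the forecast gains the abstract reports.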
Abstract:
Strategic sourcing has increased in importance in recent years and now plays an important role in companies' planning. The current volatility in supply markets means companies face multiple challenges involving lock-in situations, supplier bankruptcies or supply security issues. In addition, their exposure can increase due to natural disasters, as witnessed recently in the form of bird flu, volcanic ash and tsunamis. Therefore, the primary focus of this study is risk management in the context of strategic sourcing. The study presents a literature review on sourcing covering the 15 years from 1998 to 2012 and considering 131 academic articles. The literature describes strategic sourcing as a strategic, holistic process for managing supplier relationships, with a long-term focus on adding value to the company and realising competitive advantage. Few studies have uncovered the real risk impact and status of risk management in strategic sourcing, and evaluation across countries and industries has been limited, with the construction sector particularly under-researched. The methodology is founded on a qualitative study of twenty cases from the construction sector and electronics manufacturing industries across Germany and the United Kingdom. While considering risk management in the context of strategic sourcing, the thesis takes into account six dimensions covering trends in strategic sourcing, theoretical and practical sourcing models, risk management, supply and demand management, critical success factors and strategic supplier evaluation. The study contributes in several ways. First, recent trends are traced and future needs identified across the research dimensions of countries, industries and companies. Second, it evaluates critical success factors in contemporary strategic sourcing. Third, it explores the application of theoretical and practical sourcing models in terms of effectiveness and sustainability. Fourth, based on the case study findings, a risk-oriented strategic sourcing framework and a model for strategic sourcing are developed, grounded in the validation of contemporary requirements and a critical evaluation of the existing situation. The framework reflects the empirical findings and leads to a structured process for managing risk in strategic sourcing, considering areas such as trends, corporate and sourcing strategy, critical success factors, strategic supplier selection criteria, risk assessment, reporting and strategy alignment. The proposed model highlights the essential dimensions of strategic sourcing and leads to a new definition of strategic sourcing supported by this empirical study.
Abstract:
An approach is developed for extracting knowledge from the information arriving at the knowledge base input, and for distributing new knowledge over the knowledge subsets already present in the knowledge base. It is also necessary to transform the knowledge into parameters (data) of the model for subsequent decision-making on the given subset. Decision-making is assumed to be realized with the apparatus of fuzzy sets.
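The abstract does not specify its fuzzy-set machinery, so the following is purely an assumed illustration of how extracted knowledge could be turned into a crisp model parameter via rule firing and centroid defuzzification.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0.0, 10.0, 201)     # universe of discourse for the parameter
low = tri(x, 0.0, 2.0, 4.0)         # fuzzy set "low"
high = tri(x, 6.0, 8.0, 10.0)       # fuzzy set "high"

# Suppose the extracted knowledge fires rule "low" at 0.2 and "high" at 0.8.
aggregated = np.maximum(np.minimum(low, 0.2), np.minimum(high, 0.8))

# Centroid defuzzification yields a crisp parameter from the fuzzy conclusion.
crisp = (aggregated * x).sum() / aggregated.sum()
print(round(crisp, 2))              # a value pulled toward "high"
```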
Abstract:
This paper outlines a novel elevation linear Fresnel reflector (ELFR) and presents and validates theoretical models describing its thermal performance. To validate the models, a series of experiments was carried out for receiver temperatures in the range of 30-100 °C, measuring the heat loss coefficient, the gain in heat transfer fluid (HTF) temperature, the thermal efficiency, and the stagnation temperature. The heat loss coefficient was underestimated because the model excludes collector end heat losses. The measured HTF temperature gains correlated well with the model predictions, with less than a 5% difference. In comparison to model predictions for the thermal efficiency and stagnation temperature, measured values differed by -39% to +31% and by 22-38%, respectively. The difference between measured and predicted values was attributed to the low-temperature region used for the experiments. It was concluded that the theoretical models are suitable for examining linear Fresnel reflector (LFR) systems and can be adopted by other researchers.
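For orientation, a back-of-envelope version of the thermal efficiency quantity compared above can be computed from the HTF temperature gain; all numbers below are made-up placeholders, not the paper's measurements.

```python
# Collector thermal efficiency from the measured HTF temperature gain.
# All values are illustrative placeholders.
m_dot = 0.05    # HTF mass flow rate [kg/s]
cp = 4186.0     # HTF specific heat capacity (water assumed) [J/(kg K)]
dT = 8.0        # measured HTF temperature gain across the receiver [K]
dni = 850.0     # direct normal irradiance [W/m^2]
area = 4.0      # mirror aperture area [m^2]

q_useful = m_dot * cp * dT        # useful heat delivered to the HTF [W]
eta = q_useful / (dni * area)     # collector thermal efficiency [-]
print(f"thermal efficiency = {eta:.1%}")   # about 49% with these inputs
```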
Abstract:
Six Ocean Drilling Program (ODP) sites in the Northwest Atlantic have been used to investigate kinematic and chemical changes in the "Western Boundary Undercurrent" (WBUC) during the development of full glacial conditions across the Marine Isotope Stage 5a/4 boundary (~70,000 years ago). Sortable silt mean grain size (SS) measurements are employed to examine changes in near-bottom flow speeds, together with carbon isotopes measured in benthic foraminifera and % planktic foraminiferal fragmentation as proxies for changes in water-mass chemistry. A depth transect of cores spanning 1.8-4.6 km depth allows changes in both the strength and depth of the WBUC to be constrained across millennial-scale events. SS measurements reveal that the flow speed structure of the WBUC during warm intervals ("interstadials") was comparable to modern (Holocene) conditions. However, significant differences are observed during cold intervals, with higher relative flow speeds inferred for the shallow component of the WBUC (~2 km depth) during all cold "stadial" intervals (including Heinrich Stadial 6), and a substantial weakening of the deep component (~3-4 km) during full glacial conditions. Our results therefore reveal that the onset of full glacial conditions was associated with a regime shift to a shallower mode of circulation (involving Glacial North Atlantic Intermediate Water) that was quantitatively distinct from preceding cold stadial events. Furthermore, our chemical proxy data show that the physical response of the WBUC during the last glacial inception was probably coupled to basin-wide changes in the water-mass composition of the deep Northwest Atlantic.
Abstract:
The viscosity of ionic liquids (ILs) has been modeled as a function of temperature at atmospheric pressure using a new method based on the UNIFAC–VISCO method. This model extends the calculations previously reported by our group (see Zhao et al., J. Chem. Eng. Data 2016, 61, 2160–2169), which used 154 experimental viscosity data points of 25 ionic liquids to regress a set of binary interaction parameters and ion Vogel–Fulcher–Tammann (VFT) parameters. Discrepancies in the experimental data for the same IL affect the quality of the correlation and thus the development of the predictive method. In this work, mathematical gnostics was used to analyze the experimental data from different sources and to recommend one set of reliable data for each IL. These recommended data (819 data points in total) for 70 ILs were correlated using this model to obtain an extended set of binary interaction parameters and ion VFT parameters, with a regression accuracy of 1.4%. In addition, 966 experimental viscosity data points for 11 binary mixtures of ILs were collected from the literature for this model. The binary data consist of 128 training data points used for the optimization of binary interaction parameters and 838 test data points used for comparison with the purely predicted values. The relative average absolute deviation (RAAD) is 2.9% for training and 3.9% for testing.
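For reference, the VFT form named above is ln(eta) = A + B/(T - T0); the sketch below fits it to synthetic data with SciPy. The parameter values are placeholders, not the paper's recommended data.

```python
import numpy as np
from scipy.optimize import curve_fit

def vft(T, A, B, T0):
    """VFT form for log-viscosity: ln(eta) = A + B / (T - T0)."""
    return A + B / (T - T0)

# Synthetic data standing in for one IL's recommended viscosity set.
T = np.linspace(293.15, 363.15, 15)                  # temperature [K]
rng = np.random.default_rng(2)
ln_eta = vft(T, -2.0, 800.0, 170.0) + rng.normal(0.0, 0.02, T.size)

params, _ = curve_fit(vft, T, ln_eta, p0=(-1.0, 500.0, 150.0))
print("fitted A, B, T0:", np.round(params, 2))       # close to (-2, 800, 170)
```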
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Due to the increasing integration density and operating frequency of today's high performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the chip's heat dissipation can lead to severe reliability issues and error-prone chip behavior (e.g. timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS) and clock gating. However, most such techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system on a given design that provides accurate real-time temperature feedback; (2) what statistical techniques can be used to estimate the full-chip thermal profile based on very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes can impact the accuracy of the thermal estimation. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then introduce an accurate thermal model for the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that utilize such correlation to estimate accurate chip-level thermal profiles in real time. The estimation is performed from limited sensor information because sensors are usually resource-constrained and noise-corrupted. We also extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system switches among workloads with very distinct characteristics. In experiments, our approaches demonstrate promising results with much higher accuracy than existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
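As a minimal, assumed illustration of the sensor-based estimation idea (not the thesis's full-chip model), a scalar Kalman filter tracking one module's temperature from a single noisy on-chip sensor could look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed scalar thermal dynamics: temperature relaxes toward an 80 C workload
# steady state; process noise q, sensor noise r (all values illustrative).
a, steady, q, r = 0.98, 80.0, 0.05, 2.0
T_true = 60.0                    # true module temperature [deg C]
x_hat, p = 50.0, 10.0            # filter estimate and its variance

for _ in range(200):
    T_true = a * T_true + (1 - a) * steady + rng.normal(0.0, np.sqrt(q))
    z = T_true + rng.normal(0.0, np.sqrt(r))   # noisy on-chip sensor reading

    # Predict: propagate the estimate through the assumed dynamics.
    x_hat = a * x_hat + (1 - a) * steady
    p = a * a * p + q
    # Update: blend the prediction with the sensor via the Kalman gain.
    k = p / (p + r)
    x_hat += k * (z - x_hat)
    p *= (1.0 - k)

print(f"true: {T_true:.1f} C   estimate: {x_hat:.1f} C")
```

The thesis's extensions (leakage-temperature nonlinearity, time-varying statistics) would replace the fixed constants above with state-dependent and adaptively re-estimated quantities.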