887 results for network cost models
Abstract:
The identification of biomarkers of vascular cognitive impairment is urgent for its early diagnosis. The aim of this study was to detect and monitor changes in brain structure and connectivity, and to correlate them with the decline in executive function. We examined the feasibility of early diagnostic magnetic resonance imaging (MRI) to predict cognitive impairment before onset in an animal model of chronic hypertension: Spontaneously Hypertensive Rats. Cognitive performance was tested in an operant conditioning paradigm that evaluated learning, memory, and behavioral flexibility skills. Behavioral tests were coupled with longitudinal diffusion-weighted imaging acquired with 126 diffusion gradient directions and 0.3 mm³ isotropic resolution at 10, 14, 18, 22, 26, and 40 weeks after birth. Diffusion-weighted imaging was analyzed in two different ways: by regional characterization of diffusion tensor imaging (DTI) indices, and by assessing changes in structural brain network organization based on Q-Ball tractography. Already at the earliest evaluated time points, DTI scalar maps revealed significant differences in many regions, suggesting loss of integrity in the white and gray matter of spontaneously hypertensive rats when compared to normotensive control rats. In addition, graph theory analysis of the structural brain network demonstrated a significant decrease in hierarchical modularity and in global and local efficiency, with predictive value as shown by a regional three-fold cross-validation study. Moreover, these decreases were significantly correlated with the behavioral performance deficits observed at subsequent time points, suggesting that diffusion-weighted imaging and connectivity studies can unravel neuroimaging alterations even before overt signs of cognitive impairment become apparent.
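The graph measures named above (modularity, global and local efficiency) can be computed from a structural connectivity matrix with standard tools. Below is a minimal sketch using networkx; the random adjacency matrix is a purely illustrative stand-in for a tractography-derived connectome.

```python
# Sketch: graph-theory metrics of a structural brain network (illustrative only).
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
n_regions = 20                        # stand-in for parcellated brain regions
w = rng.random((n_regions, n_regions))
adj = (w + w.T) / 2                   # symmetric "connectome" matrix
adj[adj < 0.7] = 0                    # keep only the strongest connections
np.fill_diagonal(adj, 0)

G = nx.from_numpy_array(adj)

# Global and local efficiency (binary versions provided by networkx).
print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))

# Modularity of a greedy community partition.
parts = community.greedy_modularity_communities(G)
print("modularity:", community.modularity(G, parts))
```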
Abstract:
The spread of automatic meter reading (AMR) and the development of the technology used in the customer's network connection create the basis for a new type of interactive customer interface. For its part, it can enable a more flexible customer connection to the electricity network as well as more real-time and more accurate measurements than at present. On this basis, it is possible to develop various functions that support energy efficiency, and services based on them. The purpose of this work is to study the energy-efficiency-supporting functions enabled by an interactive customer interface. The most promising functions, their profitability, and their potential for improving energy efficiency are analyzed in more detail. In addition, the technology, measurement data, and data transfer they require are examined. Current technology enables the implementation of several different functions that support energy efficiency. This work examined in more detail an energy company's AMR-based balance management and price-based control of small-scale electricity consumers. AMR-based balance management was found to be profitable when correctly targeted. Price-based control of electricity can be profitable when implemented on a large scale, but for individual sites its implementation costs are too high. The biggest problems in implementing functions that support energy efficiency are often the fixed costs and the lack of common interface requirements and operating models. Standardization of products, series production, and technological development can enable a considerable reduction in fixed costs and thereby improve the cost-effectiveness of the functions. By developing new common operating models and products, the available technology can be exploited more effectively. New, faster, and more reliable data transfer technologies on the horizon can also enable more real-time transmission of measurement data and signals, which often improves the efficiency and profitability of the functions.
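The scale argument above can be made concrete with a simple per-site payback comparison, where a fixed implementation cost is amortized against the annual savings it enables. All figures below are invented for illustration:

```python
# Sketch: payback of a price-control function, single site vs. large scale.
# All figures are invented for illustration.
def payback_years(fixed_cost_eur: float, annual_savings_eur: float) -> float:
    """Years until the fixed implementation cost is recovered from savings."""
    return fixed_cost_eur / annual_savings_eur

# Single site: dedicated hardware and integration work dominate the cost.
print(payback_years(fixed_cost_eur=2_000.0, annual_savings_eur=100.0))  # 20 years

# Large scale: standardization and series production cut the per-site cost.
print(payback_years(fixed_cost_eur=300.0, annual_savings_eur=100.0))    # 3 years
```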
Abstract:
The objective of this paper was to show the potential additional insight that results from adding greenhouse gas (GHG) emissions to plant performance evaluation criteria, such as effluent quality (EQI) and operational cost (OCI) indices, when evaluating (plant-wide) control/operational strategies in wastewater treatment plants (WWTPs). The proposed GHG evaluation is based on a set of comprehensive dynamic models that estimate the most significant potential on-site and off-site sources of CO2, CH4, and N2O. The study calculates and discusses the changes in EQI, OCI, and the emission of GHGs as a consequence of varying the following four process variables: (i) the set point of aeration control in the activated sludge section; (ii) the removal efficiency of total suspended solids (TSS) in the primary clarifier; (iii) the temperature in the anaerobic digester (AD); and (iv) the control of the flow of anaerobic digester supernatants coming from sludge treatment. Based upon the assumptions built into the model structures, simulation results highlight the potential undesirable effects of increased GHG production when carrying out local energy optimization of the aeration system in the activated sludge section and energy recovery from the AD. Although off-site CO2 emissions may decrease, the effect is counterbalanced by increased N2O emissions, especially since N2O has a 300-fold stronger greenhouse effect than CO2. The reported results emphasize the importance and usefulness of using multiple evaluation criteria to compare and evaluate (plant-wide) control strategies in a WWTP for more informed operational decision making.
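The trade-off described, where lower off-site CO2 is offset by higher N2O, follows directly from aggregating the gases on a CO2-equivalent basis. A minimal sketch follows; the emission figures are invented, the 300-fold N2O factor is the one quoted above, and the CH4 factor of 25 is a commonly used 100-year value assumed here:

```python
# Sketch: CO2-equivalent aggregation of plant-wide GHG emissions.
# N2O factor as quoted in the abstract; the CH4 value of 25 is a commonly
# used 100-year figure, assumed here for illustration.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 300.0}

def co2_equivalent(emissions_kg_per_day: dict) -> float:
    """Aggregate per-gas emissions (kg/d) into kg CO2-equivalent per day."""
    return sum(GWP[gas] * kg for gas, kg in emissions_kg_per_day.items())

# Hypothetical before/after an aeration set-point change (numbers invented):
before = {"CO2": 10_000.0, "CH4": 50.0, "N2O": 10.0}
after  = {"CO2":  9_500.0, "CH4": 50.0, "N2O": 12.5}

print(co2_equivalent(before))  # 14250.0 kg CO2e/d
print(co2_equivalent(after))   # 14500.0 kg CO2e/d: net increase despite lower CO2
```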
Abstract:
This work presents the use of potentiometric measurements for kinetic studies of the biosorption of Cd2+ ions from aqueous solutions on Eichhornia crassipes roots. The open circuit potential of the Cd/Cd2+ electrode of the first kind was measured during the biosorption process, so the amount of Cd2+ ions accumulated was determined in real time. The data were fitted to different models, with the pseudo-second-order model proving to be the best at describing the data. The advantages and limitations of the proposed methodology relative to the traditional method are discussed.
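For reference, the pseudo-second-order model expresses the adsorbed amount as q(t) = qe² k t / (1 + qe k t), where qe is the equilibrium uptake and k the rate constant. A minimal least-squares fit of this form, with synthetic data in place of the potentiometric measurements, might look like:

```python
# Sketch: fitting the pseudo-second-order biosorption model to uptake data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    """q(t) = qe^2 * k * t / (1 + qe * k * t); qe = equilibrium uptake."""
    return qe**2 * k * t / (1.0 + qe * k * t)

# Synthetic data standing in for real-time Cd2+ uptake measurements (mg/g).
t = np.array([0.0, 5, 10, 20, 40, 60, 90, 120])            # minutes
q = pseudo_second_order(t, qe=12.0, k=0.01)
q += np.random.default_rng(1).normal(0, 0.1, t.size)        # measurement noise

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, q, p0=[10.0, 0.005])
print(f"qe = {qe_fit:.2f} mg/g, k = {k_fit:.4f} g/(mg*min)")
```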
Abstract:
To understand the physicochemical properties and catalytic activity during the pyrolysis of atmospheric petroleum residue, a template-free ZSM-5 zeolite was synthesized by a direct method, without additional seeds or an organic structure-directing agent, and compared with conventionally synthesized ZSM-5. The crystallinities of the two zeolites, evaluated by XRD and FTIR, were quite similar; however, structural analyses using SEM and argon physisorption revealed that the zeolites diverged in particle diameter and in the external surface area of the micropores. The synthesis procedure without a template incorporated additional aluminum into the crystalline network, according to ICP-AES and NH3-TPD experiments. The catalytic pyrolysis performed over the template-free ZSM-5 generated results comparable to those for pyrolysis over the conventional ZSM-5 in terms of hydrocarbon distribution. The selectivity to aromatic compounds was exactly the same for both ZSM-5 zeolites, and these values stand out compared to thermal pyrolysis. The template-free ZSM-5 produced 20% light hydrocarbons (C4-C6), olefins and paraffins of great interest to the petrochemical industry. Therefore, template-free ZSM-5 is promising for industrial use due to its shorter synthesis time, low cost, and significant yield of light hydrocarbons.
Abstract:
Modern sophisticated telecommunication devices require ever more comprehensive testing to ensure quality. The number of test cases needed to ensure sufficient test coverage has increased rapidly, and this demand can no longer be met by manual testing alone. New agile development models also require the execution of all test cases with every iteration. This has led manufacturers to use test automation more than ever to achieve adequate testing coverage and quality. This thesis is separated into three parts. The first part begins with the evolution of cellular networks and then examines software testing, test automation, and the influence of the development model on testing. The second part describes the process used to implement a test automation scheme for functional testing of the LTE core network MME element; agile development models and the Robot Framework test automation tool were used in the implementation. The third part presents two alternative models for integrating this test automation scheme into a continuous integration process. As a result, the test automation scheme for functional testing was implemented, and almost all new functional-level test cases can now be automated with this scheme. In addition, two models for integrating this scheme into a wider continuous integration pipeline were introduced. The shift in testing from a traditional waterfall model to a new agile development based model also proved successful.
Abstract:
In recent years, the vulnerability of the network to natural hazards has been recognized. Moreover, operating at the limits of the network's transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution, and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer-end, could provide better power quality for the customer when compared with the present system. To study the effect of a DC network on the customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model provides knowledge of the average behavior, it is shown that the instantaneous DC voltage ripple should be limited, and guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer-end. The structure of the customer-end is also considered: a comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some typical control system solutions are introduced and their shortcomings are presented. As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed, and it is shown how different control system structures affect the performance. Performance meeting the requirements is achieved using only one output measurement when operating in a rigid network; similar performance can be achieved in a weak grid by adding a DC voltage measurement, and a further improvement is achieved when an adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
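As a rough illustration of the kind of capacitor-sizing guideline mentioned, consider a customer-end inverter feeding a single-phase AC load: the load power pulsates at twice the grid frequency, and the DC-side capacitor must buffer that pulsation. A common rule of thumb, assumed here rather than taken from the thesis's exact design rule, sizes the capacitance as C ≈ P / (2π · 2f · U_dc · ΔU):

```python
# Sketch: rule-of-thumb DC-link capacitor sizing for a customer-end inverter
# feeding a single-phase AC load (illustrative; not the thesis's exact rule).
import math

def dc_link_capacitance(p_load_w: float, f_grid_hz: float,
                        u_dc_v: float, ripple_pp_v: float) -> float:
    """C ~ P / (2*pi*(2f) * U_dc * dU): buffer the double-frequency ripple."""
    return p_load_w / (2 * math.pi * (2 * f_grid_hz) * u_dc_v * ripple_pp_v)

# Example: 10 kW load, 50 Hz grid, 750 V DC link, 20 V peak-to-peak ripple.
c = dc_link_capacitance(10_000, 50, 750, 20)
print(f"C = {c * 1e6:.0f} uF")   # ~1061 uF
```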
Abstract:
The goal of this study is to examine the intelligent home business network in order to determine, using financial statement analysis, which part of the network has the best financial ability to produce new business models and products/services. The 377 limited companies studied are divided into four segments based on their offering in producing intelligent homes: customer service providers, system integrators, subsystem suppliers, and component suppliers. Eight key figures are calculated for each company to obtain a comprehensive view of its financial performance, after which each segment is studied statistically to determine the performance of the segment as a whole. The actual performance differences between the segments are calculated using a multi-criteria decision analysis method in which the key-figure performances are graded and each key figure is weighted according to its importance for the goal of the study. This analysis showed that subsystem suppliers have the best financial performance; system integrators are second, customer service providers third, and component suppliers fourth. None of the segments performed strikingly poorly, and even the component suppliers' performance was reasonable, so it can be said that no part of the intelligent home business network has remarkably inadequate financial abilities to develop new business models and products/services.
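The multi-criteria step described, grading each key figure and weighting it by importance, reduces to a weighted-sum score per segment. A minimal sketch with invented grades and weights (the real study uses eight key figures and its own weighting):

```python
# Sketch: weighted-sum multi-criteria scoring of segments (numbers invented).
weights = {"profitability": 0.3, "liquidity": 0.2,
           "solvency": 0.2, "growth": 0.3}           # weights sum to 1

# Grades (1-5) per key-figure group for each segment, purely illustrative.
grades = {
    "subsystem suppliers":        {"profitability": 5, "liquidity": 4, "solvency": 4, "growth": 4},
    "system integrators":         {"profitability": 4, "liquidity": 4, "solvency": 3, "growth": 4},
    "customer service providers": {"profitability": 3, "liquidity": 3, "solvency": 4, "growth": 3},
    "component suppliers":        {"profitability": 3, "liquidity": 3, "solvency": 3, "growth": 3},
}

scores = {seg: sum(weights[k] * g[k] for k in weights) for seg, g in grades.items()}
for seg, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{seg}: {s:.2f}")
```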
Abstract:
Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category is represented by cardinal preference relations, which are relations that also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision-making methods and in operations research. This thesis aims at showing some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector from a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains a proof that twenty methods already proposed in the literature lead to unsatisfactory results, as they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. The thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons. This section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
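As context for the weight-estimation problem, a reciprocal pairwise comparison matrix A (with a_ji = 1/a_ij) is classically reduced to a priority vector by, for example, the geometric-mean method. The sketch below shows that classical baseline, not the new algorithm proposed in the thesis:

```python
# Sketch: geometric-mean priority vector for a reciprocal pairwise matrix.
# (A classical baseline method, not the thesis's new algorithm.)
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])                 # reciprocal: a_ji = 1 / a_ij

w = np.prod(A, axis=1) ** (1.0 / A.shape[0])    # row geometric means
w /= w.sum()                                    # normalize to a weight vector
print(w)                                        # approx [0.648, 0.230, 0.122]
```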
Abstract:
Cloud computing enables on-demand network access to shared resources (e.g., computation, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing refers both to the applications delivered as services over the Internet and to the hardware and system software in the data centers. Software as a service (SaaS) is part of cloud computing; it is one of the cloud service models. SaaS is software deployed as a hosted service and accessed over the Internet: the consumer uses the provider's applications running in the cloud. SaaS separates the possession and ownership of software from its use. The applications can be accessed from any device through a thin client interface; a typical SaaS application is used with a web browser on monthly pricing. In this thesis, the characteristics of cloud computing and SaaS are presented, and a few implementation platforms for SaaS are discussed. Then, four different SaaS implementation cases and one transformation case are considered. The pros and cons of SaaS are studied based on literature references and an analysis of the SaaS implementations and the transformation case. The analysis is done both from the customer's and the service provider's point of view. In addition, the pros and cons of on-premises software are listed. The purpose of this thesis is to find out when SaaS should be utilized and when it is better to choose traditional on-premises software. The qualities of SaaS bring many benefits both for the customer and for the provider. A customer should utilize SaaS when it provides cost savings, ease of use, and scalability over on-premises software. SaaS is reasonable when the customer does not need tailoring but only a simple, general-purpose service, and the application supports the customer's core business. A provider should utilize SaaS when it offers cost savings, scalability, faster development, and a wider customer base over on-premises software. It is wise to choose SaaS when the application is cheap, is aimed at the mass market, needs frequent updating, needs high-performance computing, needs to store large amounts of data, or derives some other direct value from the cloud infrastructure.
Abstract:
The objective of this study was to verify the potential of SNAP III (Scheduling and Network Analysis Program) as a support tool for harvesting and wood transport planning in Brazil; harvesting subsystem definition and the establishment of a compatible route were assessed. Initially, machine operational and production costs were determined for seven subsystems in the study area, and quality indexes and construction and maintenance costs of forest roads were obtained and used as SNAP III input data. The results showed that three categories of forest road occurred in the study area, main, secondary, and tertiary, which, based on the quality index, allowed mean vehicle speeds of about 41, 30, and 24 km/h and construction costs of about US$ 5,084.30, US$ 2,275.28, and US$ 1,650.00/km, respectively. The SNAP III program, used as a support tool for planning, was found to have high potential in harvesting and wood transport planning. It was capable of efficiently defining, on a technical and economic basis, the harvesting subsystem, the best wood transport route, and the forest roads to be used in each period of the planning horizon.
Abstract:
Systems biology is a new, emerging, and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science, and systems theory. Systems biology, unlike “traditional” biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, and concurrency, among many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies, and the discussion of their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
Abstract:
In any decision making under uncertainty, the goal is mostly to minimize the expected cost, and this minimization is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many practical models contain complex and varying parameters that cannot easily be taken into account by the usual deterministic methods of optimization. Thus, it is very important to look for other methods that can give insight into such models. The Markov chain Monte Carlo (MCMC) method is one of the practical methods that can be used for the optimization of stochastic models under uncertainty. The method is based on simulation, which provides a general methodology applicable to nonlinear and non-Gaussian state models. The MCMC method is important for practical applications because it is a unified estimation procedure that simultaneously estimates both parameters and state variables: it computes the distribution of the state variables and parameters given the measured data. The MCMC method is also faster in terms of computing time when compared to other optimization methods. This thesis discusses the use of MCMC methods for the optimization of stochastic models under uncertainty. It begins with a short discussion of Bayesian inference, MCMC, and stochastic optimization methods; then an example is given of how MCMC can be applied to maximize production at minimum cost in a chemical reaction process. It is observed that the method performs well in optimizing the given cost function, with very high certainty.
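To make the MCMC idea concrete, here is a minimal random-walk Metropolis sampler for a one-dimensional target; the quadratic log-density is a placeholder, not the chemical-reaction model used in the thesis:

```python
# Sketch: random-walk Metropolis sampling of a 1-D target density.
# The target here is a stand-in, not the thesis's reaction model.
import numpy as np

def log_target(theta: float) -> float:
    """Unnormalized log-density, e.g. -cost(theta); quadratic placeholder."""
    return -0.5 * (theta - 2.0) ** 2

rng = np.random.default_rng(42)
theta, chain = 0.0, []
for _ in range(20_000):
    proposal = theta + rng.normal(0, 0.5)             # symmetric random walk
    if np.log(rng.random()) < log_target(proposal) - log_target(theta):
        theta = proposal                               # accept the move
    chain.append(theta)

samples = np.array(chain[5_000:])                      # discard burn-in
print(f"posterior mean ~ {samples.mean():.2f}, sd ~ {samples.std():.2f}")
```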
Abstract:
Quality is not only free; it can be a profit maker. Every dollar that is not spent on doing things wrong becomes a dollar right on the bottom line. The main objective of this thesis is to show how the cost of poor quality can be measured in a theoretically correct way. Different calculation methods for the cost of poor quality are presented and discussed in order to give a comprehensive picture of the measurement process. The second objective is to utilize the knowledge from the literature review and apply it to creating a method for measuring the cost of poor quality in supplier performance rating. The literature review indicates that the P-A-F (prevention-appraisal-failure) model together with the ABC (activity-based costing) methodology provides a means for quality cost calculation: these models answer what should be measured and how the measurement should be carried out. However, when product or service quality costs are incurred because a quality characteristic deviates from its target value, the quality loss function (QLF) seems to be the most appropriate methodology for quality cost calculation. These methodologies were applied in creating a quality cost calculation method for supplier performance ratings.
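The quality loss function referred to here is typically Taguchi's quadratic form, L(y) = k (y - T)², where T is the target value and k a cost coefficient. A minimal sketch with invented numbers:

```python
# Sketch: Taguchi quality loss function L(y) = k * (y - T)^2 (numbers invented).
def quality_loss(y: float, target: float, k: float) -> float:
    """Cost incurred when characteristic y deviates from its target."""
    return k * (y - target) ** 2

# Example: target 10.0 mm; k chosen so a 0.5 mm deviation costs $4.
k = 4.0 / 0.5**2                  # = 16 $/mm^2
for y in (10.0, 10.2, 10.5):
    print(f"y = {y}: loss = ${quality_loss(y, 10.0, k):.2f}")
```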
Abstract:
Artificial Neural Networks (ANNs) are mathematical models capable of estimating non-linear response surfaces, and their advantage is that they can capture responses that traditional statistical models cannot. The objective of this study was to develop and test ANNs for estimating the rainfall erosivity index (EI30) as a function of geographical location for the state of Rio de Janeiro, Brazil, and to generate a thematic visualization map. ANNs using latitude, longitude, and altitude as inputs estimated EI30 acceptably and allowed visualization of the spatial variability of EI30. Thus, ANNs are a potential option for estimating climatic variables as a substitute for traditional interpolation methods.
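A minimal sketch of such a network, mapping (latitude, longitude, altitude) to EI30 with a small multilayer perceptron; the training data below are random placeholders, not the Rio de Janeiro erosivity records:

```python
# Sketch: MLP estimating rainfall erosivity EI30 from (lat, lon, alt).
# Training data are random placeholders, not real erosivity records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = np.column_stack([
    rng.uniform(-23.5, -20.5, 200),    # latitude (deg), Rio de Janeiro range
    rng.uniform(-45.0, -41.0, 200),    # longitude (deg)
    rng.uniform(0, 1200, 200),         # altitude (m)
])
y = rng.uniform(4000, 12000, 200)      # EI30 (MJ mm ha^-1 h^-1 yr^-1), fake

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

site = scaler.transform([[-22.9, -43.2, 30.0]])   # e.g. near the city of Rio
print(f"estimated EI30: {model.predict(site)[0]:.0f}")
```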