935 results for Leontief Input-Output model
Abstract:
A system is said to be "instantaneous" when, for a given constant input, an equilibrium output is obtained after a while. In the meantime, the output is changing from its initial value towards the equilibrium one. This is the transient period of the system, and transients are important features of open-respirometry systems. During transients, one cannot compute the input amplitude directly from the output. The existing models (e.g., first- or second-order dynamics) cannot account for many of the features observed in real open-respirometry systems, such as time lag. Also, these models do not explain what should be expected when a system is speeded up or slowed down. The purpose of the present study was to develop a mechanistic approach to the dynamics of open-respirometry systems, employing basic thermodynamic concepts. It is demonstrated that all the main relevant features of the output dynamics are due to, and can be adequately explained by, a distribution of apparent velocities within the set of molecules travelling along the system. The importance of the rate at which the molecules leave the sensor is explored for the first time. The study addresses the difference between calibrating a system with a continuous input and with a "unit impulse": the former truly reveals the dynamics of the system, while the latter represents the first derivative (in time) of the former and thus cannot adequately be employed to determine the apparent time constant. Also, we demonstrate why the apparent order of the output changes with volume or flow.
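As a point of reference for the step-versus-impulse argument above, the textbook first-order model (a simplification that, per the abstract, real open-respirometry systems depart from) makes the derivative relation explicit; the amplitude A and time constant τ below are generic placeholders:

```latex
% Step response of a first-order system (continuous input switched on at t = 0)
y_{\text{step}}(t) = A\left(1 - e^{-t/\tau}\right)

% The unit-impulse response is its time derivative, as the abstract notes:
y_{\text{imp}}(t) = \frac{\mathrm{d}}{\mathrm{d}t}\, y_{\text{step}}(t) = \frac{A}{\tau}\, e^{-t/\tau}
```

Fitting τ to an impulse response therefore characterizes the decay of the derivative rather than the step dynamics itself, which is consistent with the abstract's caution about impulse-based time-constant determination.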
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities), taking input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a higher level of abstraction, acquires its input data from the first module's output, and executes ontological inference to provide users, activities and their influence in the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated on a challenging new public dataset, CAD-120, achieving accuracies of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus facilitates the transfer of research to industry, a development framework composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces was developed in order to give the framework more usability in the final application.
As a result, human behaviour recognition can help assist people with special needs in areas such as healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
Abstract:
We tested the hypothesis that the inability to increase cardiac output during exercise would explain the decreased rate of oxygen uptake (VO2) in rats with recent-onset, ischemia-induced heart failure. Nine normal control rats and 6 rats with ischemic heart failure were studied. Myocardial infarction was induced by coronary ligation. VO2 was measured during a ramp protocol test on a treadmill using a metabolic mask. Cardiac output was measured with a flow probe placed around the ascending aorta. Left ventricular end-diastolic pressure was higher in ischemic heart failure rats compared with normal control rats (17 ± 0.4 vs 8 ± 0.8 mmHg, P = 0.0001). Resting cardiac index (CI) tended to be lower in ischemic heart failure rats (P = 0.07). Resting heart rate (HR) and stroke volume index (SVI) did not differ significantly between ischemic heart failure rats and normal control rats. Peak VO2 was lower in ischemic heart failure rats (73.72 ± 7.37 vs 109.02 ± 27.87 mL min⁻¹ kg⁻¹, P = 0.005). The VO2 and CI responses during exercise were significantly lower in ischemic heart failure rats than in normal control rats. The temporal response of SVI, but not of HR, was significantly lower in ischemic heart failure rats than in normal control rats. Peak CI, HR, and SVI were lower in ischemic heart failure rats. The reduction in VO2 response during incremental exercise in an ischemic model of heart failure is due to the decreased cardiac output response, largely caused by depressed stroke volume kinetics.
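The causal link drawn here between oxygen uptake and cardiac output is conventionally expressed through the Fick principle (standard physiology, not a formula quoted in the abstract):

```latex
% Fick principle: oxygen uptake equals cardiac output times the
% arteriovenous oxygen content difference
\dot{V}\mathrm{O}_2 = \dot{Q} \times \left(C_a\mathrm{O}_2 - C_{\bar{v}}\mathrm{O}_2\right)
```

With a blunted cardiac output (Q̇) response and limited room to widen the arteriovenous oxygen content difference, VO2 must fall, which is the mechanism the study quantifies.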
Abstract:
The aim of the present study was to determine the ventilation/perfusion ratio that contributes to hypoxemia in pulmonary embolism by analyzing blood gases and volumetric capnography in a model of experimental acute pulmonary embolism. Pulmonary embolization with autologous blood clots was induced in seven pigs weighing 24.00 ± 0.6 kg, anesthetized and mechanically ventilated. Significant changes occurred from baseline to 20 min after embolization, such as a reduction in oxygen partial pressures in arterial blood (from 87.71 ± 8.64 to 39.14 ± 6.77 mmHg) and alveolar air (from 92.97 ± 2.14 to 63.91 ± 8.27 mmHg). The effective alveolar ventilation exhibited a significant reduction (from 199.62 ± 42.01 to 84.34 ± 44.13), consistent with the fall in the alveolar gas volume that effectively participated in gas exchange. The relation between the alveolar ventilation that effectively participated in gas exchange and cardiac output (VAeff/Q ratio) also presented a significant reduction after embolization (from 0.96 ± 0.34 to 0.33 ± 0.17 fraction). The carbon dioxide partial pressure increased significantly in arterial blood (from 37.51 ± 1.71 to 60.76 ± 6.62 mmHg), but decreased significantly in exhaled air at the end of the respiratory cycle (from 35.57 ± 1.22 to 23.15 ± 8.24 mmHg). Exhaled air at the end of the respiratory cycle returned to baseline values 40 min after embolism. The arterial to alveolar carbon dioxide gradient increased significantly (from 1.94 ± 1.36 to 37.61 ± 12.79 mmHg), as did the calculated alveolar (from 56.38 ± 22.47 to 178.09 ± 37.46 mL) and physiological (from 0.37 ± 0.05 to 0.75 ± 0.10 fraction) dead spaces. Based on our data, we conclude that the severe arterial hypoxemia observed in this experimental model may be attributed to the reduction of the VAeff/Q ratio. We were also able to demonstrate that VAeff/Q progressively improves after embolization, a fact attributed to the alveolar ventilation redistribution induced by hypocapnic bronchoconstriction.
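The physiological dead-space fraction reported above is conventionally obtained from volumetric capnography via the Bohr-Enghoff equation (a standard formula; the abstract does not spell out the exact computation used):

```latex
% Bohr-Enghoff dead-space fraction from arterial and mixed-expired
% carbon dioxide partial pressures
\frac{V_D}{V_T} = \frac{P_{a\mathrm{CO}_2} - P_{\bar{E}\mathrm{CO}_2}}{P_{a\mathrm{CO}_2}}
```

The simultaneous rise in arterial CO2 and fall in end-expiratory CO2 reported after embolization drives this fraction upward, matching the increase from 0.37 to 0.75.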
Abstract:
The importance of university-company collaboration has increased during the last decades. The drivers for this are, on the one hand, changes in the business logic of companies and, on the other hand, the decreased state funding of universities. Many companies emphasize joint research with universities as an enabling input to their development processes, which aim at creating new innovations, products and wealth. These factors have changed universities' operations, and universities have adopted several practices of dynamic business organizations, such as strategic planning and methods for monitoring and controlling internal processes. The objective of this thesis is to combine different characteristics of successful university-company partnership and its development. The development process starts with identifying potential partners in the university's interest group, which requires understanding the role of different partners in the innovation system. Next, in order to find a common development basis, the policy and strategy of the partners need to be matched. The third phase is to combine the academic and industrial objectives of a joint project, which is a typical form of university-company collaboration. The optimum is a win-win situation where both partners, universities and companies, gain added value. For the companies, added value typically means access to new research results before their competitors. For the universities, added value means the possibility to carry out high-level scientific work, whose output in the form of published scientific articles is evaluated by the international science community. Because the university-company partnership is often executed through joint projects, the different forms of such projects are discussed in this study. The most challenging form of collaboration is the semi-open project model, which is based not on bilateral activities between universities and companies but on a consortium of several universities, research institutes and companies. The universities and companies are core actors in the innovation system. Thus the discussion of their roles and relations to public operators, such as publicly funded financiers, is important. In the Finnish innovation system, at least the following actors execute strategies and policies: the EU, the Academy of Finland and TEKES. In addition to these, the Strategic Centres for Science, Technology and Innovation, which are owned jointly by companies, universities and research organizations, have a very important role in their fields of business: they transfer research results into commercial actions to generate wealth. The thesis comprises two parts. The first part consists of an overview of the study, including the introduction, literature review, research design, synthesis of findings and conclusions. The second part introduces four original research publications.
Abstract:
Circadian timing is structured in such a way as to receive information from the external and internal environments, and its function is the timing organization of physiological and behavioral processes in a circadian pattern. In mammals, the circadian timing system consists of a group of structures, including the suprachiasmatic nucleus (SCN), the intergeniculate leaflet and the pineal gland. Neuron groups working as a biological pacemaker are found in the SCN, forming a biological master clock. We present here a simple model for the circadian timing system of mammals, which is able to reproduce two fundamental characteristics of biological rhythms: the endogenous generation of pulses and synchronization with the light-dark cycle. In this model, the biological pacemaker of the SCN was modeled as a set of 1000 homogeneously distributed coupled oscillators with long-range coupling forming a spherical lattice. The characteristics of the oscillator set were defined taking into account Kuramoto's oscillator dynamics, but we used a new method for estimating the equilibrium order parameter. The simultaneous activities of the excitatory and inhibitory synapses on the elements of the circadian timing circuit at each instant were modeled by specific equations for synaptic events. All simulation programs were written in Fortran 77, compiled and run on PC DOS computers. Our model exhibited responses in agreement with physiological patterns. The values of the output frequency of the oscillator system (maximal value of 3.9 Hz) were of the order of magnitude of the firing frequencies recorded in suprachiasmatic neurons of rodents in vivo and in vitro (from 1.8 to 5.4 Hz).
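For readers unfamiliar with the Kuramoto dynamics and the "equilibrium order parameter" the abstract refers to, here is a minimal sketch, rendered in Python rather than the authors' Fortran 77, using all-to-all mean-field coupling instead of the paper's spherical-lattice topology; the coupling strength and frequency distribution are illustrative assumptions:

```python
import numpy as np

def kuramoto(n=1000, k=2.0, dt=0.01, steps=5000, seed=0):
    """Euler integration of n all-to-all Kuramoto oscillators.

    Returns the time series of the order parameter r(t); its late-time
    plateau approximates the equilibrium order parameter.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, n)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)   # initial phases
    r_hist = np.empty(steps)
    for t in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter
        r, psi = np.abs(z), np.angle(z)
        # mean-field form of the (K/N) * sum_j sin(theta_j - theta_i) coupling
        theta += dt * (omega + k * r * np.sin(psi - theta))
        r_hist[t] = r
    return r_hist

r = kuramoto()
print(f"equilibrium order parameter ~ {r[-500:].mean():.3f}")
```

An order parameter near 1 indicates the synchronized, clock-like firing regime; near 0, incoherent phases.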
Abstract:
Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized that the length of hospitalization due to pneumonia may depend on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean time of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization at lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
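To make the "two membership functions per input, 16 rules" structure concrete, here is a toy Mamdani-style sketch. The input variables, membership breakpoints, rule-to-output mapping, and output centroids are all invented placeholders, not the paper's calibrated model:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical 'low'/'high' membership pair per input (breakpoints invented)
def low(x, lo, hi):  return trimf(x, lo - (hi - lo), lo, hi)
def high(x, lo, hi): return trimf(x, lo, hi, hi + (hi - lo))

def predict_stay(so2, pm10, o3, temp):
    """Mamdani-style inference over the 2**4 = 16 rule combinations."""
    inputs = [(so2, 0, 40), (pm10, 0, 80), (o3, 0, 100), (temp, 10, 30)]
    out_centers = np.array([3.0, 5.0, 7.0, 9.0])  # 4 output classes (days)
    num = den = 0.0
    for rule in range(16):                        # each bit: low=0 / high=1
        degrees = [high(x, lo, hi) if (rule >> i) & 1 else low(x, lo, hi)
                   for i, (x, lo, hi) in enumerate(inputs)]
        w = min(degrees)                          # fuzzy AND = minimum
        cls = min(bin(rule).count("1"), 3)        # crude rule -> class map
        num += w * out_centers[cls]
        den += w
    return num / den if den else float("nan")    # centroid defuzzification

print(f"predicted mean stay: {predict_stay(25, 60, 70, 22):.1f} days")
```

The weighted-centroid defuzzification at the end is what turns the 16 fired rules into the single mean-length-of-stay estimate the abstract describes.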
Abstract:
Fluid handling systems such as pump and fan systems are found to have significant potential for energy efficiency improvements. To realize this energy-saving potential, easily implementable methods are needed to monitor the system output, because this information is needed both to identify inefficient operation of the fluid handling system and to control the output of the pumping system according to process needs. Model-based pump or fan monitoring methods implemented in variable-speed drives have proven able to give information on the system output without additional metering; however, the current model-based methods may not be usable or sufficiently accurate in the whole operating range of the fluid handling device. To apply model-based system monitoring to a wider selection of systems and to improve the accuracy of the monitoring, this paper proposes a new method for pump and fan output monitoring with variable-speed drives. The method uses a combination of already known operating point estimation methods. Laboratory measurements are used to verify the benefits and applicability of the improved estimation method, and the new method is compared with five previously introduced model-based estimation methods. According to the laboratory measurements, the new estimation method is the most accurate and reliable of the model-based estimation methods.
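One commonly used operating point estimation method of the kind the abstract alludes to is the QP-curve method: the drive's own speed and shaft-power estimates are combined with the pump's nominal characteristic curves, scaled by the affinity laws (Q ~ n, H ~ n², P ~ n³). A minimal sketch, with invented curve data standing in for a manufacturer datasheet:

```python
import numpy as np

# Hypothetical nominal-speed pump curves (datasheet stand-ins)
n_nom = 1450.0                                   # nominal speed, rpm
q_nom = np.array([0, 10, 20, 30, 40])            # flow, l/s
h_nom = np.array([32, 31, 28, 23, 16])           # head, m
p_nom = np.array([4.0, 5.5, 7.0, 8.2, 9.0])      # shaft power, kW

def estimate_operating_point(n, p_shaft):
    """Scale the power curve to speed n with the affinity laws,
    then invert it at the drive's shaft-power estimate."""
    ratio = n / n_nom
    q_scaled = q_nom * ratio                     # Q ~ n
    p_scaled = p_nom * ratio**3                  # P ~ n^3
    q = np.interp(p_shaft, p_scaled, q_scaled)   # flow from power curve
    h = np.interp(q / ratio, q_nom, h_nom) * ratio**2   # H ~ n^2
    return q, h

q, h = estimate_operating_point(n=1200.0, p_shaft=5.0)
print(f"estimated flow {q:.1f} l/s, head {h:.1f} m")
```

The known weakness motivating the paper is visible here: where the power curve is flat, inverting it gives poor flow estimates, which is why combining several estimation methods can widen the usable operating range.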
Abstract:
At present, one of the main concerns of green networking is to minimize the power consumption of network infrastructure. Surveys show that the highest amount of power is consumed by network devices during runtime. To control this power consumption, however, it is important to know which factors have the highest impact. This paper focuses on measuring and modeling the power consumption of an Ethernet switch during runtime, considering various types of input parameters in all possible combinations. For the experiment, three input parameters were chosen: bandwidth, link load, and number of connections. The output to be measured is the power consumption of the Ethernet switch. Because of the uncertain power-consumption pattern of the Ethernet switch, a fully comprehensive experimental evaluation would require an unfeasibly cumbersome experimental phase. Therefore, the design of experiments (DoE) method was applied to obtain adequate information on the effect of each input parameter on the power consumption. The work consists of three parts. In the first part, a test bed is planned with the input parameters and the power consumption of the switch is measured. The second part generates a mathematical model with the help of design-of-experiments tools; this model can be used to estimate power consumption precisely in different scenarios and to pinpoint the parameters with the greatest influence on power consumption. In the last part, the mathematical model is evaluated by comparison with the experimental values.
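A DoE model of the kind described is typically a two-level factorial regression over coded factor levels. The following sketch shows the idea with a 2³ full factorial and invented power readings; the paper's actual design and coefficients are not reproduced here:

```python
import numpy as np

# Hypothetical 2^3 full-factorial design: coded levels (-1, +1) for
# bandwidth, link load and number of connections.
X_coded = np.array([[bw, load, conn]
                    for bw in (-1, 1) for load in (-1, 1) for conn in (-1, 1)])
power = np.array([11.2, 12.9, 11.8, 13.8, 11.5, 13.4, 12.3, 14.9])  # invented, W

def factorial_model(X, y):
    """Fit y = b0 + main effects + two-factor interactions by least squares."""
    bw, load, conn = X.T
    design = np.column_stack([np.ones(len(y)), bw, load, conn,
                              bw * load, bw * conn, load * conn])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

coef = factorial_model(X_coded, power)
names = ["intercept", "bw", "load", "conn", "bw*load", "bw*conn", "load*conn"]
for n, c in zip(names, coef):
    print(f"{n:>10}: {c:+.3f}")   # larger |coef| = stronger influence
```

Ranking the coefficient magnitudes is exactly the "pinpoint the parameters with higher influence" step the abstract describes, while the fitted equation serves as the predictive model evaluated against the measurements.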
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, and so on. One of the keys to succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers are now asking for these high-quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software is functioning correctly; usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented, because non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or missing tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
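To illustrate the core idea of generating tests from a behavioral model (a toy sketch only, not the thesis's UML toolchain), consider a state machine given as a transition map and a generator that achieves transition coverage:

```python
from collections import deque

# Toy behavioural model: {state: {event: next_state}}, an illustrative
# stand-in for the UML state machines the thesis works with.
model = {
    "Idle":    {"insert_coin": "Ready"},
    "Ready":   {"press_start": "Running", "refund": "Idle"},
    "Running": {"finish": "Idle"},
}

def transition_coverage_tests(model, start):
    """One event sequence per transition: a shortest event path (BFS)
    to the transition's source state, followed by the transition itself."""
    paths = {start: []}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for event, nxt in model.get(s, {}).items():
            if nxt not in paths:
                paths[nxt] = paths[s] + [event]
                queue.append(nxt)
    return [paths[s] + [event]
            for s, transitions in model.items() for event in transitions]

for test in transition_coverage_tests(model, "Idle"):
    print(" -> ".join(test))
```

Running many such generated sequences concurrently against the system under test, while measuring response times instead of checking outputs, is the simple form of model-based performance testing the abstract mentions.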
Abstract:
The focus of the research is on the derivation of valid and reliable performance results regarding the establishment and launch of a new full-scale industrial facility, considering the overall current conditions for project realization in and outside of Russia. The study demonstrates the process of developing the new facility concept, followed by performance calculation, comparative analyses, life-cycle simulations, derivation of performance indicators and evaluation of the project's sustainability. To unite and process the full complexity of the input parameters, to capture the interlacing between the project's internal technical and commercial sides on the one hand, and to consider all the specifics of the Russian conditions for doing business on the other, a unique model was developed for the project's performance calculation, simulations and results representation. The complete research incorporates all corresponding data to substantiate the assigned facility's design, sizing and output capacity for high-quality and cost-efficient manufacturing of ferrous pipeline accessories, demonstrates that this project could be successfully realized under current conditions in Russia, and highlights the room for significant performance and sustainability improvements based on the derived KPIs.
Abstract:
The target of this thesis is to propose an approach for modelling drivetrain dynamics in order to subsequently design a vibration control system for a hybrid bus. Two approaches are examined and compared. The first model is obtained by theoretical means: the drivetrain is represented as a system of rotating masses whose motion is described with differential equations. The second model is obtained using a system identification method: a mathematical description of the dynamic behavior of the system is formed based on measured input (torque) and output (speed) data. The two models are then compared and an optimal approach is suggested.
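The rotating-masses formulation typically reduces to a lumped two-mass model: motor and load inertias coupled by a flexible shaft. A minimal sketch under that assumption (inertia, stiffness and damping values are invented, and SciPy is assumed available; this is not the thesis's actual parameterization):

```python
from scipy.integrate import solve_ivp

# Hypothetical two-mass drivetrain: motor inertia J1 and load inertia J2
# coupled by a shaft with stiffness k and damping c (values invented).
J1, J2, k, c = 0.5, 2.0, 800.0, 5.0

def drivetrain(t, x, torque):
    """x = [theta1, omega1, theta2, omega2]; shaft twist couples the masses."""
    th1, w1, th2, w2 = x
    shaft = k * (th1 - th2) + c * (w1 - w2)   # shaft torque
    return [w1, (torque - shaft) / J1, w2, shaft / J2]

# Step-torque response: speed output for a constant 50 Nm input torque,
# mirroring the torque-in / speed-out data used for system identification.
sol = solve_ivp(drivetrain, (0.0, 2.0), [0, 0, 0, 0], args=(50.0,),
                max_step=1e-3)
print(f"load speed after 2 s: {sol.y[3, -1]:.1f} rad/s")
```

The identification route described in the abstract would fit a transfer function to exactly this kind of torque-to-speed record, rather than deriving it from first principles.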
Abstract:
The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item, but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has changed from taking place between companies to competition between networks. Companies have focused on their core functions and outsourced support services, like maintenance, above all to decrease costs. This new phenomenon has led to the increasing formation of business networks, and as a result, a growing need has arisen for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan future maintenance operations for the strategic period of the company or for the whole life-cycle period of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common definition for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network (consisting of customer companies that buy maintenance services and various supplier companies). Each network member is able to input its own cost and profit data related to the maintenance services of one item. As a result, the model calculates the net present values of maintenance costs and profits and presents them from the points of view of all the network members. The thesis indicates that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and they have also been constructed for the needs of a single company, without the network perspective. The developed LCM is a proper tool for decision making on maintenance services in the network environment; it enables analysing the past and making scenarios for the future, and offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step construction process for designing a life-cycle costing model in the network environment. This five-step designing process defines the model components and structure through iteration and the exploitation of user feedback; the same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspective of different maintenance network members and presents established value element lists for the customer and the service provider. These value element lists make value visible in the maintenance operations of a networked business. The LCM, combined with value thinking, promotes the notion of maintenance from a "cost maker" towards a "value creator".
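The per-member net-present-value calculation at the core of such an LCM reduces to discounting each member's yearly cash flows. A minimal sketch; the member names, cash-flow figures and discount rate below are invented examples, not data from the thesis:

```python
# Net-present-value view of one item's maintenance services, per network member.
def npv(cashflows, rate):
    """Net present value of yearly cash flows (index 0 = year 0)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Hypothetical 5-year horizon, euros/year (costs negative, profits positive)
members = {
    "customer":         [-20000, -5000, -5000, -6000, -6000],
    "service_provider": [ 15000,  4000,  4000,  4500,  4500],
    "equipment_vendor": [  8000,   500,   500,   500,   500],
}
for name, flows in members.items():
    print(f"{name:>16}: NPV = {npv(flows, rate=0.08):>10.0f} EUR")
```

Presenting these NPVs side by side is what lets each network member see the same maintenance decision from its own point of view, as the abstract describes.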
Abstract:
Over time, the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008, the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis tries to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that tries to reveal whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation towards riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings of the thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
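For reference, the standard Black–Litterman "master formula" blends market-implied equilibrium returns with investor views, and in the setup described above the views would come from the VAR forecasts. A minimal sketch with toy numbers (the covariance, weights and view are invented; this is the textbook formula, not the thesis's calibration):

```python
import numpy as np

def black_litterman(sigma, w_mkt, P, Q, tau=0.05, delta=2.5, omega=None):
    """Black-Litterman posterior expected returns.

    sigma: asset covariance; w_mkt: market-cap weights;
    P, Q: view pick matrix and view returns (e.g. from a VAR forecast).
    """
    pi = delta * sigma @ w_mkt                    # implied equilibrium returns
    if omega is None:                             # common default: view
        omega = np.diag(np.diag(P @ (tau * sigma) @ P.T))  # uncertainty
    inv_ts = np.linalg.inv(tau * sigma)
    inv_om = np.linalg.inv(omega)
    return np.linalg.inv(inv_ts + P.T @ inv_om @ P) @ (
        inv_ts @ pi + P.T @ inv_om @ Q)

# Toy example: two asset classes, one relative view from a forecasting model
sigma = np.array([[0.04, 0.01], [0.01, 0.02]])
w_mkt = np.array([0.6, 0.4])
P = np.array([[1.0, -1.0]])       # "asset 1 outperforms asset 2 ..."
Q = np.array([0.02])              # "... by 2%"
print(black_litterman(sigma, w_mkt, P, Q))
```

Because the posterior shrinks the views towards equilibrium in proportion to their stated uncertainty, the resulting weight shifts are controlled, which matches the abstract's observation that the B–L model adjusts portfolio weights in a controlled manner.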
Abstract:
Multi-country models have not been very successful in replicating important features of the international transmission of business cycles. Standard models predict cross-country correlations of output and consumption which are respectively too low and too high. In this paper, we build a multi-country model of the business cycle with multiple sectors in order to analyze the role of sectoral shocks in the international transmission of the business cycle. We find that a model with multiple sectors generates a higher cross-country correlation of output than standard one-sector models, and a lower cross-country correlation of consumption. In addition, it predicts cross-country correlations of employment and investment that are closer to the data than the standard model. We also analyze the relative effects of multiple sectors, trade in intermediate goods, imperfect substitution between domestic and foreign goods, home preference, capital adjustment costs, and capital depreciation on the international transmission of the business cycle.