912 results for Work Systems
Abstract:
Polarization curves obtained experimentally for the electro-dissolution of iron in a 1 M H2SO4 solution, using a rotating disc as the working electrode, present a current instability region within the range of applied voltage in which the current is controlled by mass transport in the electrolyte. According to the literature (Barcia et al., 1992), the electro-dissolution process leads to a viscosity gradient at the metal-solution interface, which produces a velocity field quantitatively different from the one developed under uniform viscosity conditions and may affect the stability of the hydrodynamic field. The purpose of this work is to investigate whether a steady viscosity profile, depending on the distance to the electrode surface, affects the stability properties of the classical velocity field near a rotating disc. Two classes of perturbations are considered: perturbations varying monotonically along the radial direction, and perturbations periodically modulated along the radial direction. The results show that the hydrodynamic field is always stable with respect to the first class of perturbations, whereas in the second case the neutral stability curves are modified by the presence of a viscosity gradient, in the sense of reducing the critical Reynolds number beyond which perturbations are amplified. This result supports the hypothesis that the current oscillations observed in the polarization curve may originate from a hydrodynamic instability.
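For reference, the sketch below shows one way a wall-normal viscosity profile can enter the classical von Kármán similarity equations for the rotating-disc base flow. The similarity functions F, G, H and the non-dimensional wall coordinate ζ are the standard ones; the variable-viscosity right-hand sides are an illustrative assumption (terms arising from radial viscosity variation are omitted) and not necessarily the exact system analyzed in the work.

```latex
% Von Karman similarity equations with an assumed wall-normal viscosity
% profile \bar{\nu}(\zeta); \bar{\nu}\equiv 1 recovers the classical case.
\begin{align}
  2F + H' &= 0,\\
  F^2 - G^2 + H F' &= \big(\bar{\nu}(\zeta)\,F'\big)',\\
  2 F G + H G' &= \big(\bar{\nu}(\zeta)\,G'\big)',\\
  F(0) = H(0) = 0,\quad G(0) = 1, &\qquad F,\,G \to 0,\ \ \bar{\nu} \to 1 \ \text{as } \zeta \to \infty .
\end{align}
```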
Abstract:
A complex system is any system that exhibits involved behavior and is hard to model using the reductionist approach of successive subdivision in search of "elementary" constituents. Nature provides plenty of examples of such systems, in fields as diverse as biology, chemistry, geology, physics, fluid mechanics, and engineering. What happens, in general, is that for these systems a large number of both attracting and unstable chaotic sets coexist. As a result, the dynamics can be rich and varied, with many competing behaviors coexisting. In this work, we present and discuss simple mechanical systems that are nice paradigms of complex systems when they are subjected to random external noise. We argue that systems with few degrees of freedom can present the same complex behavior under quite general conditions.
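The specific mechanical systems of the work are not reproduced here, but the general point, that a few-degree-of-freedom system with coexisting attractors behaves in a complex way once random forcing is added, can be illustrated with a standard noisy double-well (Duffing-type) oscillator integrated by Euler-Maruyama; all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy, damped double-well oscillator: a minimal example of a mechanical
# system whose two coexisting attractors compete under random external noise.
dt, n_steps = 1e-3, 200_000
gamma, noise = 0.25, 0.35          # damping and noise intensity (illustrative)
x, v = 1.0, 0.0                    # start in the right-hand well
hops, prev_well = 0, np.sign(1.0)

for _ in range(n_steps):
    a = -gamma * v + x - x**3      # force from the potential V(x) = -x^2/2 + x^4/4
    v += a * dt + noise * np.sqrt(dt) * rng.standard_normal()
    x += v * dt
    well = np.sign(x)
    if well != 0 and well != prev_well:
        hops += 1                  # noise-induced switch between attractors
        prev_well = well

print(f"Observed {hops} hops between coexisting wells in {n_steps * dt:.0f} time units")
```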
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested, and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of the dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the use of likelihood calculations based on Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, for example, to model error covariance matrix parameters.
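The core idea, evaluating the likelihood of parameters from Kalman filter outputs, can be sketched compactly: each filter step contributes a Gaussian prediction-error (innovation) term to the log-likelihood, and the resulting scalar can then be handed to any MCMC sampler. The sketch below is a generic linear Kalman filter with hypothetical matrices, not the large-scale ensemble/variational filter developed in the dissertation.

```python
import numpy as np

def kf_log_likelihood(theta, y, M, H, Q_of_theta, R, x0, P0):
    """Log-likelihood of parameters `theta` from Kalman filter innovations.

    Each step adds -0.5 * (log det S_k + v_k^T S_k^{-1} v_k + m*log(2*pi)),
    where v_k is the innovation and S_k its covariance.
    (Linear toy setup with hypothetical matrices, for illustration only.)
    """
    x, P = x0.copy(), P0.copy()
    Q = Q_of_theta(theta)                  # e.g. model-error covariance parameters
    m = H.shape[0]
    loglik = 0.0
    for yk in y:
        # Prediction step
        x = M @ x
        P = M @ P @ M.T + Q
        # Innovation and its covariance
        v = yk - H @ x
        S = H @ P @ H.T + R
        loglik += -0.5 * (np.linalg.slogdet(S)[1]
                          + v @ np.linalg.solve(S, v)
                          + m * np.log(2.0 * np.pi))
        # Update step
        K = np.linalg.solve(S.T, (P @ H.T).T).T   # Kalman gain P H^T S^{-1}
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
    return loglik

# A Metropolis-Hastings MCMC step would then accept or reject a proposal using
# kf_log_likelihood(theta_proposed, ...) - kf_log_likelihood(theta_current, ...).
```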
Abstract:
Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Demand for energy systems that combine high efficiency with the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and saving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of the exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both the high- and low-temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms of supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operational conditions. The results for the turbine designs indicated that the working fluid selection should not be based only on the thermodynamic analysis, but also requires consideration of the turbine design. The turbines tend to be fast rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams both at high and low temperatures and in both micro- and larger-scale applications.
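As a back-of-the-envelope illustration of the kind of thermodynamic analysis referred to above (not the thesis's own design code), the sketch below evaluates a simple subcritical ORC for a candidate working fluid using CoolProp's PropsSI interface; the fluid name, pressures, and component efficiencies are arbitrary assumptions.

```python
# pip install CoolProp  -- simple subcritical ORC state-point sketch (illustrative values)
from CoolProp.CoolProp import PropsSI

fluid = "R245fa"            # hypothetical working-fluid candidate
p_evap, p_cond = 10e5, 2e5  # Pa, assumed evaporation and condensation pressures
eta_turb, eta_pump = 0.75, 0.70

# State 1: saturated liquid leaving the condenser
h1 = PropsSI("H", "P", p_cond, "Q", 0, fluid)
s1 = PropsSI("S", "P", p_cond, "Q", 0, fluid)

# State 2: after the pump (isentropic enthalpy corrected by pump efficiency)
h2s = PropsSI("H", "P", p_evap, "S", s1, fluid)
h2 = h1 + (h2s - h1) / eta_pump

# State 3: saturated vapor leaving the evaporator
h3 = PropsSI("H", "P", p_evap, "Q", 1, fluid)
s3 = PropsSI("S", "P", p_evap, "Q", 1, fluid)

# State 4: after the turbine (isentropic enthalpy corrected by turbine efficiency)
h4s = PropsSI("H", "P", p_cond, "S", s3, fluid)
h4 = h3 - eta_turb * (h3 - h4s)

w_net = (h3 - h4) - (h2 - h1)        # specific net work, J/kg
q_in = h3 - h2                       # specific heat input, J/kg
print(f"Thermal efficiency: {w_net / q_in:.3f}")
```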
Abstract:
This thesis is a literature study that develops a conceptual model of decision making and decision support in service systems. The study is related to the Ä-Logi (Intelligent Service Logic for Welfare Sector Services) research project, and its objective is to develop the theoretical framework needed to enable further research based on the research project's results and material. The study first examines the concepts of service and service systems, focusing on understanding the characteristics of service systems and their implications for decision making and decision support, to provide the basis for the development of the conceptual model. Based on the identified service system characteristics, an integrated model of service systems is proposed that views service systems through a number of interrelated perspectives, each offering different but complementary implications for the nature of decision making and the requirements for decision support in service systems. Based on the model, it is proposed that different types of decision making contexts can be identified in service systems, which may be dominated by different types of decision making processes and where different types of decision support may be required, depending on the characteristics of the decision making context and its decision making processes. The proposed conceptual model of decision making and decision support in service systems examines the characteristics of decision making contexts and processes in service systems, and their typical requirements for decision support. First, a characterization of different types of decision making contexts in service systems is proposed, based on the Cynefin framework and the identified service system characteristics. Second, the nature of decision making processes in service systems is proposed to be dual, with both rational and naturalistic decision making processes existing in service systems and having important, complementary roles in decision making. Finally, a characterization of typical requirements for decision support in service systems is proposed that examines the decision support requirements associated with different types of decision making processes in characteristically different types of decision making contexts. It is proposed that decision support for decision making processes based on rational decision making can rely on organizational decision support models, while decision support for decision making processes based on naturalistic decision making should rely on supporting the decision makers' situation awareness and on facilitating the development of their tacit knowledge of the system and its tasks. Based on the proposed conceptual model, a further research process is proposed. The study additionally provides a number of new perspectives on the characteristics of service systems and on the nature of decision making and the requirements for decision support in service systems, which can provide a basis for further discussion and research and support practice alike.
Abstract:
Since cellulose is a linear macromolecule, it can be used as a material for regenerated cellulose fiber products, e.g. in textile fiber or film manufacturing. Cellulose is not thermoformable, so the manufacturing of these regenerated fibers is mainly possible through dissolution processes preceding the regeneration process. However, the dissolution of cellulose in common solvents is hindered by the inter- and intramolecular hydrogen bonds in the cellulose chains and by its relatively high crystallinity. Interestingly, at subzero temperatures relatively dilute sodium hydroxide solutions can be used to dissolve cellulose to a certain extent. The objective of this work was to investigate the factors that govern the solubility of cellulose in aqueous NaOH and the stability of the resulting solutions. Cellulose-NaOH solutions have a tendency to form a gel over time and at elevated temperature, which creates challenges for further processing. The main target of this work was to achieve high solubility of cellulose in aqueous NaOH without excessively compromising the solution stability. The literature survey gives an overview of cellulose dissolution and reviews the factors contributing to the solubility and solution properties of cellulose in aqueous NaOH. Furthermore, the concept of solution rheology is discussed. The experimental part focuses on the characterization of the materials used and the properties of the prepared solutions, concentrating mainly on cellulose solubility and solution stability.
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense for the interaction between the human and the manipulator, and ideal position control for the interaction between the manipulator and the task environment. The proposed method is a universal technique independent of the actual control algorithm and can be applied together with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A particle swarm optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has a haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method offers several improvements over traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems, with the aim of avoiding dismantling of the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside inaccessible environments, especially harsh ones such as radioactive areas. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
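The "test each candidate on a simulator before applying it to the real process" idea can be sketched with a plain particle swarm tuning PID gains on a toy plant model; the bacterial-foraging hybrid and the MCMC-built hydraulic simulator of the thesis are not reproduced, and all plant parameters and gain ranges below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_step_response(gains, n=400, dt=0.005):
    """Toy second-order plant under PID control; returns a tracking cost.
    Stands in for the real-time simulator that vets each candidate."""
    kp, ki, kd = gains
    x = v = integ = prev_e = 0.0
    cost = 0.0
    for _ in range(n):
        e = 1.0 - x                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        a = -2.0 * v - 5.0 * x + u       # assumed plant dynamics
        v += a * dt
        x += v * dt
        cost += abs(e) * dt
    return cost

# Plain particle swarm over the PID gains (illustrative; no bacterial-foraging term)
n_particles, n_iter = 20, 40
pos = rng.uniform(0.0, 50.0, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([simulate_step_response(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 50.0)
    cost = np.array([simulate_step_response(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("Gains vetted on the simulator, ready for the real process:", gbest)
```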
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers are asking for these high-quality software products at an ever-increasing pace, leaving companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those that have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality has been implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than its output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or absent tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
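A minimal sketch of one basic idea behind model-based testing is shown below: derive test sequences from a behavioral model so that every transition is exercised at least once. A hand-written state machine stands in for a UML model here; the thesis's actual UML tool chain and non-functional extensions are not reproduced.

```python
from collections import deque

# A behavioral model (stand-in for a UML state machine): state -> {event: next_state}
model = {
    "Idle":    {"insertCoin": "Ready"},
    "Ready":   {"pressStart": "Running", "refund": "Idle"},
    "Running": {"finish": "Idle", "abort": "Ready"},
}

def transition_covering_tests(model, start, max_len=10):
    """Breadth-first generation of event sequences until every transition of
    the model has been exercised at least once (all-transitions coverage)."""
    uncovered = {(s, e) for s, events in model.items() for e in events}
    tests, queue, seen = [], deque([(start, [])]), set()
    while uncovered and queue:
        state, path = queue.popleft()
        for event, nxt in model.get(state, {}).items():
            new_path = path + [event]
            key = tuple(new_path)
            if key in seen or len(new_path) > max_len:
                continue
            seen.add(key)
            if (state, event) in uncovered:
                uncovered.discard((state, event))
                tests.append(new_path)   # this sequence covers a new transition
            queue.append((nxt, new_path))
    return tests

for i, seq in enumerate(transition_covering_tests(model, "Idle"), 1):
    print(f"test {i}: {' -> '.join(seq)}")
```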
Abstract:
In recent years, technological advancements in microelectronics and sensor technologies have revolutionized the field of electrical engineering. New manufacturing techniques have enabled a higher level of integration that combines sensors and electronics into compact and inexpensive systems. Previously, the challenge in measurements was to understand the operation of the electronics and sensors, but this has now changed: nowadays, the challenge in measurement instrumentation lies in mastering the whole system, not just the electronics. To address this issue, this doctoral dissertation studies whether it is beneficial to consider a measurement system as a whole, from the physical phenomenon to the digital recording device, where each part of the measurement system affects the system performance, rather than as a collection of small independent parts, such as a sensor or an amplifier, that could be designed separately. The objective of this doctoral dissertation is to describe in depth the development of the measurement system, taking into account the challenges posed by the electrical and mechanical requirements and the measurement environment. The work is carried out as an empirical case study in two example applications, both intended for scientific studies: a light-sensitive biological sensor used in imaging and a gas electron multiplier detector for particle physics. The study showed that in these two cases a number of different parts of the measurement system interacted with each other. Without considering these interactions, the reliability of the measurement may be compromised, which may lead to wrong conclusions about the measurement. For this reason, it is beneficial to conceptualize the measurement system as a whole, from the physical phenomenon to the digital recording device, where each part of the measurement system affects the system performance. The results serve as examples of how a measurement system can be successfully constructed to support the study of sensors and electronics.
Abstract:
This work investigated fructooligosaccharide (FOS) synthesis by immobilized inulinase obtained from Kluyveromyces marxianus NRRL Y-7571 in aqueous and aqueous-organic systems using sucrose as the substrate. A sequential experimental design strategy was used to optimize the FOS conversion in both systems. For the aqueous-organic system, a 2^(6-2) fractional design was carried out to evaluate the effects of temperature, sucrose concentration, pH, aqueous/organic ratio, enzyme activity, and polyethylene glycol concentration. For the aqueous system, a central composite design for the enzyme activity and the sucrose concentration was carried out. The highest fructooligosaccharide yield (YFOS) for the aqueous-organic system was 18.2 ± 0.9 wt%, at 40 ºC, pH 5.0, sucrose concentration of 60% (w/w), enzyme activity of 4 U mL-1, and aqueous/organic ratio of 25/75 wt%. The highest YFOS for the aqueous system was 14.6 ± 0.9 wt% at 40 ºC, pH 5.0, sucrose concentration of 60 wt%, and enzyme activity of 4.0 U mL-1.
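A central composite design for two factors amounts to fitting a second-order response surface to the measured yields and locating its optimum; the minimal, generic sketch of that fitting step below uses made-up design points and yields, not the measurements reported in the work.

```python
import numpy as np

# Hypothetical design points: (enzyme activity U/mL, sucrose % w/w) -> FOS yield wt%
X = np.array([[2, 40], [2, 60], [4, 40], [4, 60], [3, 50],
              [3, 50], [1.6, 50], [4.4, 50], [3, 36], [3, 64]], dtype=float)
y = np.array([10.1, 12.3, 12.0, 14.5, 13.8, 13.6, 9.5, 13.9, 11.2, 13.0])

# Second-order model: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_yield(activity, sucrose):
    return coef @ np.array([1.0, activity, sucrose,
                            activity**2, sucrose**2, activity * sucrose])

# Scan the design region for the predicted optimum of the fitted surface
grid = [(a, s) for a in np.linspace(1.6, 4.4, 30) for s in np.linspace(36, 64, 30)]
best = max(grid, key=lambda p: predicted_yield(*p))
print("Predicted optimum (activity, sucrose):", best)
```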
Abstract:
Many-core systems offer great potential for application performance with their massively parallel structure. Such systems are currently being integrated into most parts of daily life, from high-end server farms to desktop systems, laptops, and mobile devices. Yet these systems face increasing challenges: high temperatures causing physical damage, high electricity bills both for servers and for individual users, unpleasant noise levels due to active cooling, and unrealistic battery drainage in mobile devices; all factors caused directly by poor energy efficiency. Power management has traditionally been an area of research providing hardware solutions or runtime power management in the operating system in the form of frequency governors. Energy awareness in application software is currently non-existent. This means that applications are not involved in the power management decisions, nor does any interface between the applications and the runtime system exist to provide such facilities. Power management in the operating system is therefore performed purely on the basis of indirect implications of software execution, usually referred to as the workload. This often results in over-allocation of resources and hence wasted power. This thesis discusses power management strategies in many-core systems in the form of increased application software awareness of energy efficiency. The presented approach allows meta-data descriptions in the applications and is manifested in two design recommendations: 1) energy-aware mapping and 2) energy-aware execution, which allow the applications to directly influence the power management decisions. The recommendations eliminate over-allocation of resources and increase the energy efficiency of the computing system. Both recommendations are fully supported by a provided interface in combination with a novel power management runtime system called Bricktop. The work presented in this thesis allows both new and legacy software to execute with the most energy-efficient mapping on a many-core CPU and at the most energy-efficient performance level. A set of case study examples demonstrates real-world energy savings in a wide range of applications without performance degradation.
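Bricktop's actual interface is not reproduced here; the sketch below only illustrates the general idea behind the two recommendations, namely that an application declares meta-data about its needs and a hypothetical runtime uses it to pick a core allocation (energy-aware mapping) and a frequency level (energy-aware execution) instead of over-allocating. All names, numbers, and power figures are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AppMetadata:
    """Hypothetical application-declared meta-data (not Bricktop's real API)."""
    parallelism: int        # how many cores the application can actually exploit
    deadline_ms: float      # soft performance requirement
    workload_mops: float    # estimated work in millions of operations

@dataclass
class FreqLevel:
    mhz: int
    power_w_per_core: float

FREQ_LEVELS = [FreqLevel(600, 0.4), FreqLevel(1200, 1.1), FreqLevel(1800, 2.3)]

def energy_aware_map_and_execute(meta: AppMetadata, free_cores: int):
    """Pick the smallest core count and lowest frequency that still meet the deadline."""
    cores = min(meta.parallelism, free_cores)          # energy-aware mapping
    for level in FREQ_LEVELS:                          # energy-aware execution
        est_runtime_ms = meta.workload_mops / (cores * level.mhz) * 1e3
        if est_runtime_ms <= meta.deadline_ms:
            energy_j = cores * level.power_w_per_core * est_runtime_ms / 1e3
            return cores, level.mhz, energy_j
    # Fall back to maximum performance if the deadline cannot be met
    level = FREQ_LEVELS[-1]
    est_runtime_ms = meta.workload_mops / (cores * level.mhz) * 1e3
    return cores, level.mhz, cores * level.power_w_per_core * est_runtime_ms / 1e3

print(energy_aware_map_and_execute(
    AppMetadata(parallelism=4, deadline_ms=50, workload_mops=60), free_cores=16))
```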
Abstract:
Mobile malware is increasing with the growing number of mobile users. Mobile malware can perform several operations that lead to cybersecurity threats, such as stealing financial or personal information, installing malicious applications, sending premium SMS, creating backdoors, keylogging, and crypto-ransomware attacks. Despite the fact that many illegitimate applications are available on app stores, most mobile users remain careless about the security of their mobile devices and become potential victims of these threats. Previous studies have shown that not every antivirus is capable of detecting all threats, because mobile malware uses advanced techniques to avoid detection. A network-based IDS at the operator side will bring an extra layer of security to the subscribers and can detect many advanced threats by analyzing their traffic patterns. Machine learning (ML) will give these systems the ability to detect unknown threats for which signatures are not yet known. This research is focused on the evaluation of machine learning classifiers in network-based intrusion detection systems for mobile networks. In this study, different techniques of network-based intrusion detection, with their advantages and disadvantages, and the state of the art in hybrid solutions are discussed. Finally, an ML-based NIDS is proposed that works as a subsystem to the network-based IDS deployed by mobile operators and that can help in detecting unknown threats and reducing false positives. In this research, several ML classifiers were implemented and evaluated. The study focuses on Android-based malware, as Android is the most popular OS among users and hence the most targeted by cyber criminals. Supervised ML classifiers were built using a dataset that contained labeled instances of relevant features. These features were extracted from the traffic generated by samples of several malware families and benign applications. The classifiers were able to detect malicious traffic patterns with a TPR of up to 99.6% in cross-validation tests. Several experiments were also conducted to detect unknown malware traffic and to assess false positives; the classifiers were able to detect unknown threats with an accuracy of 97.5%. These classifiers could be integrated with current NIDSs, which use signature-based, statistical, or knowledge-based techniques to detect malicious traffic. A technique to integrate the output of the ML classifier with a traditional NIDS is discussed and proposed for future work.
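A generic sketch of the evaluation workflow described above (training a supervised classifier on labeled flow features and reporting cross-validated detection rates) is shown below. Synthetic data stand in for the thesis dataset, and the random forest is just one of several classifier families one might compare.

```python
# pip install scikit-learn  -- generic sketch of evaluating a supervised flow classifier
# (synthetic stand-in data, not the dataset used in the study)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for per-flow features (duration, bytes, packets, ports, ...)
# with label 1 = malicious traffic, 0 = benign traffic.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# 10-fold cross-validated predictions, then TPR / FPR from the confusion matrix
pred = cross_val_predict(clf, X, y, cv=10)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"TPR = {tp / (tp + fn):.3f}, FPR = {fp / (fp + tn):.3f}")
```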
Abstract:
Electrochromism, the phenomenon of reversible color change induced by a small electric charge, forms the basis for the operation of several devices, including mirrors, displays, and smart windows. Although the history of electrochromism dates back to the 19th century, it was only in the last quarter of the 20th century that it had considerable scientific and technological impact. The commercial applications of electrochromics (ECs) are rather limited, apart from the top-selling EC anti-glare mirrors by Gentex Corporation and the airplane windows by Boeing, which were a huge commercial success and exposed the potential of EC materials for the future glass industry. It is evident from their patents that viologens (salts of 4,4′-bipyridinium) were the major active EC component in most of these marketed devices, which motivates this thesis's focus on EC viologens. Among the family of electrochromes, viologens have been utilized in electrochromic devices (ECDs) for a while, owing to the intensely colored radical cation they form upon application of a small cathodic potential. Viologens can be synthesized as oligomers, in polymeric form, or as functionalities on conjugated polymers. In this thesis, polyviologens (PVs) were synthesized starting from cyanopyridinium (CNP) based monomer precursors. Reductive coupling of cross-connected cyano groups yields viologen and polyviologen under successive electropolymerization, using for example the cyclic voltammetry (CV) technique. For further development, a polyviologen-graphene composite system was fabricated, focusing on the stability of the PV electrochrome without sacrificing its excellent EC properties. The high electrical conductivity and high surface area offered by the graphene sheets, together with their non-covalent interactions and synergism with PV, significantly improved the electrochrome durability in the composite matrix. The work then continued with the development of a CNP-functionalized thiophene derivative and its copolymer for possible utilization of viologen in the copolymer blend. Furthermore, a viologen-functionalized thiophene derivative was synthesized and electropolymerized in order to explore enhancement of the EC contrast and the overall EC performance. The findings suggest that such electroactive viologen/polyviologen systems, their nanostructured composite films, and viologen-functionalized conjugated polymers can potentially be applied as active EC materials in future ECDs aiming at durable device performance.