Abstract:
Speed, uncertainty, and complexity are increasing in the business world all the time. When knowledge and skills quickly become irrelevant, new challenges are set for information technology (IT) education. Meta-learning skills (learning how to learn rapidly) and innovation skills have become more essential than single technologies or other specific issues. The drastic changes in the information and communications technology (ICT) sector have created a need to reconsider how IT Bachelor education in Universities of Applied Sciences should be organized and delivered to cope with the change. The objective of the study was to evaluate how a new approach to IT Bachelor education, the ICT entrepreneurship study path (ICT-ESP), fits IT Bachelor education in a Finnish University of Applied Sciences. This kind of educational arrangement has not been employed elsewhere in the context of IT Bachelor education. The study presents the results of a four-year period during which IT Bachelor education was renewed in a Finnish University of Applied Sciences. The learning environment was organized into an ICT-ESP based on Nonaka's knowledge theory and Kolb's experiential learning. The IT students who studied in the ICT-ESP established a cooperative and learned ICT by running their cooperative at the University of Applied Sciences. The students (called team entrepreneurs) studied by reading theory in books and other sources of explicit information, doing projects for their customers, and reflecting in training sessions on what was learnt by doing and by studying the literature. Action research was used as the research strategy in this study. Empirical data were collected via theme-based interviews, direct observation, and participative observation. The grounded theory method was utilized in the data analysis, and theoretical sampling was used to guide the data collection. The context of the University of Applied Sciences provided a good basis for fostering team entrepreneurship. However, the results showed that the employment of the ICT-ESP did not fit into IT Bachelor education well enough. The ICT-ESP was cognitively too demanding for the team entrepreneurs because they had two different sets of rules to follow in their studies. The conventional courses consumed a great deal of energy that should have been spent on professional development in the ICT-ESP. The range of competencies needed in the ICT-ESP for professional development was greater than that needed in any other way of studying. The team entrepreneurs needed to develop skills in ICT, leadership and self-leadership, team development, and entrepreneurship. The entrepreneurship skills included marketing and sales, brand development, productization, and business administration. Considering the three years the team entrepreneurs spent in the ICT-ESP, the challenges were remarkable. Changes to the organization of IT Bachelor education are also suggested in the study. First, it should be acknowledged that the ICT-ESP produces IT Bachelors with a different set of competencies compared to the conventional way of educating IT Bachelors. Second, the number of courses on general topics in mathematics, physics, and languages for team entrepreneurs studying in the ICT-ESP should be reconsidered, and the conventional course-based teaching of these topics should be reorganized to support the team coaching process of the team entrepreneurs with their practice-oriented projects. Third, upcoming team entrepreneurs should be equipped with relevant information about the ICT-ESP and what studying as a team entrepreneur would require in practice. Finally, upcoming team entrepreneurs should be carefully selected before they start in the ICT-ESP, making it possible to screen out solo players and those with an overly romantic view of being a team entrepreneur. The results of the study answered the original research questions, and the objectives of the study were met. Even though the IT degree programme was terminated during the research process, the amount of qualitative data gathered made it possible to justify the interpretations made.
Abstract:
The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space is generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results of a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods on an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed by flexible serial links and joints. Moreover, by combining the VJM variant with the virtual work principle, a system-wide analytical stiffness model can easily be obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle, and numerical results of its stiffness performance are reported. For a kinematically redundant robot, the system-wide stiffness performance is taken as the constraint in the joint trajectory planning when generating a set of feasible joint trajectories for a prescribed end-effector trajectory. For a prescribed location of the end-effector, the robot permits an infinite number of inverse kinematic solutions, which consequently yield infinitely many kinds of stiffness performance. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints are taken as input variables, was employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), for a prescribed trajectory of its end-effector. The numerical results show that the joint trajectories generated based on the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with respect to the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine caused by the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy constructed on the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control to vibration suppression has been validated on a parallel manipulator in a simulation environment.
An experimental study of such feedforward control is also included in the dissertation. The difficulty of modelling the actual system due to unknown components in its dynamics is noted. As a solution, a back-propagation (BP) neural network is proposed for identification of the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. The BP neural network and the modified Levenberg-Marquardt algorithm are validated by a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation of a parallel manipulator, which represent three different application aspects of the method.
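To illustrate the training idea, the following minimal Python sketch fits a small feedforward network to synthetic input-output data of an unknown friction-like dynamics component with a standard Levenberg-Marquardt least-squares solver; the data, target shape, and network size are illustrative assumptions, not the dissertation's modified algorithm.

```python
# Minimal sketch: identify an unknown dynamics component (here a synthetic
# friction-like curve) with a one-hidden-layer network trained by a
# Levenberg-Marquardt solver. Data and network size are assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic "experimental" input-output data: velocity -> torque.
v = np.linspace(-2.0, 2.0, 200)
tau = 0.8 * np.tanh(8.0 * v) + 0.3 * v + 0.02 * rng.standard_normal(v.size)

H = 6  # hidden neurons

def net(w, x):
    # Unpack weights: input weights, biases, output weights, output bias.
    w1, b1, w2, b2 = w[:H], w[H:2*H], w[2*H:3*H], w[3*H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def residuals(w):
    return net(w, v) - tau

w0 = 0.1 * rng.standard_normal(3 * H + 1)
sol = least_squares(residuals, w0, method="lm")  # Levenberg-Marquardt
print("RMS fit error:", np.sqrt(np.mean(sol.fun ** 2)))
```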
Abstract:
Demand for energy systems with high efficiency and the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and saving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of the exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both the high- and low-temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms in supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operating conditions. The results for the turbine designs indicated that the working fluid selection should not be based on thermodynamic analysis alone but also requires consideration of the turbine design. The turbines tend to be fast-rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams at both high and low temperatures and in both micro- and larger-scale applications.
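For reference, the first-law thermal efficiency at the heart of such cycle analyses follows directly from the state-point enthalpies; the sketch below assumes the conventional state numbering (1: pump inlet, 2: evaporator inlet, 3: turbine inlet, 4: condenser inlet) rather than the thesis' own notation.

```latex
% Rankine/ORC thermal efficiency from the specific enthalpies h_i:
\eta_{\mathrm{th}} = \frac{w_{\mathrm{turbine}} - w_{\mathrm{pump}}}{q_{\mathrm{in}}}
                   = \frac{(h_3 - h_4) - (h_2 - h_1)}{h_3 - h_2}
```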
Abstract:
The most common reason for a low-voltage induction motor breakdown is a bearing failure. Along with the increasing popularity of modern frequency converters, bearing failures have become the most important motor fault type. Conditions in which bearing currents are likely to occur are generated as a side effect of fast du/dt switching transients. Once present, different types of bearing currents can accelerate the mechanical wear of bearings by causing deformation of metal parts in the bearing and degradation of the lubricating oil properties. The bearing current phenomena are well known, and several bearing current measurement and mitigation methods have been proposed. Nevertheless, in order to develop more feasible methods to measure and mitigate bearing currents, better knowledge of the phenomena is required. When mechanical wear is caused by bearing currents, the resulting aging impact has to be monitored and dealt with. Moreover, because of the stepwise aging mechanism, periodically executed condition monitoring measurements have been found ineffective. Thus, there is a need for feasible bearing current measurement methods that can be applied in parallel with the normal operation of series-production drive systems. In order to meet the objectives of feasibility and applicability, nonintrusive measurement methods are preferred. In this doctoral dissertation, the characteristics and conditions of bearings that are related to the occurrence of different kinds of bearing currents are studied. Further, the study introduces some nonintrusive radio-frequency-signal-based approaches to detect and measure parameters that are associated with the accelerated bearing wear caused by bearing currents.
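As background, the share of the common-mode voltage that appears across the bearing is often estimated in the bearing-current literature with a capacitive voltage divider, the bearing voltage ratio (BVR); this standard relation is given here for orientation and is not quoted from the dissertation.

```latex
% BVR: fraction of the common-mode voltage v_com across the bearing,
% with C_wr the winding-to-rotor, C_rf the rotor-to-frame, and
% C_b the bearing capacitance:
\mathrm{BVR} = \frac{v_{\mathrm{bearing}}}{v_{\mathrm{com}}}
             = \frac{C_{\mathrm{wr}}}{C_{\mathrm{wr}} + C_{\mathrm{rf}} + C_{\mathrm{b}}}
```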
Abstract:
This thesis is devoted to the study of the hyperfine properties of iron-based superconductors and the synthesis of these compounds and related phases. During this work, polycrystalline chalcogenide samples with 1:1 stoichiometry (FeTe1-xSx, FeSe1-x) and pnictide samples with 1:2:2 stoichiometry (BaFe2(As1-xPx)2, EuFe2(As1-xPx)2) were synthesized by solid-state reaction methods in vacuum and in a protective Ar atmosphere. In several cases, post-annealing in an oxygen atmosphere was employed. The purity and superconducting properties of the obtained samples were checked with X-ray diffraction, SQUID, and resistivity measurements. Mössbauer spectroscopy was used to study the magnetic properties of the investigated samples. Using low-temperature measurements around Tc and various values of the source velocity, the hyperfine interactions were obtained, and the magnetic and structural properties in the normal and superconducting states could be studied. Mössbauer measurements together with XRD characterization were also used for the detection of impurity phases. DFT calculations were used for the theoretical study of the Mössbauer parameters of the pnictide samples BaFe2(As1-xPx)2 and EuFe2(As1-xPx)2.
Abstract:
Virtual environments and real-time simulators (VERS) are becoming increasingly important tools in the research and development (R&D) process of non-road mobile machinery (NRMM). Virtual prototyping techniques enable faster and more cost-efficient development of machines compared to the use of real-life prototypes. High energy efficiency has become an important topic in the world of NRMM because of environmental and economic demands. The objective of this thesis is to develop VERS-based methods for the research and development of NRMM. A process using VERS for assessing the effects of human operators on the life-cycle efficiency of NRMM was developed. Human-in-the-loop simulations were run using an underground mining loader to study the developed process. The simulations were run in the virtual environment of the Laboratory of Intelligent Machines of Lappeenranta University of Technology. A physically adequate real-time simulation model of NRMM was shown to be reliable and cost-effective for testing hardware components by means of hardware-in-the-loop (HIL) simulations. A control interface connecting an integrated electro-hydraulic energy converter (IEHEC) with a virtual simulation model of a log crane was developed. The IEHEC consists of a hydraulic pump-motor and an integrated electrical permanent magnet synchronous motor-generator. The results show that state-of-the-art real-time NRMM simulators are capable of resolving factors related to the energy consumption and productivity of NRMM. A significant variation between the test drivers was found. The results show that VERS can be used for assessing human effects on the life-cycle efficiency of NRMM. Comparing the HIL simulation responses with those achieved with a conventional simulation method demonstrates the advantages and drawbacks of the various possible interfaces between the simulator and the hardware part of the system under study. Novel ideas for arranging the interface were successfully tested and compared with the more traditional one. The proposed process for assessing the effects of operators on life-cycle efficiency will be applied to a wider group of operators in the future. The driving styles of the operators can then be analysed statistically from a sufficiently large result data set. The statistical analysis can find the most life-cycle-efficient driving style for a specific environment and machinery. The proposed control interface for HIL simulation needs to be studied further. The robustness and the adaptation of the interface in different situations must be verified. Future work will also include studying the suitability of the IEHEC for different working machines using the proposed HIL simulation method.
Abstract:
The aim of the work presented in this study was to demonstrate the wide applicability of a single-label quenching resonance energy transfer (QRET) assay based on time-resolved lanthanide luminescence. QRET technology is a proximity-dependent method utilizing a weak and nonspecific interaction between a soluble quencher molecule and a lanthanide chelate. The interaction between the quencher and the chelate is lost when the ligand binds to its target molecule. The properties of QRET technology are especially useful in high-throughput screening (HTS) assays. At the beginning of this study, only an end-point type of QRET technology was available. To enable efficient study of enzymatic reactions, the QRET technology was further developed to enable the measurement of reaction kinetics. This was first performed using a protein-deoxyribonucleic acid (DNA) interaction as a tool to monitor reaction kinetics. Later, QRET was used to study nucleotide exchange reaction kinetics and mutation-induced effects on small GTPase activity. Small GTPases act as molecular switches shifting between an active GTP-bound and an inactive GDP-bound conformation. The possibility of monitoring reaction kinetics using the QRET technology was evaluated using two homogeneous assays: a direct growth factor detection assay and a nucleotide exchange monitoring assay with small GTPases. To complete the list, a heterogeneous assay for monitoring GTP hydrolysis by small GTPases was developed. All these small GTPase assays could be performed using nanomolar protein concentrations without GTPase pretreatment. The results from these studies demonstrated that QRET technology can be used to monitor reaction kinetics, which further enables the same method to be used for screening.
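For orientation, real-time kinetic read-outs of this kind are commonly fitted with a single-exponential model; the relation below is a generic textbook form, not an equation quoted from the thesis.

```latex
% Observed signal relaxing from the initial value S_0 to the plateau
% S_inf with observed rate constant k_obs:
S(t) = S_{\infty} + \left(S_0 - S_{\infty}\right) e^{-k_{\mathrm{obs}}\, t}
```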
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated, and initially evaluated. The differential evolution classifier is a nearest-prototype-vector-based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during its training phase. The differential evolution classifier, which applies an individually optimized distance measure for each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest-prototype-vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for a given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previously introduced differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
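To make the core idea concrete, here is a minimal Python sketch of nearest-prototype classification with the prototype vectors optimized by differential evolution; the pool-of-distances selection and OWA/GOWA aggregation described above are omitted, and the data set, bounds, and solver settings are illustrative assumptions rather than the thesis' implementation.

```python
# Minimal sketch: nearest-prototype classifier whose prototype vectors are
# found by differential evolution (Euclidean distance only; the thesis'
# pool of distances and aggregation steps are omitted).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize the features
n_classes, n_feat = len(np.unique(y)), X.shape[1]

def error_rate(w):
    # w encodes one prototype vector per class, flattened.
    protos = w.reshape(n_classes, n_feat)
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return np.mean(np.argmin(d, axis=1) != y)  # training misclassification

bounds = [(-3.0, 3.0)] * (n_classes * n_feat)
result = differential_evolution(error_rate, bounds, seed=0,
                                maxiter=200, polish=False)
print("training error:", result.fun)
```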
Abstract:
Edible films based on gluten from four types of Brazilian wheat flour (two "semi-hard" and two "soft") were prepared, and their mechanical and barrier properties were compared with those of films made from vital wheat gluten. Water vapor permeability, oxygen permeability, tensile strength, percent elongation at break, solubility in water, and surface morphology were measured. The films from the "semi-hard" wheat flours showed water vapor permeability and solubility in water similar to those of films from vital gluten, and better tensile strength than the films from the "soft" flours and vital gluten. The films from vital gluten had higher elongation at break and oxygen permeability, and also lower solubility in water, than the films from the Brazilian "soft" wheat flours. Although vital gluten showed greater mechanical resistance, which is desirable for bakery products, Brazilian "semi-hard" wheat flours can be used instead of vital gluten for developing gluten films, since they showed similar barrier and mechanical properties.
Abstract:
Moisture equilibrium data of mango pulp were determined using the static gravimetric method. Adsorption and desorption isotherms were obtained in the range of 30-70 ºC, at water activities (aw) from 0.02 to 0.97. The application of the GAB model to the experimental results, using direct nonlinear regression analysis, provided good agreement between experimental and calculated values. The net isosteric heat of sorption was estimated from the equilibrium sorption data using the Clausius-Clapeyron equation. The isosteric heats of sorption were found to increase with increasing temperature and could be well fitted by an exponential relationship. The enthalpy-entropy compensation theory was applied to the sorption isotherms, and plots of ΔH versus ΔS provided the isokinetic temperatures, indicating an enthalpy-controlled sorption process.
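For reference, the two relations applied above take the following standard forms (textbook notation, assumed rather than copied from the paper): the GAB model expresses the equilibrium moisture content X_eq through the monolayer value X_m and fitted constants C and K, and the Clausius-Clapeyron equation yields the net isosteric heat q_st from isotherms at constant moisture content X.

```latex
% GAB sorption isotherm:
X_{eq} = \frac{X_m \, C \, K \, a_w}{\left(1 - K a_w\right)\left(1 - K a_w + C K a_w\right)}

% Clausius-Clapeyron estimate of the net isosteric heat of sorption:
q_{st} = -R \left[ \frac{\partial \ln a_w}{\partial (1/T)} \right]_{X}
```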
Abstract:
Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and to transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas belonging to the smart region, where people have limited access to regular healthcare services. The same system can be beneficial in urban areas, where hospitals can be overcrowded and where it may take substantial time to obtain healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained via the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements, and data generation. Furthermore, the thesis proposes IReHMo, an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, namely HTTP, MQTT, and CoAP, were evaluated and compared against each other. The results showed that CoAP is the most efficient protocol for transmitting small healthcare data payloads to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the combination of IReHMo and CoAP at large scale in regions of northern Sweden.
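To give a flavor of a CoAP-based upload of a small health reading, here is a minimal Python sketch using the aiocoap library; the server URI and JSON payload format are hypothetical stand-ins, not IReHMo's actual interface.

```python
# Minimal sketch: POST one small health reading over CoAP with aiocoap
# (pip install aiocoap). The URI and payload schema are hypothetical.
import asyncio
import json
from aiocoap import Context, Message, POST

async def send_reading():
    payload = json.dumps({"sensor": "heart_rate", "bpm": 72}).encode()
    context = await Context.create_client_context()
    request = Message(code=POST, payload=payload,
                      uri="coap://health.example.com/readings")  # hypothetical
    response = await context.request(request).response
    print("Response code:", response.code)

asyncio.run(send_reading())
```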
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative), and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of an object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application-specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e., a part detector) that can provide information additional to the object location, for example pose. The object class model, i.e., the appearance of the object parts and their spatial variance, or constellation, is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e., the generative detector produces a large number of false positive detections. Thus, a discriminative classifier is used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
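The appearance-to-part-probability step can be pictured with a plain (non-randomized) Gaussian mixture, as in the sketch below; the feature vectors are random stand-ins for complex-valued Gabor responses, and the part count is an illustrative assumption rather than the thesis' configuration.

```python
# Minimal sketch: an unsupervised GMM turns local feature vectors into
# per-part probabilities (sklearn's standard GMM, not the thesis'
# randomized variant; random data stands in for Gabor features).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = rng.standard_normal((500, 8))  # stand-in Gabor feature vectors

n_parts = 5  # assumed number of object parts
gmm = GaussianMixture(n_components=n_parts, covariance_type="full",
                      random_state=0).fit(features)

part_probs = gmm.predict_proba(features)  # shape (500, n_parts)
print(part_probs[0])  # probabilities of the first vector under each part
```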
Abstract:
Imagine the potential implications of an organization whose business and IT processes are well aligned and capable of responding both reactively and proactively to external and internal changes. The Philips IT Infrastructure and Operations department (I&O) is undergoing a series of transformation activities to help the Philips business keep up with change. I&O would serve a critical function in any business sector; given that I&O's strategy switched from "design, build and run" to "specify, acquire and performance manage", that function is amplified. In 2013, I&O's biggest transformation programme, I&O Futures, engaged multiple interdisciplinary departments and programs in decommissioning legacy processes and restructuring new processes with respect to the Information Technology Infrastructure Library (ITIL), helping I&O to achieve a common infrastructure and operating platform (CI&OP). The author joined I&O Futures in early 2014 and contributed to the CI&OP release 1, during which a model called Bing Box was designed and evaluated through the lens of Six Sigma's structured define-measure-analyze-improve-control (DMAIC) improvement approach. The Bing Box model was intended, first, to combine business and IT principles, namely Lean IT, Agile, ITIL best practices, and aspect-oriented programming (AOP), into a framework. Second, the author implemented modularized optimization cycles according to the defined framework in Philips' ITIL-based processes in order to enhance business process performance as well as to increase the efficiency of the optimization cycles themselves. What is unique about this thesis is that the Bing Box model not only provided comprehensive optimization approaches and principles for business process performance but also integrated and standardized optimization modules for the optimization process itself. The research followed design research guidelines that seek to extend the boundaries of human and organizational capabilities by creating new and innovative artifacts. Chapter 2 first reviews current research on Lean Six Sigma, Agile, AOP, and ITIL, aiming to identify the broad conceptual bases for this study. Chapter 3 covers the process of constructing the Bing Box model. Chapter 4 describes the adoption of the Bing Box model: two implementation cases validated by stakeholders through observations and interviews. Chapter 5 contains the concluding remarks, the limitations of this research work, and future research areas. Chapter 6 provides the references used in this thesis.
Abstract:
The purpose of this study is to investigate whether commercial kraft lignin can be treated with a pulsed corona discharge apparatus so that it becomes active. Active lignin refers to lignin that can be precipitated onto the surface of a fiber by lowering the pH. A secondary aim is to remove the pungent smell of kraft lignin, which is caused by organically bound sulfur. It is expected that the study will identify mild processing conditions and parameters for achieving the desired outcome. In the literature review, the properties of lignin are explained, as is their impact on any further processing. In addition, a number of processes are described for the oxidation of lignin in a variety of applications. In the experimental part of the study, test runs were conducted to determine the effects of oxygen supply and pulse frequency on the oxidation results, the purpose being to produce reactive lignin and to find a process that is feasible at an industrial scale. Based on the reported experiments, lignin could not be made active or precipitated onto the surface of the fiber. Actual changes in the structure of lignin were not observed, but the pungent smell of lignin was removed. The exact reason for this change could not be established, because sulfur NMR analysis did not work for the lignin samples.