46 results for EFFICIENT RED ELECTROLUMINESCENCE
Abstract:
In nature, many animals use body coloration to communicate with each other. Colorations can serve as signals between individuals of the same species, but also as a way to recognise individuals of other species and to judge whether they pose a threat. Many animals use protective coloration to avoid predation. The two most common strategies of protective coloration are camouflage and aposematism. Camouflaged animals have coloration that minimises detection, usually by matching colours or structures in the background. Aposematic animals, on the other hand, signal to predators that they are defended. The defence can consist of physical structures, such as spikes and hairs, or of chemical compounds that make the animal distasteful or even deadly toxic. For a warning signal to be effective, the predator has to recognise it as such. Studies have shown that birds, for example, which are important visual predators of insects, learn to recognise and avoid unpalatable prey faster if the prey contrast with the background or have large internal contrasts. Typical examples of aposematic species have conspicuous colours like yellow, orange or red, often in combination with black. My thesis focuses on the appearance and function of aposematic colour patterns. Even though researchers have studied aposematism for over a century, there is still a lot we do not know about the phenomenon. For example, as it is crucial that predators recognise a warning signal, aposematic colorations should presumably evolve towards homogeneity and be selected for maximal conspicuousness. Instead, there is extensive variation in colours and patterns among warning colorations, and it is not uncommon to find typically cryptic colours, such as green and brown, in aposematic colour patterns. One hypothesis for this variation is that an aposematic coloration does not have to be maximally signalling in order to be effective; instead, it is sufficient to have distinct features that can easily be distinguished from edible prey. Being maximally conspicuous is one way to achieve this, but not the only way. Another hypothesis is that aposematic prey that do not exhibit maximal conspicuousness can exploit both camouflage and aposematism in a distance-dependent fashion, signalling when seen close up but remaining camouflaged at a distance. Many prey animals also make use of both strategies by shifting colour under different ecological conditions, such as seasonal variations, fluctuations in food resources, or between life stages. Yet another explanation for the variation may be that prey animals are usually exposed to several predator species that vary in visual perception and tolerance towards various toxins. The aim of this thesis is, by studying their functions, to understand why aposematic warning signals vary in appearance, specifically in the level of conspicuousness, and whether warning coloration can be combined with camouflage. In paper I, I investigated whether the colour pattern of the aposematic larva of the Apollo butterfly (Parnassius apollo) can switch function with viewing distance, being signalling at close range but camouflaged at a distance, by comparing detection times between different colour variants and distances. The results show that the natural coloration has a dual, distance-dependent function. Moreover, the study shows that an aposematic coloration does not have to be selected for maximal conspicuousness.
A prey animal can optimise its coloration primarily by avoiding detection, but also by investing in a secondary defence whose presence can be signalled once the animal is detected. In paper II, I studied how easily the coloration of the firebug (Pyrrhocoris apterus), a typical aposematic species, is detected at different distances against different natural backgrounds, by comparing detection times between different colour variants. Here, I found no distance-dependent switch in function. Instead, the results show that the coloration of the firebug is selected for maximal conspicuousness. One explanation for this is that the firebug is more mobile than the butterfly larva in study I, and movement is often incompatible with efficient camouflage. In paper III, I investigated whether a seasonally related colour change in the chemically defended striated shieldbug (Graphosoma lineatum) is an adaptation that optimises a protective coloration by shifting from camouflage to aposematism between two seasons. The results confirm the hypothesis that the coloration expressed in the late summer has a camouflage function, blending in with the background. Further, I investigated whether the internal pattern as such increased the effectiveness of the camouflage. Again, the results are in accordance with the hypothesis, as the patterned coloration was more difficult to detect than colorations lacking an internal pattern. This study shows how an aposematic species can optimise its defence by shifting from camouflage to aposematism, but in a different fashion than that studied in paper I. The aim of study IV was to study the selection on aposematic signals by identifying characteristics that are common to the colorations of aposematic species and that distinguish them from the colorations of other species. I compared contrast, pattern element size and colour proportion between a group of defended species and a group of undefended species. Contrary to my prediction, the results show no significant differences between the two groups in any of the analyses. One explanation for the non-significant results could be that there are no universal characteristics common to aposematic species. Instead, the selection pressures acting on defended species vary, and therefore affect their appearance differently. Another explanation is that not all defended species may have been selected for a conspicuous aposematic warning coloration. Taken together, my thesis shows that having a conspicuous warning coloration is not the only way to be aposematic. Also, aposematism and camouflage are not two mutually exclusive opposites, as there are prey species that exploit both strategies. It is also important to understand that prey animals are exposed to various selection pressures and trade-offs that affect their appearance and determine what an optimal coloration is for each species or environment. In conclusion, I hold that the variation among warning colorations is larger than previously assumed, and that coloration properties considered archetypically aposematic may not be as widespread and representative as previously thought.
Abstract:
Pumping processes requiring a wide range of flow rates are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, variable-speed control allows the required process output to be delivered with a varying number of operated pump units and selected rotational speed references. However, optimization of parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system under varying operating conditions. The information available for system modelling in typical parallel pumping applications, such as wastewater treatment and various cooling and water delivery tasks, can be limited, and the lack of real-time operation point monitoring often limits the accuracy of energy efficiency optimization. Hence, alternative, easily implementable control strategies that can be adopted with minimal system data are necessary. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in system scenarios in which each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the operating conditions suitable for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining, for each parallel pump unit, a speed reference that promotes the energy-efficient operation of the pumping system.
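To make the characteristic-curve-based estimation concrete, the following minimal Python sketch refers a frequency converter's speed and shaft power estimates to the pump's nominal-speed curves through the affinity laws (Q ~ n, H ~ n^2, P ~ n^3). The curve shapes and all numbers are illustrative assumptions, not data from the thesis.

    import numpy as np

    # Nominal-speed characteristic curves (illustrative example data):
    # flow (m^3/h) vs. head (m) and vs. shaft power (kW) at speed N0.
    N0 = 1450.0                              # nominal speed, rpm
    Q_GRID = np.linspace(0.0, 100.0, 201)    # flow grid, m^3/h
    H_CURVE = 40.0 - 0.002 * Q_GRID**2       # assumed QH curve
    P_CURVE = 2.0 + 0.08 * Q_GRID            # assumed QP curve (monotonic,
                                             # a requirement for inversion)

    def estimate_operating_point(power_kw, speed_rpm):
        """QP-curve-based operating point estimation via the affinity laws:
        Q ~ n, H ~ n^2, P ~ n^3 relative to the nominal speed N0."""
        ratio = speed_rpm / N0
        p_at_n0 = power_kw / ratio**3                  # refer power to N0
        q_at_n0 = np.interp(p_at_n0, P_CURVE, Q_GRID)  # invert the QP curve
        h_at_n0 = np.interp(q_at_n0, Q_GRID, H_CURVE)  # head from the QH curve
        return q_at_n0 * ratio, h_at_n0 * ratio**2     # scale back to actual speed

    flow, head = estimate_operating_point(power_kw=4.0, speed_rpm=1200.0)
    print(f"estimated flow {flow:.1f} m^3/h, head {head:.1f} m")

The attraction of this scheme is exactly what the thesis exploits: the speed and power estimates already exist inside the frequency converter, so no external flow meter is needed.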
Abstract:
In recent years, chief information officers (CIOs) around the world have identified Business Intelligence (BI) as their top priority and as the best way to enhance their enterprise's competitiveness. Yet, many enterprises are struggling to realize the business value that BI promises. This discrepancy raises important questions, for example: what are the critical success factors of Business Intelligence and, more importantly, how can it be ensured that a Business Intelligence program enhances an enterprise's competitiveness? The main objective of the study is to find out how it can be ensured that a BI program meets its goals in providing competitive advantage to an enterprise. The objective is approached with a literature review and a qualitative case study. For the literature review, the main objective motivates three research questions (RQs). RQ1: What is Business Intelligence and why is it important for modern enterprises? RQ2: What are the critical success factors of Business Intelligence programs? RQ3: How can it be ensured that CSFs are met? The qualitative case study covers the BI program of a Finnish global manufacturing company. The research questions for the case study are as follows. RQ4: What is the current state of the case company's BI program and what are the key areas for improvement? RQ5: In what ways could the case company's Business Intelligence program be improved? The case company's BI program is researched using the following methods: action research, semi-structured interviews, maturity assessment and benchmarking. The literature review shows that Business Intelligence is a technology-based information process that contains a series of systematic activities driven by the specific information needs of decision-makers. The objective of BI is to provide accurate, timely, fact-based information that enables taking actions that lead to competitive advantage. There are many reasons for the importance of Business Intelligence, two of the most important being: 1) it helps to bridge the gap between an enterprise's current and desired performance, and 2) it helps an enterprise stay aligned with its key performance indicators and thereby with its key objectives. The literature review also shows that there are known critical success factors (CSFs) for Business Intelligence programs which have to be met if the above-mentioned value is to be achieved, for example: committed management support and sponsorship, a business-driven development approach, and sustainable data quality. The literature review shows that the most common challenges are related to these CSFs and, more importantly, that overcoming these challenges requires a more comprehensive form of BI, called Enterprise Performance Management (EPM). EPM links measurement to strategy by focusing on what is measured and why. The case study shows that many of the challenges faced in the case company's BI program are related to the above-mentioned CSFs. The main challenges are: lack of support and sponsorship from the business, lack of visibility into overall business performance, lack of a rigorous BI development process, lack of a clear purpose for the BI program, and poor data quality. To overcome these challenges, the case company should define and design an enterprise metrics framework, make sure that BI development requirements are gathered and prioritized by the business, focus on data quality and ownership, and finally define clear goals for the BI program and then support and sponsor these goals.
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to one based on software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in increased operational cost, while under-provisioning leads to subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications. The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution of this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
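To make the reactive side of such provisioning concrete, the sketch below shows a generic threshold-based scaler in Python. It illustrates the reactive provisioning idea only; it is not the ARVUE algorithm itself, and the capacities and thresholds are assumed numbers.

    import math

    def plan_capacity(active_sessions, vms,
                      capacity_per_vm=100,   # sessions one VM can serve (assumed)
                      scale_up_at=0.8,       # provision early to avoid overload
                      scale_down_at=0.4,     # release lazily to avoid oscillation
                      min_vms=1):
        """Return the VM count for the next control period."""
        utilization = active_sessions / (vms * capacity_per_vm)
        if utilization > scale_up_at:
            # Provision enough VMs to bring utilization back under the threshold.
            return max(min_vms,
                       math.ceil(active_sessions / (capacity_per_vm * scale_up_at)))
        if utilization < scale_down_at and vms > min_vms:
            return vms - 1                   # consolidate one VM per period
        return vms

    print(plan_capacity(active_sessions=950, vms=10))   # -> 12 (scale up)
    print(plan_capacity(active_sessions=250, vms=10))   # -> 9  (scale down)

Admission control complements this loop: while new VMs are still booting, sessions beyond the current capacity are rejected or queued instead of overloading the provisioned servers.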
Abstract:
The driving forces behind current research on flame retardants are increased fire safety in combination with flame retardant formulations that fulfill the criteria of sustainable production and products. In recent years, important questions have been raised about the environmental safety of antimony and, in particular, brominated flame retardants. As a consequence, the current doctoral thesis work describes efforts to develop new halogen-free flame retardants based on various radical generators and phosphorus compounds. The investigation first focused on compounds capable of generating alkyl radicals, in order to study their role in the flame retardancy of polypropylene. The family of azoalkanes was selected as the cleanest and most convenient source of free alkyl radicals. Therefore, a number of symmetrical and unsymmetrical azoalkanes of the general formula R-N=N-R' were prepared. The experimental results show that in the series of different-sized azocycloalkanes the flame retardant efficacy decreased in the following order: R = R' = cyclohexyl > cyclopentyl > cyclobutyl > cyclooctanyl > cyclododecanyl. In the series of aliphatic azoalkanes, however, the efficacy decreased as follows: R = R' = n-alkyl > tert-butyl > tert-octyl. The most striking difference in flame retardant efficacy was observed in thick polypropylene plaques of 1 mm; e.g., azocyclohexane (AZO) had a much better flame retardant performance in thick PP sections than did the commercial reference FR (Flamestab® NOR116). In addition, some of the prepared azoalkane flame retardants, e.g. 4'4-bis(cyclohexylazocyclohexyl)methane (BISAZO), exhibited non-burning dripping behavior. Extrusion coating experiments with flame-retarded low density polyethylene (LDPE) on a standard machine-finished Kraft paper were carried out in order to investigate the potential of azoalkanes in multilayer facings. The results show that azocyclohexane (AZO) and 4'4-bis(cyclohexylazocyclohexyl)methane (BISAZO) can significantly improve the flame retardant properties of low density polyethylene coated paper already at 0.5 wt.% loadings, provided that the maximum extrusion temperature of 260 °C is not exceeded and the coating weight is kept low at 13 g/m2. In addition, various triazene-based flame retardants (R-N1=N2-N3R'R'') were prepared. For example, polypropylene samples containing a very low concentration of only 0.5 wt.% of bis-4'4'-(3'3'-dimethyltriazene) diphenyl ether and other triazenes passed the DIN 4102-1 test with B2 classification. It is noteworthy that no burning dripping could be detected, and the average burning times were very short with exceptionally low weight losses. Triazene compounds therefore constitute a new and interesting family of radical generators for flame retarding polymeric materials. The high flame retardant potential of triazenes can be attributed to their ability to generate various types of radicals during their thermal decomposition. According to thermogravimetric analysis/Fourier transform infrared spectroscopy/MS analysis, triazene units are homolytically cleaved into various aminyl and resonance-stabilized aryl radicals and different CH fragments, with simultaneous evolution of elemental nitrogen. Furthermore, the potential of thirteen aliphatic, aromatic, thiuram and heterocyclic substituted organic disulfide derivatives of the general formula R-S-S-R' as a new group of halogen-free flame retardants for polypropylene films has been investigated. According to the DIN 4102-1 standard ignitibility test, it has been demonstrated for the first time that many of the disulfides alone can effectively provide flame retardancy and self-extinguishing properties to polypropylene films already at very low concentrations of 0.5 wt.%. Within the disulfide family, the highest FR activity was recorded for 5,5'-dithiobis(2-nitrobenzoic acid). Very low values for burning length (53 mm) and burning time (10 s) reflect the significantly increased fire retardant performance of this disulfide compared to other compounds in this series as well as to Flamestab® NOR116. Finally, two new phosphorus-based flame retardants were synthesized: P,P-diphenylphosphinic hydrazide (PAH) and melamine phenyl phosphonate (MPhP). The DIN 4102-1 test and the more stringent UL94 vertical burning test (UL94 V) were used to assess the formulations' ability to extinguish a flame once ignited. A very strong synergistic effect with azoalkanes was found, i.e., in combination with these radical generators even a UL94 V0 rating could be obtained.
Abstract:
Identification of low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. For this purpose, an efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
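For readers unfamiliar with density ridges, the following is a sketch of the standard characterization from the ridge literature; the thesis may state the definition differently. Let $g(x) = \nabla \hat f(x)$ and $H(x) = \nabla^2 \hat f(x)$, with eigenvalues $\lambda_1(x) \ge \dots \ge \lambda_n(x)$ and orthonormal eigenvectors $v_1(x), \dots, v_n(x)$. A point $x$ lies on a $d$-dimensional ridge of $\hat f$ when

\[
v_i(x)^{\top} g(x) = 0
\quad\text{and}\quad
\lambda_i(x) < 0
\qquad\text{for } i = d+1, \dots, n,
\]

that is, $x$ is a local maximum of $\hat f$ restricted to the span of the $n-d$ most negative curvature directions, which is exactly the "generalized maximum" mentioned above. For the Gaussian kernel density estimate built from data points $y_j$,

\[
\hat f(x) = \frac{1}{N} \sum_{j=1}^{N} (2\pi h^2)^{-n/2}
\exp\!\left( -\frac{\lVert x - y_j \rVert^2}{2 h^2} \right),
\]

both $g$ and $H$ are available in closed form, which is what makes a trust region Newton projection onto the ridge tractable.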
Abstract:
Since the discovery of the up-conversion phenomenon, there has been an ever-increasing interest in up-converting phosphors, in which the absorption of two or more low-energy photons is followed by the emission of a higher-energy photon. Most up-conversion luminescence materials operate by using a combination of a trivalent rare earth (lanthanide) sensitizer (e.g. Yb or Er) and an activator (e.g. Er, Ho, Tm or Pr) ion in a crystal lattice. Up-converting phosphors have a variety of potential applications in lasers and displays, as well as in inks for security printing (e.g. bank notes and bonds). One of the most sophisticated applications of lanthanide up-conversion luminescence is probably in medical diagnostics. However, there are some major problems in the use of photoluminescence based on direct UV excitation in immunoassays: human blood strongly absorbs UV radiation, as well as the visible emission of the phosphor. A promising way to overcome the problems arising from blood absorption is to use long-wavelength excitation and benefit from the up-conversion luminescence. Since whole blood absorbs practically nothing in the near-IR region, it has no capability for up-conversion at the excitation wavelengths of the conventional up-converting phosphor based on the Yb3+ (sensitizer) and Er3+ (activator) combination. The aim of this work was to prepare nanocrystalline materials with high red (and green) up-conversion luminescence efficiency for use in quantitative whole-blood immunoassays. For coupling to biological compounds, nanometer-sized (crystallite size below 50 nm) up-converting phosphor particles are required. The nanocrystalline ZrO2:Yb3+,Er3+, Y2O2S:Yb3+,Er3+, NaYF4:Yb3+,Er3+ and NaRF4-NaR'F4 (R: Y, Yb, Er) materials, prepared with combustion, sol-gel, flux, co-precipitation and solvothermal syntheses, were studied using thermal analysis, FT-IR spectroscopy, transmission electron microscopy, EDX spectroscopy, XANES/EXAFS measurements, absorption spectroscopy, X-ray powder diffraction, as well as up-conversion and thermoluminescence spectroscopies. The effects of phosphor impurities, crystallite size, and crystal structure on the up-conversion luminescence intensity were analyzed. Finally, a new phenomenon, persistent up-conversion luminescence, was introduced and discussed. For efficient use in bioassays, more work is needed to yield nanomaterials with smaller and more uniform crystallite sizes. Surface modifications need to be studied to improve dispersion in water. On the other hand, further work must be carried out to optimize the persistent up-conversion luminescence of the nanomaterials to allow their use as efficient immunoassay nanomaterials combining the advantages of both up-conversion and persistent luminescence.
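To see why this works energetically, consider a simple illustrative calculation; the 980 nm excitation and the green (~540 nm) and red (~660 nm) emissions are typical literature values for the Yb3+/Er3+ pair, not figures taken from this work. With

\[
E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{\lambda},
\]

two sequentially absorbed 980 nm photons supply about $2 \times 1.27\ \mathrm{eV} \approx 2.5\ \mathrm{eV}$, which is more than the $\approx 2.3\ \mathrm{eV}$ of one green 540 nm photon or the $\approx 1.9\ \mathrm{eV}$ of one red 660 nm photon. The emission is thus at a shorter wavelength than the excitation, in a window where whole blood is nearly transparent.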
Abstract:
In recent years, studies have shown a correlation between hyperactivity in children and the use of artificial food additives, including colorants. This has, in part, led to a preference for natural products over products with artificial additives. Consumers have also become more aware of health issues. Natural food colorants have many bioactive functions, mainly the vitamin A activity of carotenoids and antioxidativity, and could therefore be more easily accepted by consumers. However, natural colorant compounds are usually unstable, which restricts their usage. Microencapsulation could be one way to enhance the stability of natural colorant compounds and thus enable their better usage as food colorants. Microencapsulation is a term used for processes in which the active material is totally enveloped in a coating or capsule, and is thus separated and protected from the surrounding environment. In addition to the protection provided by the capsule, microencapsulation can also be used to modify the solubility and other properties of the encapsulated material, for example, to incorporate fat-soluble compounds into aqueous matrices. The aim of this thesis work was to study the stability of two natural pigments, lutein (a carotenoid) and betanin (a betalain), and to determine possible ways to enhance their stability with different microencapsulation techniques. Another aim was the extraction of the pigments without the use of organic solvents and the development of previously used extraction methods. The stability of the pigments in microencapsulated pigment preparations, and in model foods containing these, was studied by measuring the pigment content after storage under different conditions. Preliminary studies on the bioavailability of microencapsulated pigments and a sensory evaluation of consumer acceptance of model foods containing microencapsulated pigments were also carried out. Enzyme-assisted oil extraction was used to extract lutein from marigold (Tagetes erecta) flowers without organic solvents, and the yield was comparable to solvent extraction of lutein from the same flowers. The effects of temperature, extraction time, and beet:water ratio on the extraction efficiency of betanin from red beet (Beta vulgaris) were studied, and the optimal conditions for maximum yield and maximum betanin concentration were determined. In both cases, extraction at 40 °C was better than extraction at 80 °C, and extraction for five minutes was as efficient as for 15 or 30 minutes. For maximum betanin yield, a beet:water ratio of 1:2 was better, possibly with repeated extraction, but for maximum betanin concentration, a ratio of 1:1 was better. Lutein was incorporated into oil-in-water (o/w) emulsions with a polar oil fraction from oat (Avena sativa) as an emulsifier and mixtures of guar gum and xanthan gum or locust bean gum and xanthan gum as stabilizers to retard creaming. The stability of lutein in these emulsions was quite good, with 77 to 91 percent of the lutein remaining after storage in the dark at 20 to 22 °C for 10 weeks, whereas in spray-dried emulsions the retention of lutein was 67 to 75 percent. The retention of lutein in oil was also good, at 85 percent. Betanin was incorporated into the inner w1 water phase of a water1-in-oil-in-water2 (w1/o/w2) double emulsion with a primary w1/o emulsion droplet size of 0.34 μm, a secondary w1/o/w2 emulsion droplet size of 5.5 μm, and an encapsulation efficiency of betanin of 89 percent. In vitro intestinal lipid digestion was performed on the double emulsion; during the first two hours, coalescence of the inner water phase droplets was observed, and the sizes of the double emulsion droplets increased quickly because of aggregation. This period also corresponded to a gradual release of betanin, with a final release of 35 percent. The double emulsion structure was retained throughout the three-hour experiment. Betanin was also spray dried and incorporated into model juices with different pH and dry matter contents. The model juices were stored in the dark at -20, 4, 20-24 or 60 °C (accelerated test) for several months. Betanin degraded quite rapidly in all of the samples; higher temperature and lower pH accelerated the degradation. The stability of betanin was much better in the spray-dried powder, with practically no degradation during six months of storage in the dark at 20 to 24 °C, and good stability also for six months in the dark at 60 °C, with 60 percent retention. Consumer acceptance of model juices colored with spray-dried betanin was compared with similar model juices colored with anthocyanins or beet extract. Consumers preferred the beet extract and anthocyanin colored model juices over the juices colored with spray-dried betanin. However, unlike the beet extract, spray-dried betanin did not impart any off-odors or off-flavors to the model juices. In conclusion, this thesis describes novel solvent-free extraction and encapsulation processes for lutein and betanin from plant sources. Lutein showed good stability in oil and in o/w emulsions, but slightly inferior stability in spray-dried emulsions. In vitro intestinal lipid digestion showed a good stability of the w1/o/w2 double emulsion and a quite high retention of betanin during digestion. Consumer acceptance of model juices colored with spray-dried betanin was not as good as that of model juices colored with anthocyanins, but adding betanin to a real berry juice, where it would mix with the natural berry anthocyanins, could produce a more acceptable color. Overall, further studies are needed to obtain natural colorants with good stability for use in food products.
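For reference, the encapsulation efficiency quoted above presumably follows the standard definition for double emulsions (an assumption; the analytical protocol is not given in this abstract):

\[
\mathrm{EE}\ (\%) = \frac{m_{\text{total}} - m_{\text{free}}}{m_{\text{total}}} \times 100,
\]

where $m_{\text{total}}$ is the total betanin used and $m_{\text{free}}$ is the unencapsulated betanin recovered from the outer w2 phase; an EE of 89 percent thus means that about 11 percent of the betanin remained in the external water phase.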
Abstract:
The map belongs to the A. E. Nordenskiöld collection.
Abstract:
The aim of this thesis was to create a process for all multi-site ramp-up (MSRU) projects in the case company, in order to achieve simultaneous ramp-ups early in the market. The research was done as a case study in one company, using semi-structured interviews. Some processes are already in use in MSRU cases. Interviews with 20 ramp-up specialists revealed the topics to be improved: project team set-up; roles, responsibilities and the recommended project organization; communication; product change management practices; competence and know-how transfer practices; and the support model. More R&D support and involvement is needed in MSRU projects. The DCM's role in MSRU projects, among the PMT team, is very important; the DCM should be the business owner of the project. It is recommended that product programs take care of the product and repair training for new products in volume factories. R&D's participation in competence transfers is essential in MSRU projects. Project communication could be shared through a dedicated intranet community, and blogging and tweeting could be considered in the communication plan. If hundreds of change notes are still open in the ramp-up phase, approving the product for volume ramp-up should be reconsidered. The PMTs' support is also important, and MSRU projects should be planned, budgeted and executed together. Finally, a new MSRU process, to be used in all MSRU projects, is presented in this thesis.
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables the parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but is becoming an issue at ground level as well, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach for designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore various dynamic reconfiguration mechanisms and integrate them into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and to transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas of a smart region, where people have limited access to regular healthcare services. The same system can be beneficial in urban areas, where hospitals can be overcrowded and where it may take substantial time to access healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained with the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, the thesis proposes IReHMo, an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, such as HTTP, MQTT and CoAP, have been evaluated and compared against each other. The results showed that CoAP is the most efficient protocol for transmitting small healthcare data payloads to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the combination of IReHMo and CoAP at large scale in regions of northern Sweden.
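As an illustration of the kind of transfer being compared, the sketch below sends one reading over CoAP with the aiocoap Python library. The endpoint URI and the JSON payload are illustrative assumptions, not details of the IReHMo architecture or the commercial product.

    # pip install aiocoap
    import asyncio
    import json

    from aiocoap import Context, Message, POST

    async def send_reading(uri, reading):
        context = await Context.create_client_context()
        # CoAP's compact 4-byte base header (over UDP) is what keeps the
        # per-message overhead low for small payloads like this one.
        payload = json.dumps(reading).encode("utf-8")
        request = Message(code=POST, uri=uri, payload=payload)
        response = await context.request(request).response
        print("server replied:", response.code)

    asyncio.run(send_reading(
        "coap://monitoring.example.org/health/readings",  # hypothetical endpoint
        {"patient": "anon-001", "heart_rate": 72},
    ))

The design choice the results point to is a protocol-level one: for frequent, small sensor readings, the fixed per-message overhead dominates, which favours CoAP over HTTP.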
Abstract:
Illustration for the fairy tale Little Red Riding Hood.
Abstract:
The increasing performance of computers has made it possible to solve algorithmically problems for which manual, and possibly inaccurate, methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine the distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure in different areas. In the second problem, the input consists of a set of sites where water quality observations have been made, together with the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted at serial operation or a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
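The geometric core of the fetch length problem can be sketched as ray-segment intersection: the fetch in one direction is the distance to the nearest shoreline segment hit by a ray from the study point. The Python below is only the brute-force baseline implied by the problem statement, not one of the thesis's efficient algorithms.

    import math

    def fetch_length(point, direction_deg, shoreline_segments):
        """Distance from `point` to the nearest shoreline segment in the
        given direction; infinity if no segment is hit.
        shoreline_segments: iterable of ((x1, y1), (x2, y2)) pairs."""
        px, py = point
        theta = math.radians(direction_deg)
        dx, dy = math.cos(theta), math.sin(theta)
        best = math.inf
        for (x1, y1), (x2, y2) in shoreline_segments:
            sx, sy = x2 - x1, y2 - y1
            denom = dx * sy - dy * sx      # zero if ray and segment are parallel
            if abs(denom) < 1e-12:
                continue
            # Solve point + t*(dx, dy) = (x1, y1) + u*(sx, sy).
            t = ((x1 - px) * sy - (y1 - py) * sx) / denom
            u = ((x1 - px) * dy - (y1 - py) * dx) / denom
            if t > 0.0 and 0.0 <= u <= 1.0:
                best = min(best, t)        # t is the distance (unit direction)
        return best

    segments = [((0.0, 5.0), (10.0, 5.0))]           # straight shoreline 5 units north
    print(fetch_length((2.0, 0.0), 90.0, segments))  # -> 5.0

With millions of polygon vertices, millions of study points, and several directions each, this O(points x segments x directions) baseline is exactly what the thesis's algorithms are designed to avoid.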