44 results for Possible solutions
Abstract:
The aim of this research is to study the boundary zone between home and work and the tensions people experience while reconciling the two. How are the requirements of the family, the home and the work taken care of in everyday life? What kind of difficulties does the individual experience when combining home activities with job requirements? What kind of activity policies have families created to ease everyday life? What kind of goals and requirements do families see behind the difficulties in adjusting home and work? What kind of changes would make the adjusting of home and work easier? The changing family life, everyday home activities and the changing Finnish working life are studied to describe the adjusting of home and work. In addition, the boundary zone of home and work and its tensions are studied. 337 research participants who find reconciling home and work challenging were selected from different sectors of working life: the public, private and third sectors. The research material was gathered with a semi-structured qualitative questionnaire published on the internet and was analysed using content analysis. The tensions of adjusting home and work are varied. Several activity systems meet in the boundary zone of home and work, causing the boundary zone to expand and tensions to increase and spread like a network. In the everyday life of an individual the boundary zone fades out and home and work overlap. Tensions can be examined as internal conflicts of the individual through the activity system of everyday life. Individuals balance between individualism and familism, feeling bad, suffering from lack of time and struggling with problems in organizing childcare and with inflexible employers. The solutions to the difficulties of reconciling home and work are situational; often the help of family and friends is needed, without any lasting solutions. Behind the problems lies the conflict between goals, requirements and reality, as well as the tightening terms of working life and its growing expectations. Change requests are proposed at the levels of the individual, the home, work and society. Reconciling home and work is not only a challenge between the employee and the employer; it is a problem that needs multilateral solutions and changes at the levels of the individual, home, work and society. The remaining challenge is to find out whether taking everyday life as the starting point for negotiating the reconciliation of home and work would be successful, and how possible family, social and labour policy solutions would appear in everyday life.
Abstract:
The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in the studies. One method for studying the intra- and intermolecular bindings in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In the process of Compton scattering a photon scatters inelastically from an electron. The Compton profile that is obtained from the electron wave functions is directly proportional to the probability of photon scattering at a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions necessary to calculate the Compton profiles we need some statistical information about atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
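As an illustration of the quantity involved, the following is a minimal sketch of computing a Compton profile within the impulse approximation for an isotropic momentum density, J(q) = 2*pi * integral from |q| to infinity of p*rho(p) dp; the hydrogen-like 1s density used here is only a stand-in for the wave functions derived from molecular dynamics configurations in the actual work.

```python
# Minimal sketch: Compton profile J(q) from an isotropic momentum density rho(p)
# within the impulse approximation, J(q) = 2*pi * integral_{|q|}^{inf} p*rho(p) dp.
# The hydrogen-like 1s density below is only a stand-in for the wave functions
# that the thesis derives from molecular-dynamics configurations.
import numpy as np

def compton_profile(q_values, rho, p_max=40.0, n_points=20000):
    """Numerically integrate J(q) for an isotropic momentum density rho(p)."""
    profiles = []
    for q in q_values:
        p = np.linspace(abs(q), p_max, n_points)
        profiles.append(2.0 * np.pi * np.trapz(p * rho(p), p))
    return np.array(profiles)

# Hydrogen 1s momentum density in atomic units: rho(p) = 8 / (pi^2 (1 + p^2)^4)
rho_1s = lambda p: 8.0 / (np.pi**2 * (1.0 + p**2)**4)

q = np.linspace(0.0, 5.0, 11)
J_numeric = compton_profile(q, rho_1s)
J_exact = 8.0 / (3.0 * np.pi * (1.0 + q**2)**3)   # known closed form for 1s

# A "difference profile" is then simply the difference of two such profiles,
# e.g. the mixture minus the concentration-weighted sum of the pure components.
print(np.max(np.abs(J_numeric - J_exact)))        # should be close to zero
```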
Abstract:
The light emitted by flat panel displays (FPD) can be generated in many different ways, for example with alternating current thin film electroluminescence (ACTFEL), liquid crystal display (LCD), light emitting diode (LED), or plasma display panel (PDP) technologies. In this work, the focus was on ACTFEL devices and the goal was to develop new thin film processes for light emitting materials in ACTFEL devices. The films were deposited with the atomic layer deposition (ALD) method, which has been utilized in the manufacturing of ACTFEL displays since the mid-1980s. The ALD method is based on surface-controlled, self-terminated reactions, and a maximum of one layer of the desired material can be prepared during one deposition cycle. Therefore, the film thickness can be controlled simply by adjusting the number of deposition cycles. In addition, both large areas and deep trench structures can be covered uniformly. During this work, new ALD processes were developed for the following thin film materials: BaS, CuxS, MnS, PbS, SrS, SrSe, SrTe, SrS1-xSex, ZnS, and ZnS1-xSex. In addition, several ACTFEL devices were prepared in which the light emitting material was a BaS, SrS, SrS1-xSex, ZnS, or ZnS1-xSex thin film doped with Ce, Cu, Eu, Mn, or Pb. The sulfoselenide films were made by substituting elemental selenium for sulfur on the substrate surface during film deposition. In this way, it was possible to replace up to 90% of the sulfur with selenium, and the XRD analyses indicated that the films were solid solutions. The polycrystalline BaS, SrS, and ZnS thin films were deposited at 180-400, 120-460, and 280-500 °C, respectively, and the processes had a wide temperature range in which the growth rate of the films was independent of the deposition temperature. The electroluminescence studies showed that the doped sulfoselenide films gave low emission intensities. However, the emission intensities and emission colors of the doped SrS, BaS, and ZnS films were comparable with those found in earlier studies. It was also shown that the electro-optical properties of the different ZnS:Mn devices differed as a consequence of the different ZnS:Mn processes. Finally, it was concluded that, because a higher deposition temperature seemed to result in a higher emission intensity, the thermal stability of the reactants has a significant role when the light emitting materials of ACTFEL devices are deposited with the ALD method.
Abstract:
Polyethylene is the most widely used synthetic polymer in the world. Most polyethylene is made with Ziegler-Natta catalysts, while polyethylenes for special applications are made with metallocenes, which are nowadays heavily patented. It is therefore laborious to develop new metallocenes. The aim of this work was to investigate the feasibility of replacing the cyclopentadienyl ligands of metallocenes with aminopyridinato ligands without losing the good properties of the metallocenes, such as high activity and the formation of linear polymer. The subject was approached by studying what kind of catalysts the metallocenes are and how they catalyze the polymerization of ethylene. The polymerization behavior of metallocenes was examined by synthesizing a piperazino-substituted indenyl zirconocene catalyst and comparing its polymerization data with that of the indenyl zirconocene catalyst. On the basis of their isolobality, it was thought that aminopyridinato ligands might replace cyclopentadienyl ligands. It was presumed that the polymerization mechanism and the active center in ethylene polymerization would be similar for aminopyridinato and metallocene catalysts. Titanium aminopyridinato complexes were prepared and their structures determined to clarify the relationship between the structure of the catalyst precursor and the polymerization results. The ethylene polymerization results for titanium 2-phenylaminopyridinato catalysts and titanocene catalysts were compared.
Abstract:
The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while having a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that the dimensions are allowed to form clusters, and each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and we experimentally show that segmenting sequences using this segmentation model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and we show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier detection algorithms in sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
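As a point of reference, the following is a minimal sketch of the standard dynamic programming algorithm mentioned above, here for one-dimensional k-segmentation under squared error; the data and the error measure are illustrative, and the approximation algorithm and the alternative segmentation models of the thesis are not shown.

```python
# Minimal sketch of the standard O(n^2 * k) dynamic program for optimal
# k-segmentation of a 1-D sequence under squared error.
import numpy as np

def optimal_segmentation(x, k):
    """Return (error, segment end positions) of the best split of x into k segments."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # prefix sums let us evaluate the squared error of any segment in O(1)
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def seg_cost(i, j):            # squared error of x[i:j], 0 <= i < j <= n
        total = s1[j] - s1[i]
        return (s2[j] - s2[i]) - total * total / (j - i)

    INF = float("inf")
    E = np.full((k + 1, n + 1), INF)    # E[h][j]: best error of x[:j] with h segments
    E[0][0] = 0.0
    back = np.zeros((k + 1, n + 1), dtype=int)
    for h in range(1, k + 1):
        for j in range(h, n + 1):
            for i in range(h - 1, j):
                cand = E[h - 1][i] + seg_cost(i, j)
                if cand < E[h][j]:
                    E[h][j], back[h][j] = cand, i
    # recover segment boundaries by walking the back-pointers
    bounds, j = [], n
    for h in range(k, 0, -1):
        bounds.append(int(j))
        j = int(back[h][j])
    return E[k][n], sorted(bounds)

print(optimal_segmentation([1, 1, 1, 9, 9, 9, 5, 5], k=3))   # -> (0.0, [3, 6, 8])
```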
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
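For concreteness, the following is a minimal brute-force sketch of the NML criterion for the multinomial model, computed directly from its definition; it is exponential in the number of categories and is not one of the efficient algorithms the thesis presents.

```python
# Minimal brute-force sketch of the NML criterion for a multinomial model:
# NML(x) = P(x | ML parameters) / C(K, n), where C(K, n) sums the maximized
# likelihood over all possible data sets of size n. This is only the textbook
# definition; the thesis is about computing the same quantity efficiently.
from math import factorial, log
from itertools import combinations

def compositions(n, k):
    """All count vectors (h_1, ..., h_k) of non-negative integers summing to n."""
    for bars in combinations(range(n + k - 1), k - 1):
        prev, counts = -1, []
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(n + k - 2 - prev)
        yield tuple(counts)

def multinomial_regret(n, k):
    """C(k, n): the NML normalizer (parametric complexity) of a k-category multinomial."""
    total = 0.0
    for counts in compositions(n, k):
        coeff = factorial(n)
        for h in counts:
            coeff //= factorial(h)
        lik = 1.0
        for h in counts:
            if h:                      # 0^0 treated as 1
                lik *= (h / n) ** h
        total += coeff * lik
    return total

def nml_code_length(counts):
    """Stochastic complexity -log NML(x) in nats for observed category counts."""
    n, k = sum(counts), len(counts)
    max_log_lik = sum(h * log(h / n) for h in counts if h)
    return -max_log_lik + log(multinomial_regret(n, k))

print(nml_code_length([6, 3, 1]))   # 10 observations, 3 categories
```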
Abstract:
Wireless technologies are continuously evolving. Second generation cellular networks have gained worldwide acceptance. Wireless LANs are commonly deployed in corporations and university campuses, and their diffusion in public hotspots is growing. Third generation cellular systems have yet to establish themselves everywhere; still, there is an impressive amount of ongoing research on deploying beyond-3G systems. These new wireless technologies combine the characteristics of WLAN-based and cellular networks to provide increased bandwidth. The common direction in which all the efforts in wireless technologies are headed is towards IP-based communication. Telephony services have been the killer application for cellular systems, and their evolution to packet-switched networks is a natural path. Effective IP telephony signaling protocols, such as the Session Initiation Protocol (SIP) and the H.323 protocol, are needed to establish IP-based telephony sessions. However, IP telephony is just one example service of IP-based communication. IP-based multimedia sessions are expected to become popular and to offer a wider range of communication capabilities than pure telephony. In order to combine the advances of future wireless technologies with the potential of IP-based multimedia communication, the next step is to obtain ubiquitous communication capabilities. According to this vision, people must be able to communicate even when no support from an infrastructure network is available, needed or desired. In order to achieve ubiquitous communication, end devices must integrate all the capabilities necessary for IP-based distributed and decentralized communication. Such capabilities are currently missing; for example, it is not possible to utilize native IP telephony signaling protocols in a totally decentralized way. This dissertation presents a solution for deploying the SIP protocol in a decentralized fashion without the support of infrastructure servers. The proposed solution is mainly designed to fit the needs of decentralized mobile environments, and can be applied to small-scale ad-hoc networks as well as to bigger networks with hundreds of nodes. A framework allowing the discovery of SIP users in ad-hoc networks and the establishment of SIP sessions among them, in a fully distributed and secure way, is described and evaluated. Security support allows ad-hoc users to authenticate the sender of a message and to verify the integrity of a received message. The distributed session management framework has been extended in order to achieve interoperability with the Internet and native Internet applications. With limited extensions to the SIP protocol, we have designed and experimentally validated a SIP gateway that allows SIP signaling between ad-hoc networks with a private addressing space and native SIP applications in the Internet. The design is completed by an application-level relay that permits instant messaging sessions to be established in heterogeneous environments. The resulting framework constitutes a flexible and effective approach for the pervasive deployment of real-time applications.
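To illustrate the general idea of serverless operation, the following is a minimal sketch of announcing and discovering SIP users over UDP multicast in an ad-hoc network; the multicast group, port and plain-text message format are assumptions made for this example and do not reproduce the framework of the dissertation.

```python
# Minimal sketch of serverless user discovery: announce a SIP address-of-record
# over UDP multicast so that peers in the same ad-hoc network can learn it.
# The multicast group, port, and payload format are illustrative assumptions,
# not the message format defined by the dissertation's framework.
import socket
import struct

GROUP, PORT = "239.255.0.1", 5061   # assumed values for the example

def announce(aor):
    """Broadcast our SIP address-of-record, e.g. 'sip:alice@adhoc.invalid'."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(f"ANNOUNCE {aor}".encode(), (GROUP, PORT))
    sock.close()

def listen():
    """Collect announcements from other nodes; yields (aor, sender address)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(1024)
        if data.startswith(b"ANNOUNCE "):
            yield data[len(b"ANNOUNCE "):].decode(), addr

# announce("sip:alice@adhoc.invalid")   # run on one node
# for aor, addr in listen(): ...        # run on the other nodes
```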
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains in order to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or just due to chance. The similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
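For context, the following is a minimal sketch of the standard Smith-Waterman best local alignment score, the statistic whose p-value the sketched framework bounds; the scoring parameters are illustrative.

```python
# Minimal sketch of the standard Smith-Waterman best local alignment score,
# the statistic whose significance (p-value) the thesis's framework bounds.
# The scoring parameters below (match/mismatch/linear gap) are illustrative.
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# The distribution of this score over pairs drawn from the null model is what
# a p-value compares the observed score against.
print(local_alignment_score("ACACACTA", "AGCACACA"))
```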
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e. they should also hold in future data. This is an important distinction from traditional association rules, which, in spite of their name and a similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections which occur by chance. Therefore, the principal objective is to search for the rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of dependence without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, like Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm is well-scalable, especially with Fisher's exact test, and can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
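As an illustration, the following is a minimal sketch of evaluating a single candidate dependency rule X->A on binary data with Fisher's exact test, one of the measures named above; the search over all rules with the pruning machinery is the contribution of the thesis and is not shown, and the planted dependency is purely illustrative.

```python
# Minimal sketch of evaluating one candidate dependency rule X -> A on binary
# data with Fisher's exact test. The global search over all rules with pruning
# is the thesis's contribution and is not reproduced here.
import numpy as np
from scipy.stats import fisher_exact

def rule_p_value(data, x_cols, a_col):
    """One-sided p-value that rows satisfying X (all x_cols == 1) enrich A."""
    x = np.all(data[:, x_cols] == 1, axis=1)
    a = data[:, a_col] == 1
    table = [[np.sum(x & a),  np.sum(x & ~a)],
             [np.sum(~x & a), np.sum(~x & ~a)]]
    _, p = fisher_exact(table, alternative="greater")
    return p

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 5))
data[:, 4] |= data[:, 0] & data[:, 1]          # plant a dependency {0,1} -> 4
print(rule_p_value(data, x_cols=[0, 1], a_col=4))
```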
Abstract:
Cell transition data is obtained from a cellular phone that switches its current serving cell tower. The data consists of a sequence of transition events, which are pairs of cell identifiers and transition times. The focus of this thesis is applying data mining methods to such data, developing new algorithms, and extracting knowledge that will be a solid foundation on which to build location-aware applications. In addition to a thorough exploration of the features of the data, the tools and methods developed in this thesis provide solutions to three distinct research problems. First, we develop clustering algorithms that produce a reliable mapping between cell transitions and the physical locations observed by users of mobile devices. The main clustering algorithm operates in an online fashion, and we also consider a number of offline clustering methods for comparison. Second, we define the concept of significant locations, known as bases, and give an online algorithm for determining them. Finally, we consider the task of predicting the movement of the user based on historical data. We develop a prediction algorithm that considers paths of movement in their entirety, instead of just the most recent movement history. All of the presented methods are evaluated with a significant body of real cell transition data, collected from about one hundred different individuals. The algorithms developed in this thesis are designed to be implemented on a mobile device, and require no extra hardware sensors or network infrastructure. By not relying on external services and by keeping the user information as much as possible on the user's own personal device, we avoid privacy issues and let the users control the disclosure of their location information.
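To make the prediction task concrete, the following is a minimal sketch of predicting the next cell from historical transition sequences by matching the longest recent suffix; it is an illustrative baseline, not the path-based algorithm developed in the thesis.

```python
# Minimal sketch of next-cell prediction from historical cell-transition data:
# match the longest suffix of the recent cell sequence against history and
# return the most frequent follower. This is an illustrative baseline, not the
# path-based prediction algorithm developed in the thesis.
from collections import Counter, defaultdict

class NextCellPredictor:
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.followers = defaultdict(Counter)   # context tuple -> next-cell counts

    def train(self, history):
        """history: list of cell identifiers in visiting order."""
        for i in range(1, len(history)):
            for k in range(1, self.max_order + 1):
                if i - k < 0:
                    break
                context = tuple(history[i - k:i])
                self.followers[context][history[i]] += 1

    def predict(self, recent):
        """Return the predicted next cell, preferring the longest known suffix."""
        for k in range(min(self.max_order, len(recent)), 0, -1):
            context = tuple(recent[-k:])
            if context in self.followers:
                return self.followers[context].most_common(1)[0][0]
        return None

p = NextCellPredictor()
p.train(["A", "B", "C", "D", "A", "B", "C", "E"])
print(p.predict(["B", "C"]))   # ("B", "C") is followed by D and E; tie -> "D"
```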
Abstract:
Usability testing is a productive and reliable method for evaluating the usability of software. Planning and implementing a test and analyzing its results is typically considered time-consuming, and applying usability methods in general is considered difficult. Because of this, usability testing is often given lower priority than more concrete issues in software engineering projects. The intranet Alma is a web service whose users are the students and personnel of the University of Helsinki. Alma was published in 2004 at the opening ceremony of the university. It has 45 000 users, and it replaces several former university network services. In this thesis, the usability of the intranet Alma is evaluated with usability testing. The testing method applied has been lightened to make it as easy as possible to take into use. In the test, six students each tried to solve nine test tasks with Alma. As a result, concrete usability problems were described in the final test report. Goal-orientation was given less importance in the applied usability testing, and the system was tested only with test users from the largest user group. The usability test found general usability problems that occurred regardless of the task or the user. However, further evaluation is needed: in addition to the general usability problems, there are task-dependent problems, the solving of which requires a thorough gathering of the users' goals. In the basic structure and central functionality of Alma, for example in navigation, there are serious and often recurring usability problems. It would be worthwhile to verify the user interface solutions designed for these problems before taking them into use. In the long run, the goals of the users that the software is intended to support are worth gathering, and the software development should be based on these goals.
Abstract:
With the recent increase in interest in service-oriented architectures (SOA) and Web services, developing applications with the Web services paradigm has become feasible. Web services are self-describing, platform-independent computational elements. New applications can be assembled from a set of previously created Web services, which are composed together into a service that uses its components to perform a certain task. This is the idea of service composition. To bring service composition to the mobile phone, I have created Interactive Service Composer for mobile phones. With Interactive Service Composer, the user is able to build service compositions on his mobile phone, consisting of Web services or services that are available on the mobile phone itself. The service compositions are reusable and can be saved in the phone's memory, and previously saved compositions can also be used in new compositions. While developing applications for mobile phones has been possible for some time, the resulting solutions are not as usable as those developed for desktop computers. When developing for mobile phones, the developer has to consider the design decisions more carefully. Because of the limited processing power and memory, the applications cannot perform as well as on desktop PCs. On the other hand, this does not remove the appeal of developing applications for mobile devices.
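To illustrate the composition idea, the following is a minimal sketch in which previously created services are chained into a new, reusable service; the services shown are local stand-ins, not the Web services or phone services handled by Interactive Service Composer.

```python
# Minimal sketch of the service-composition idea: previously created services
# are treated as reusable building blocks and chained so that each one consumes
# the previous one's output. The services below are local stand-ins, not the
# Web services or phone services used by Interactive Service Composer.
from functools import reduce
from typing import Callable

Service = Callable[[dict], dict]

def compose(*services: Service) -> Service:
    """Build a new, reusable service out of existing ones."""
    return lambda request: reduce(lambda data, svc: svc(data), services, request)

# stand-in "services"
def geocode(data: dict) -> dict:
    return {**data, "coords": (60.17, 24.94)}          # pretend lookup of data["city"]

def weather(data: dict) -> dict:
    return {**data, "forecast": f"sunny at {data['coords']}"}

def notify(data: dict) -> dict:
    print(data["forecast"])
    return data

city_weather_alert = compose(geocode, weather, notify)  # a saved, reusable composition
city_weather_alert({"city": "Helsinki"})
```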
Abstract:
Certain software products employing digital techniques for the encryption of data are subject to export controls in the EU Member States pursuant to Community law and the relevant laws of the Member States. These controls are agreed globally in the framework of the so-called Wassenaar Arrangement. Wassenaar is an informal non-proliferation regime aimed at promoting international stability and responsibility in transfers of strategic (dual-use) products and technology. This thesis covers the provisions of Wassenaar, Community export control laws and the export control laws of Finland, Sweden, Germany, France and the United Kingdom. The thesis consists of five chapters. The first chapter discusses the rationale of export control laws and the impact they have on global trade. The rationale is originally defence-related: in general, to prevent potential adversaries of the participating States from having the same tools, and in particular, in the case of cryptographic software, to enable signals intelligence efforts. Increasingly, as the use of cryptography in a civilian context has mushroomed, export restrictions can have negative effects on civilian trade. Information security solutions may also be too weak because of export restrictions on cryptography. The second chapter covers the OECD's Cryptography Policy, which had a significant effect on its member nations' national cryptography policies and legislation. The OECD is a significant organization because it acts as a meeting forum for the most important industrialized nations. The third chapter covers the Wassenaar Arrangement. The Arrangement is covered from the viewpoint of international law and politics. The Wassenaar control list provisions affecting cryptographic software transfers are also covered in detail. Control lists in the EU and in the Member States are usually directly copied from the Wassenaar control lists. Controls agreed in its framework set only a minimum level for the participating States; Wassenaar countries can, however, adopt stricter controls. The fourth chapter covers Community export control law. Export controls are viewed in Community law as falling within the domain of the Common Commercial Policy pursuant to Article 133 of the EC Treaty. Therefore the Community has exclusive competence in export matters, save where a national measure is authorized by the Community or falls under the foreign or security policy derogations established in Community law. The Member States still have a considerable amount of power in the domain of the Common Foreign and Security Policy. They are able to maintain national export controls because export control laws are not fully harmonized. This can also have detrimental effects on the functioning of the internal market and common export policies. In 1995 the EU adopted Dual-Use Regulation 3381/94/EC, which sets common rules for exports in the Member States. The provisions of this regulation receive detailed coverage in this chapter. The fifth chapter covers national legislation and export authorization practices in five different Member States: Finland, Sweden, Germany, France and the United Kingdom. The export control laws of those Member States are covered where the national laws differ from the uniform approach of the Community's acquis communautaire. Keywords: export control, encryption, software, dual-use, license, foreign trade, e-commerce, Internet