973 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
Abstract:
In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interference. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
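To make the direct, position-domain formulation concrete, here is a hedged toy sketch in Python of a Sequential Monte Carlo style search that maximizes a position likelihood directly. The planar satellite geometry, the Gaussian range likelihood, and all tuning constants are invented placeholders for illustration; this is not the letter's signal model or algorithm.

```python
import numpy as np

# Toy sketch: particle-style (Sequential Monte Carlo) maximization of a
# position likelihood. Geometry, noise level, and constants are invented.
rng = np.random.default_rng(0)
sats = np.array([[20e6, 5e6], [-15e6, 18e6], [3e6, -22e6], [-8e6, -12e6]])  # 2D "satellites" (m)
true_pos = np.array([1200.0, -800.0])
sigma = 30.0                                                                # range noise std (m)
ranges = np.linalg.norm(sats - true_pos, axis=1) + rng.normal(0, sigma, len(sats))

def log_lik(pos):
    """Gaussian log-likelihood of the observed ranges given a candidate position."""
    pred = np.linalg.norm(sats - pos, axis=1)
    return -0.5 * np.sum((ranges - pred) ** 2) / sigma**2

# Iterative sample / weight / resample / jitter loop (a population Monte Carlo
# style optimizer for the multivariate ML problem, rather than a grid search).
particles = rng.normal(0.0, 2e3, size=(500, 2))
for it in range(25):
    logw = np.array([log_lik(p) for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx] + rng.normal(0, 300.0 * 0.8**it, particles.shape)

print("ML position estimate:", particles.mean(axis=0))
print("true position       :", true_pos)
```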
Abstract:
The objective of the research was to understand the success factors of the Danish energy service industry. The phenomenon has been studied extensively, but the aim here was to examine it from the service logic point of view. The research was threefold, examining the phenomenon at the company, industry, and national levels. The purpose of the multi-level study was to understand all the success factors and to examine how they are combined. First, the research problem was approached through a literature review. After that, the empirical part of the study was conducted as a case study, and the data were collected through thematic interviews. The collected data were analyzed from a theoretical point of view and compared with earlier studies. This study shows that the most important success factor was the country itself, because it has affected the other aspects of success. Because the actors in the industry are tightly linked, communication and a common understanding of the business are essential to the industry's success. The new energy technologies do not directly produce added value for customers. This has shifted the energy business toward service business, and customers have been included in the value creation process.
Abstract:
Extant research on exchange-listed firms has acknowledged that the concentration of ownership and the identity of owners make a difference. In addition, studies indicate that firms with a dominant owner outperform firms with dispersed ownership. During the last few years, scholars have identified one group of owners in particular whose ownership stake in publicly listed firms is positively related to performance: the business family. While acknowledging that family firms represent a unique organizational form, scholars have drawn on various concepts and theories in order to understand how the family influences organizational processes and firm performance. Despite the multitude of research, scholars have not been able to present clear results on how firm performance is actually affected by the family. In other words, studies comparing the performance of listed family and other types of firms have remained descriptive in nature, since they lack empirical data and confirmation from family business representatives. What seems to be missing is a convincing theory that links family involvement to its behavioral consequences. Accordingly, scholars have not yet come to a mutual understanding of what precisely constitutes a family business. The variety of definitions and theories has, for instance, made it difficult to compare results across studies. These two issues have hampered the development of a rigorous theory of family business. The overall objective of this study is to describe and understand how the family, as a dominant owner, can enhance firm performance and act as a source of sustainable success in listed companies. In more detail, in order to develop an understanding of the unique factors that can act as competitive advantages for listed family firms, this study is based on a qualitative approach and aims at theory development, not theory verification. The data in this study consist of 16 thematic interviews with CEOs, board members, supervisory board chairs, and founders of Finnish listed family firms. The study consists of two parts. The first part introduces the research topic, research paradigm, methods, and publications, and also discusses the overall outcomes and contributions of the publications. The second part consists of four publications that address the research questions from different viewpoints. The analyses of this study indicate that family ownership in listed companies represents a structure that differs from the traditional views of agency and stewardship, as well as from resource-based and stakeholder views. As opposed to these theories and to shareholder capitalism, which consider humans individualistic, opportunistic, and self-serving, and assume that an investor's behavior is driven by incentives and motivations to maximize private profit, family owners form a collective social unit that is motivated to act together toward a mutual purpose or benefit. In addition, it is the socio-emotional and psychological elements of ownership, rather than its legal and financial dimensions, that define the family members as owners. That is, the family's collective psychological ownership of the business (F-CPO) can be seen as a construct that comprehensively captures the fusion between the family and the business. Moreover, it captures the realized, rather than merely potential, family influence on and interaction with the business; it thereby brings more theoretical clarity to the nature of the fusion between the family and the business and offers a solution to the problem of defining the family business. This doctoral dissertation provides academics, policy-makers, family business practitioners, and society at large with many implications regarding family-business relationships.
Abstract:
University of Turku, Faculty of Medicine, Department of Cardiology and Cardiovascular Medicine, Doctoral Programme of Clinical Investigation, Heart Center, Turku University Hospital, Turku, Finland; Division of Internal Medicine, Department of Cardiology, Seinäjoki Central Hospital, Seinäjoki, Finland; Heart Center, Satakunta Central Hospital, Pori, Finland; Annales Universitatis Turkuensis, Painosalama Oy, Turku, Finland, 2015.
Antithrombotic therapy during and after coronary procedures always entails the challenge of balancing bleeding and thrombotic complications. Patients on long-term warfarin therapy have generally been advised to discontinue warfarin a few days prior to elective coronary angiography or intervention to prevent bleeding complications. Bridging therapy with heparin is recommended for patients at an increased risk of thromboembolism who require the interruption of anticoagulation for elective surgery or an invasive procedure. In study I, consecutive patients on warfarin therapy referred for diagnostic coronary angiography were compared to control patients with a similar disease presentation but without warfarin. The strategy of performing coronary angiography during uninterrupted therapeutic warfarin anticoagulation appeared to be a relatively safe alternative to bridging therapy, provided the international normalized ratio was not at a supratherapeutic level. In-stent restenosis remains an important cause of long-term failure after a percutaneous coronary intervention (PCI). Drug-eluting stents (DES) reduce the problem of restenosis inherent to bare metal stents (BMS). However, a longer delay in arterial healing may extend the risk of stent thrombosis (ST) far beyond 30 days after DES implantation. Early discontinuation of antiplatelet therapy has been the most important predisposing factor for ST. In study II, patients on long-term oral anticoagulation (OAC) underwent DES or BMS stenting, with a median follow-up of 3.5 years. The selective use of DESs with a short triple therapy seemed to be safe in OAC patients, since late STs were rare even without long clopidogrel treatment. Major bleeding and cardiac events were common in this patient group irrespective of stent type. In order to help predict the bleeding risk in patients on OAC, several bleeding risk scores have been developed. Risk scoring systems have also been used in the setting of patients undergoing a PCI. In study III, the predictive value of an outpatient bleeding risk index (OBRI) for identifying patients at high risk of bleeding was analysed. The bleeding risk did not seem to modify periprocedural or long-term treatment choices in patients on OAC after a percutaneous coronary intervention. Patients with a high OBRI often had major bleeding episodes, and the OBRI may be suitable for risk evaluation in this patient group. Optical coherence tomography (OCT) is a novel technology for intravascular imaging of coronary arteries. OCT is a light-based imaging modality with an axial tissue resolution of 12–18 µm that can visualize plaques, possible dissections, and thrombi, as well as stent strut apposition and coverage, and can measure the vessel lumen and lesions. In study IV, 30 days after titanium-nitride-oxide (TITANOX)-coated stent implantation, the binary stent strut coverage was satisfactory and the prevalence of malapposed struts was low, as evaluated by OCT. Long-term clinical events in patients treated with TITANOX-coated bio-active stents (BAS) and paclitaxel-eluting stents (PES) in routine clinical practice were examined in study V. At the 3-year follow-up, BAS resulted in a better long-term outcome than PES, with an infrequent need for target vessel revascularization.
Keywords: anticoagulation, restenosis, thrombosis, bleeding, optical coherence tomography, titanium
Abstract:
The costs of health care are going up in many countries. In order to provide affordable and effective health care solutions, new technologies and approaches are constantly being developed. In this research, video games are presented as a possible solution to the problem. Video games are fun, and nowadays most people like to spend time on them. In addition, recent studies have pointed out that video games can have notable health benefits. Health games have already been developed, used in practice, and researched. However, the bulk of health game studies have been concerned with the design or the effectiveness of the games; no actual business studies have been conducted on the subject, even though health games often lack commercial success despite their health benefits. This thesis seeks to fill this gap. The specific aim of this thesis is to develop a conceptual business model framework and to use it empirically in exploratory research on the business models of medical games. In the first stage of this research, a literature review was conducted and the existing literature analyzed and synthesized into a conceptual business model framework consisting of six dimensions. The motivation behind the synthesis is the ongoing ambiguity around the business model concept. In the second stage, 22 semi-structured interviews were conducted with different professionals within the value network for medical games. The business model framework was present in all stages of the empirical research: first, in the data collection stage, the framework acted as a guiding instrument, focusing the interview process; then, the interviews were coded and analyzed using the framework as a structure; and the results were reported following the structure of the framework. In the results, the interviewees highlighted several important considerations and issues for medical games concerning the six dimensions of the business model framework. Based on the key findings of this research, several key components of business models for medical games were identified and illustrated in a single figure. Furthermore, five notable challenges for business models for medical games were presented, and possible solutions for the challenges were postulated. Theoretically, these findings provide pioneering information on the hitherto unexplored subject of business models for medical games. Moreover, the conceptual business model framework and its use in the novel context of medical games provide a contribution to the business model literature. Regarding practice, this thesis further emphasizes that medical games can offer notable benefits to several stakeholder groups and offers advice to companies seeking to commercialize these games.
Abstract:
Solid state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as ¹³C–¹³C correlation spectroscopy under magic-angle-spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
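As a rough illustration of the regularization strategy described above, the following Python sketch applies Tikhonov regularization with a Laplacian penalty to a small, deliberately underdetermined linear inverse problem on a 2D grid. The forward matrix, grid size, noise level, and regularization parameter are assumed placeholders, not the CTDQFD kernel or the thesis's numerical procedure.

```python
import numpy as np

# Sketch: Tikhonov regularization of an underdetermined problem d = A g + noise,
# where g lives on a 2D (phi, psi) grid and the penalty is ||Laplacian(g)||^2
# rather than ||g||^2.
rng = np.random.default_rng(1)
n = 16                      # grid is n x n, so 256 unknowns
N = n * n
M = 40                      # only 40 measurements -> underdetermined

# Hypothetical forward kernel (stand-in for the real MAS/CTDQFD response).
A = rng.normal(size=(M, N))
g_true = np.exp(-((np.arange(n)[:, None] - 5) ** 2 + (np.arange(n)[None, :] - 10) ** 2) / 8.0)
d = A @ g_true.ravel() + rng.normal(0, 0.05, M)

# 2D discrete Laplacian (Dirichlet boundaries) as a dense matrix.
L = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        k = i * n + j
        L[k, k] = -4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                L[k, ii * n + jj] = 1.0

# Solve min ||A g - d||^2 + lam ||L g||^2 as one stacked least-squares problem.
lam = 1e-2
A_aug = np.vstack([A, np.sqrt(lam) * L])
d_aug = np.concatenate([d, np.zeros(N)])
g_hat = np.linalg.lstsq(A_aug, d_aug, rcond=None)[0].reshape(n, n)

print("relative error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```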
Abstract:
This study examined high school student perceptions of discretion utilized by educators in high school disciplinary proceedings. Using a sample of 6 high school students who had experienced differing levels of formal discipline, the study investigated the discretionary factors that influence an educator's decision making. The study was a generic qualitative study where the primary source of data collection was open-ended interviews to ensure the integrity of the research as a study of student voices and perceptions. Journaling was also employed to record observations and to identify researcher assumptions. The data were analyzed employing aspects of a grounded theory approach. The findings were coded to reveal 5 areas high school students identified in relation to discipline and discretion: punitive discipline versus problem resolution, effective processes, educator discretion, student discretion, and the student-educator relationship. The final discussion highlights the need for a community vision for high school discipline in order to channel discretion and to uphold students' best interests. Restorative justice is proposed as a feasible vision for high school discipline, whereby participants' responses are measured against a restorative paradigm.
Abstract:
A major asset of organizations is their capacity to create and exploit information and knowledge, a capacity determined among other things by information behaviors. Responsible for strategic, tactical, and operational decisions, middle managers are at the heart of the knowledge creation process, and their information behaviors must be supported by information systems. However, their information behaviors are poorly documented. This research deals with modeling the information behaviors of middle managers in a municipal organization. More specifically, it examines how these managers meet their current information needs in the context of their management activities, that is, in their information use environment. The study answers the following research questions: (1) What are the problem situations faced by municipal middle managers? (2) What information needs do municipal middle managers express in problem situations? (3) What information sources support the information behaviors of municipal middle managers? This descriptive research follows a qualitative approach. The 21 middle managers who participated in the study come from two boroughs of a Quebec municipality merged in 2002. Data were collected through in-depth face-to-face interviews with and direct observation of these managers, together with the collection of relevant documentation. The critical incident is used both as a data collection technique and as the unit of analysis. The collected data were subjected to a qualitative content analysis based on grounded theory. The results indicate that the management roles proposed in the literature for senior managers also apply to middle managers, although the advisory role stands out as specific to the latter. Middle managers have management responsibilities at the three levels of intervention, operational, tactical, and strategic, although they work mostly at the tactical level. The problem situations they handle fall within an information use environment made up of the following components: their management roles and responsibilities, and the organizational context specific to a municipality undergoing transformation. Middle managers dealt with more new situations than recurrent ones, characterized by topics mainly concerning material and real-estate resources or legal, regulatory, and normative matters. They mostly expressed needs for information of a processual and contextual nature. To meet these needs, they consulted more verbal than documentary sources, even though the number of the latter remained high, and they preferred to use internal information sources. On the theoretical level, the information behavior model proposed for municipal middle managers enriches the main components of the general model of information use (Choo, 1998) and of the information use environment model (Taylor, 1986, 1991). The study also helps to clarify the concepts of "user" and "information use".
On the practical level, the research helps in the design of information retrieval systems adapted to the needs of municipal middle managers, and helps assess the contribution of archival information systems to the management of organizational memory.
Abstract:
Empirical evidence suggests that ambiguity is prevalent in insurance pricing and underwriting, and that often insurers tend to exhibit more ambiguity than the insured individuals (e.g., [23]). Motivated by these findings, we consider a problem of demand for insurance indemnity schedules, where the insurer has ambiguous beliefs about the realizations of the insurable loss, whereas the insured is an expected-utility maximizer. We show that if the ambiguous beliefs of the insurer satisfy a property of compatibility with the non-ambiguous beliefs of the insured, then there exist optimal monotonic indemnity schedules. By virtue of monotonicity, no ex-post moral hazard issues arise at our solutions (e.g., [25]). In addition, in the case where the insurer is either ambiguity-seeking or ambiguity-averse, we show that the problem of determining the optimal indemnity schedule reduces to that of solving an auxiliary problem that is simpler than the original one in that it does not involve ambiguity. Finally, under additional assumptions, we give an explicit characterization of the optimal indemnity schedule for the insured, and we show how our results naturally extend the classical result of Arrow [5] on the optimality of the deductible indemnity schedule.
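For readers unfamiliar with Arrow's classical benchmark that the paper extends, the following hedged Python sketch numerically compares a deductible indemnity with a proportional one of equal expected cost for an expected-utility maximizer, with no insurer ambiguity. The loss distribution, utility, and premium loading are arbitrary choices for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative check of Arrow's classical result: for a risk-averse expected-utility
# maximizer and a fixed premium, the deductible indemnity I(x) = max(x - d, 0)
# (weakly) beats proportional coverage with the same expected indemnity.
rng = np.random.default_rng(2)
losses = rng.gamma(shape=2.0, scale=500.0, size=100_000)    # hypothetical loss distribution
w0, loading, target_EI = 5_000.0, 0.2, 300.0                # wealth, loading, target E[I]
premium = (1 + loading) * target_EI                         # same premium for both contracts

def u(w):                       # concave (log) utility, clipped away from zero
    return np.log(np.maximum(w, 1e-9))

# Deductible level d such that E[(X - d)+] = target_EI.
d = brentq(lambda dd: np.mean(np.maximum(losses - dd, 0.0)) - target_EI, 0.0, 20_000.0)
# Proportional coverage alpha such that E[alpha * X] = target_EI.
alpha = target_EI / losses.mean()

eu_deductible = np.mean(u(w0 - premium - np.minimum(losses, d)))
eu_proportional = np.mean(u(w0 - premium - (1 - alpha) * losses))
print(f"d = {d:.1f}, alpha = {alpha:.3f}")
print("EU deductible   :", eu_deductible)
print("EU proportional :", eu_proportional)   # should be (weakly) lower
```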
Abstract:
In this paper we investigate the problem of cache resolution in a mobile peer-to-peer ad hoc network. In our vision, cache resolution should satisfy the following requirements: (i) it should result in low message overhead, and (ii) the information should be retrieved with minimum delay. In this paper, we show that these goals can be achieved by splitting the one-hop neighbours into two sets based on the transmission range. The proposed approach reduces the number of messages flooded into the network to find the requested data. This scheme is fully distributed and comes at very low cost in terms of cache overhead. The experimental results are promising with respect to the metrics studied.
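The paper's exact split rule is not reproduced here, but the idea of partitioning one-hop neighbours by transmission range and querying the nearer set first can be sketched as follows; the half-range threshold, the node model, and the message count are illustrative assumptions only.

```python
# Hypothetical sketch: split one-hop neighbours into an inner and an outer set
# relative to the transmission range, and query the inner set before flooding
# the request to the outer set.
TX_RANGE = 100.0          # metres (illustrative)
SPLIT_FACTOR = 0.5        # inner set = neighbours closer than half the range (assumed rule)

class Neighbour:
    def __init__(self, node_id, distance, cache):
        self.node_id = node_id
        self.distance = distance  # e.g. estimated from received signal strength
        self.cache = cache        # set of data item ids this node caches

def resolve(item, neighbours):
    """Return (node_id, messages_sent) for the first neighbour caching `item`."""
    inner = [n for n in neighbours if n.distance <= SPLIT_FACTOR * TX_RANGE]
    outer = [n for n in neighbours if n.distance > SPLIT_FACTOR * TX_RANGE]
    messages = 0
    for phase in (inner, outer):      # ask the cheap, nearby set first
        for n in phase:
            messages += 1
            if item in n.cache:
                return n.node_id, messages
    return None, messages             # not cached by any one-hop neighbour

neighbours = [
    Neighbour("a", 20.0, {"item1"}),
    Neighbour("b", 45.0, set()),
    Neighbour("c", 80.0, {"item7"}),
]
print(resolve("item7", neighbours))   # ('c', 3): found in the outer set
print(resolve("item1", neighbours))   # ('a', 1): resolved without touching the outer set
```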
Abstract:
The super-resolution problem is an inverse problem and refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing the degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image of an image captured using a LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed and is shown to be better than conventional wavelet transform based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, called the directionlet transform, are developed to convert a low-resolution image of small size into a high-resolution image of large size. The super-resolution algorithm not only increases the size, but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing effects are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, hence a lifting scheme is used for the implementation of directionlets. The new single image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used. A study is conducted to find the effect of different wavelets on the single image super-resolution method. Finally, this new method, implemented on greyscale images, is extended to colour images and noisy images.
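As a minimal, hedged illustration of the wavelet upsampling step that such methods build on, the sketch below performs classical wavelet zero-padding upscaling with the PyWavelets package: the LR image is treated as the approximation (LL) band of the unknown HR image and the inverse DWT is applied with empty detail bands. This baseline is not the thesis's learning-based or directionlet-based method.

```python
import numpy as np
import pywt

# Baseline sketch only: wavelet zero-padding 2x upscaling. The missing detail
# bands are exactly what a learning-based method would try to estimate.
def wavelet_zero_padding_sr(lr, wavelet="haar"):
    zeros = np.zeros_like(lr, dtype=float)
    # LL = scaled LR image; LH, HL, HH set to zero (the information to be learned).
    hr = pywt.idwt2((2.0 * lr.astype(float), (zeros, zeros, zeros)), wavelet)
    return np.clip(hr, 0, 255)

lr = (np.random.default_rng(3).random((64, 64)) * 255).astype(np.uint8)
hr = wavelet_zero_padding_sr(lr)
print(lr.shape, "->", hr.shape)   # (64, 64) -> (128, 128)
```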
Abstract:
We deal with the numerical solution of heat conduction problems featuring steep gradients. In order to solve the associated partial differential equation, a finite volume technique is used and unstructured grids are employed. A discrete maximum principle for triangulations of Delaunay type is developed. To capture thin boundary layers incorporating steep gradients, an anisotropic mesh adaptation technique is implemented. Computational tests are performed for an academic problem with a known exact solution, as well as for a real-world problem: a computer simulation of the thermoregulation of premature infants.
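A minimal sketch of the cell-centred finite volume idea, reduced to one dimension with a locally refined grid near a hot wall, is given below; the study itself works on unstructured Delaunay meshes with anisotropic adaptation, so everything here (grid, boundary data, time stepping) is an illustrative stand-in.

```python
import numpy as np

# 1D cell-centred finite volume sketch for the heat equation with a graded
# mesh refined near x = 0, where a hot boundary produces a steep gradient.
x_faces = np.concatenate([np.linspace(0.0, 0.1, 41), np.linspace(0.1, 1.0, 20)[1:]])
x_c = 0.5 * (x_faces[:-1] + x_faces[1:])      # cell centres
dx = np.diff(x_faces)                          # cell volumes (lengths in 1D)

k, T_left, T_right = 1.0, 100.0, 0.0           # conductivity and wall temperatures
T = np.zeros_like(x_c)
dt = 0.4 * dx.min() ** 2 / k                   # explicit stability limit (with margin)

for _ in range(20000):
    # Fluxes at interior faces: -k * dT/dx using centre-to-centre distances.
    flux = -k * np.diff(T) / np.diff(x_c)
    # Boundary fluxes from the fixed-temperature walls.
    f_left = -k * (T[0] - T_left) / (x_c[0] - x_faces[0])
    f_right = -k * (T_right - T[-1]) / (x_faces[-1] - x_c[-1])
    all_flux = np.concatenate([[f_left], flux, [f_right]])
    T -= dt * np.diff(all_flux) / dx           # conservative finite volume update

print("temperature near the hot wall:", np.round(T[:5], 2))
```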
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines (Vapnik, 1982) technique, and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which are about regression, and the different problem of pattern classification.
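The "regularization network" form referred to above can be illustrated with a short Python sketch: the regression function is a kernel expansion centred at the data with coefficients from a ridge-type linear system. A Gaussian kernel is used here as an assumed stand-in for the generalized correlation function; the subsequent SVM/basis-pursuit sparsification step is only noted in a comment.

```python
import numpy as np

# Regularization network sketch: f(x) = sum_i c_i K(x, x_i), with coefficients
# from (K + lambda I) c = y. A Gaussian kernel stands in for the generalized
# correlation function; an SVM or basis pursuit step would then sparsify c.
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-3, 3, 60))
y = np.sin(2 * x) + 0.1 * rng.normal(size=x.size)

def kernel(a, b, width=0.5):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

lam = 1e-2
K = kernel(x, x)
c = np.linalg.solve(K + lam * np.eye(x.size), y)

x_test = np.linspace(-3, 3, 7)
f_test = kernel(x_test, x) @ c
print(np.round(f_test, 3))
print(np.round(np.sin(2 * x_test), 3))   # the regression should track this
```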
Abstract:
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples -- in particular the regression problem of approximating a multivariate function from sparse data. We present both formulations in a unified framework, namely in the context of Vapnik's theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics.
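As a hedged toy illustration of the two families being unified, the sketch below fits kernel ridge regression (a regularization network with squared loss) and epsilon-SVR with the same RBF kernel on the same data using scikit-learn; the hyperparameters are arbitrary demo values, not anything prescribed by the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

# Same kernel, different loss: squared loss (regularization network) versus the
# epsilon-insensitive loss (SVM regression), which also yields a sparse expansion.
rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=80)

rn = KernelRidge(kernel="rbf", gamma=1.0, alpha=1e-2).fit(X, y)
svr = SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=0.05).fit(X, y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print("kernel ridge:", np.round(rn.predict(X_test), 3))
print("SVR         :", np.round(svr.predict(X_test), 3))
print("SVR support vectors:", len(svr.support_), "of", len(X))  # sparsity from the eps-tube
```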