973 results for Empirical Bayes Methods
Abstract:
OBJECTIVE: To investigate the prevalence of discontinuation and nonpublication of surgical versus medical randomized controlled trials (RCTs) and to explore risk factors for discontinuation and nonpublication of surgical RCTs. BACKGROUND: Trial discontinuation has significant scientific, ethical, and economic implications. To date, the prevalence of discontinuation of surgical RCTs is unknown. METHODS: All RCT protocols approved between 2000 and 2003 by 6 ethics committees in Canada, Germany, and Switzerland were screened. Baseline characteristics were collected and, if published, full reports retrieved. Risk factors for early discontinuation for slow recruitment and for nonpublication were explored using multivariable logistic regression analyses. RESULTS: In total, 863 RCT protocols involving adult patients were identified, 127 in surgery (15%) and 736 in medicine (85%). Surgical trials were discontinued for any reason more often than medical trials [43% vs 27%, risk difference 16% (95% confidence interval [CI]: 5%-26%); P = 0.001] and were more often discontinued for slow recruitment [18% vs 11%, risk difference 8% (95% CI: 0.1%-16%); P = 0.020]. The percentage of trials not published as a full journal article was similar in surgical and medical trials [44% vs 40%, risk difference 4% (95% CI: -5% to 14%); P = 0.373]. Discontinuation of surgical trials was a strong risk factor for nonpublication (odds ratio = 4.18, 95% CI: 1.45-12.06; P = 0.008). CONCLUSIONS: Discontinuation and nonpublication rates were substantial in surgical RCTs, and trial discontinuation was strongly associated with nonpublication. These findings need to be taken into account when interpreting the surgical literature. Surgical trialists should consider feasibility studies before embarking on full-scale trials.
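As a hedged illustration of the effect sizes reported above, the sketch below recomputes the discontinuation risk difference and a Wald-type 95% confidence interval from the published proportions and group sizes (127 surgical, 736 medical trials); the original analysis may have used a different interval method, so the bounds are only approximately reproduced.

```python
import math

# Reported proportions of trials discontinued for any reason
n_surg, p_surg = 127, 0.43   # surgical trials
n_med,  p_med  = 736, 0.27   # medical trials

# Risk difference (surgical minus medical)
rd = p_surg - p_med

# Wald standard error of the risk difference
se = math.sqrt(p_surg * (1 - p_surg) / n_surg + p_med * (1 - p_med) / n_med)

lo, hi = rd - 1.96 * se, rd + 1.96 * se
print(f"risk difference = {rd:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# Roughly 0.16 with CI (0.07, 0.25); the paper reports 16% (5%-26%),
# consistent up to the choice of interval method and rounding.
```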
Abstract:
Background: In longitudinal studies where subjects experience recurrent events over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take the within-subject correlation into account. Methods: For repeated events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we assess the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the performance of the methods on a real dataset from a cohort study on bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are robust to censoring, worsen with increasing numbers of recurrences, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although these are well developed theoretically.
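To make the distinction between the three models concrete, the hedged sketch below builds the three data layouts they require for one hypothetical subject with two observed events; the column names and toy data are illustrative assumptions, not taken from the paper.

```python
import pandas as pd

# One hypothetical subject followed for 100 days with events at day 20 and day 55
# (purely illustrative; not data from the paper).
events, follow_up = [20, 55], 100

# AG (Andersen-Gill): counting-process intervals, common baseline hazard.
ag = pd.DataFrame({
    "id": 1, "start": [0, 20, 55], "stop": [20, 55, 100],
    "event": [1, 1, 0],
})

# WLW (Wei-Lin-Weissfeld): marginal model, every event number uses total time
# from entry; analysis is stratified by event number.
wlw = pd.DataFrame({
    "id": 1, "event_no": [1, 2, 3], "time": [20, 55, 100],
    "event": [1, 1, 0],
})

# PWP (Prentice-Williams-Peterson): conditional model, a subject is at risk for
# event k only after event k-1; the gap-time version uses time since the
# previous event, stratified by event number.
pwp = pd.DataFrame({
    "id": 1, "event_no": [1, 2, 3],
    "gap_time": [20, 55 - 20, 100 - 55],
    "event": [1, 1, 0],
})

print(ag, wlw, pwp, sep="\n\n")
# In practice each layout is fitted with a Cox model using a robust (sandwich)
# variance clustered on "id", e.g. with R's survival package or Python's lifelines.
```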
Abstract:
Hotels and second home rentals are two of the most important tourist accommodation options in Spain. In terms of seasonality, almost all previous studies have analysed tourism demand from the point of view either of total arrivals or of the number of tourists lodged in a single accommodation type (hotels, rural accommodation, etc.). However, there are no studies focusing on price seasonality or comparing seasonality among different accommodation types. By using seasonality indicators and a price index constructed by means of hedonic methods, this paper aims to shed some light on seasonal pricing patterns among second home rentals and hotels. The paper relies on a 2004 database of 144 hotels and 1,002 apartments on the Costa Brava (northeast Spain). The results show that prices for second home rentals display a smoother seasonal pattern than hotels, due to reduced price differences between shoulder (May and October) and peak (August) periods.
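As a hedged sketch of the kind of hedonic price index the abstract refers to, the code below regresses log prices on month dummies and a few quality attributes with plain least squares, so that the exponentiated month coefficients can be read as a quality-adjusted seasonal price index; the variable names and simulated data are illustrative assumptions, not the Costa Brava dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated accommodation data (illustrative only): month of stay, capacity,
# distance to the beach, and a nightly price with a seasonal peak in August.
month = rng.integers(1, 13, size=n)          # 1..12
capacity = rng.integers(2, 9, size=n)        # beds
dist_beach = rng.uniform(0.1, 5.0, size=n)   # km
season = np.where(month == 8, 0.6, np.where(np.isin(month, [5, 10]), 0.2, 0.0))
log_price = (3.5 + 0.08 * capacity - 0.05 * dist_beach + season
             + rng.normal(scale=0.1, size=n))

# Hedonic regression: log(price) on characteristics plus month dummies
# (January is the reference month).
month_dummies = (month[:, None] == np.arange(2, 13)[None, :]).astype(float)
X = np.column_stack([np.ones(n), capacity, dist_beach, month_dummies])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Quality-adjusted seasonal price index relative to January (= 100).
index = 100 * np.exp(np.concatenate([[0.0], beta[3:]]))
for m, idx in zip(range(1, 13), index):
    print(f"month {m:2d}: index {idx:6.1f}")
```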
Abstract:
The purpose of this study was to define the customer profitability of the case company and to specify the factors that explain customer profitability. The study was carried out with a quantitative research method. The research hypotheses were formulated mainly on the grounds of previous research and were tested with statistical methods. The results showed that customer profitability is not equally distributed among the customers of the case company, and that the majority of its customers are profitable. The explanatory factors for absolute customer profitability were sales volume and the customer's location region. The explanatory factors for relative customer profitability were the customer's location region and the product segment into which a customer can be classified on the basis of the products sold to that customer.
Abstract:
Fatal and permanently disabling accidents form only one per cent of all occupational accidents, but in many branches of industry they account for more than half of the accident costs. Furthermore, the human suffering of the victim and his family is greater in severe accidents than in slight ones. For both human and economic reasons, severe accident risks should be identified before injuries occur. It is for this purpose that different safety analysis methods have been developed. This study presents two new possible approaches to the problem. The first is the hypothesis that it is possible to estimate the potential severity of accidents independently of their actual severity. The second is the hypothesis that when workers are also asked to report near accidents, they are particularly prone to report potentially severe near accidents on the basis of their own subjective risk assessment. A field study was carried out in a steel factory. The results supported both hypotheses. The reliability and validity of post-incident estimates of an accident's potential severity were reasonable. About 10% of accidents were estimated to be potentially critical; they could have led to death or very severe permanent disability. Reported near accidents were significantly more severe: about 60% of them were estimated to be critical. Furthermore, the validity of workers' subjective risk assessment, manifested in the near accident reports, proved to be reasonable. The new methods studied require further development and testing. They could be used both routinely in workplaces and in research for identifying accident risks and setting their priorities.
Abstract:
Mixed methods research involves the combined use of quantitative and qualitative methods in the same research study, and it is becoming increasingly important in several scientific areas. The aim of this paper is to review and compare, through a mixed methods multiple-case study, the application of this methodology in three reputable behavioural science journals: the Journal of Organizational Behavior, Addictive Behaviors and Psicothema. A quantitative analysis was carried out to review all the papers published in these journals during the period 2003-2008 and classify them into two blocks, theoretical and empirical, with the latter further subdivided into three subtypes (quantitative, qualitative and mixed). A qualitative analysis determined the main characteristics of the mixed methods studies identified, in order to describe in more detail the ways in which the two methods are combined in terms of their purpose, priority, implementation and research design. From the selected journals, a total of 1,958 articles were analysed, the majority of which corresponded to empirical studies, with only a small number reporting research that used mixed methods. Nonetheless, mixed methods research does appear in all the behavioural science journals studied within the period selected, showing a range of designs, among which the sequential equal-weight mixed methods research design stands out.
Abstract:
The electroencephalogram (EEG) is one of the most widely used techniques for studying the brain. With this technique, the electrical signals produced in the human cortex are recorded through electrodes placed on the scalp. The technique, however, has some limitations when making recordings; the main one is known as artifacts, which are unwanted signals that mix with the EEG signals. The aim of this master's thesis is to present three new artifact-removal methods that can be applied to EEG. They are based on the application of Multivariate Empirical Mode Decomposition, a new technique used for signal processing. The proposed cleaning methods are applied to simulated EEG data containing artifacts (eye blinks), and once the cleaning procedures have been applied the results are compared with blink-free EEG data to assess the improvement they provide. Subsequently, two of the three proposed cleaning methods are applied to real EEG data. The conclusions drawn from the work are that two of the new proposed cleaning procedures can be used to preprocess real data in order to remove eye blinks.
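As a hedged, simplified sketch of the idea behind EMD-based artifact removal, the code below decomposes a single simulated EEG-like channel with the univariate EMD from the PyEMD package (assumed installed as `EMD-signal`), drops the slowest intrinsic mode functions that capture a blink-like transient, and reconstructs the cleaned signal; the thesis itself uses the multivariate extension (MEMD) across channels, which this sketch does not implement.

```python
import numpy as np
from PyEMD import EMD  # PyPI package "EMD-signal" (assumed installed)

fs = 250                                   # sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)                # 4 s of data
rng = np.random.default_rng(1)

# Simulated EEG-like background activity plus a slow, large blink-like transient.
eeg = 10 * np.sin(2 * np.pi * 10 * t) + 5 * rng.normal(size=t.size)
blink = 80 * np.exp(-((t - 2.0) ** 2) / (2 * 0.08 ** 2))
contaminated = eeg + blink

# Empirical Mode Decomposition into intrinsic mode functions (IMFs).
imfs = EMD().emd(contaminated)
print("number of components:", imfs.shape[0])

# Heuristic sketch: the blink's energy sits mostly in the slowest components,
# so drop the last few and sum the rest to reconstruct a cleaned signal.
n_drop = 2
cleaned = imfs[:-n_drop].sum(axis=0)

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

print("RMS error vs. clean EEG before:", round(rms(contaminated - eeg), 1))
print("RMS error vs. clean EEG after: ", round(rms(cleaned - eeg), 1))
```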
Abstract:
The energy industry has gone through major changes globally in the past two decades. Liberalization of energy markets has led companies to integrate both vertically and horizontally. Growing concern about sustainable development and the aim of decreasing greenhouse gas emissions will increase the share of renewable energy in total energy production. The purpose of this study was to analyse, using statistical methods, what impact different strategic choices have on the performance of the biggest European and North American energy companies. The results show that vertical integration, horizontal integration and the use of renewable energy in production had the largest impact on profitability. An increase in the level of vertical integration decreased companies' profitability, while an increase in horizontal integration improved it. Companies that used renewable energy in production were less profitable than companies not using renewable energy.
Abstract:
The objective of this case study is to provide a Finnish solution provider company with an objective, in-depth analysis of its project-based business and especially of project estimation accuracy. A project and customer profitability analysis is conducted as a complementary addition to describe the profitability of the Case Company's core division. The theoretical framework is constructed on project profitability and customer profitability analysis. Project profitability is approached starting from managing projects, continuing to the project pricing process and concluding with project success. The empirical part of this study describes the Case Company's project portfolio and, by means of quantitative analysis, how the characteristics of a project affect its profitability. The findings indicate that installation methods and technical specifications make a real difference to the project portfolio's estimated and actual profitability when they are scrutinized. The implications for profitability are gathered into a proposed risk assessment tool.
Abstract:
Credit risk assessment is an integral part of banking. Credit risk means that the return will not materialise if the customer fails to fulfil its obligations. A key component of banking is therefore setting acceptance criteria for granting loans. The theoretical part of the study focuses on the key components of banks' credit assessment methods described in the literature for extending credit to large corporations. The main component is the Basel II Accord, which sets regulatory requirements for banks' credit risk assessment methods. The empirical part comprises, as its primary source, an analysis of major Nordic banks' annual reports and risk management reports. As a secondary source, complementary interviews were carried out with senior credit risk assessment personnel. The findings indicate that all major Nordic banks use a combination of quantitative and qualitative information in their credit risk assessment models when extending credit to large corporations. The relative weight of qualitative information depends on the selected approach to the credit rating, i.e. point-in-time or through-the-cycle.
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for stiff systems. The second is to decrease the stiffness of the model itself by introducing models and algorithms that either decrease the highest eigenvalues or eliminate them by introducing steady-state solutions for the stiff parts of the models. This thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. Two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative modelling and numerical simulation methods are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, an actuator is driven against an end stop, or an external force makes an actuator switch direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop approaches zero, since the first derivative of flow with respect to the pressure drop approaches infinity. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitesimally small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this is investigated for the two-regime orifice flow model. Especially inside many types of valves, as well as between them, there exist very small volumes.
The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a serious weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms are used for solving the ordinary differential equations (ODEs), stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be (fluid volume divided by effective bulk modulus) of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by small volumes are completely avoided. The method is also freely applicable regardless of the integration routine used. A further strength of both methods is that they are suited for use together with a semi-empirical modelling approach which does not necessarily require any geometrical data of the valves and actuators being modelled; most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and presents several numerical examples demonstrating how they improve the dynamic simulation of various hydraulic circuits.
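As a hedged illustration of the two-regime orifice idea described above, the sketch below replaces the square-root turbulent flow law below a chosen transition pressure with an odd cubic polynomial whose value and first derivative match the turbulent model at the transition point, which keeps dQ/dΔp finite at zero pressure drop; the parameterization, transition pressure and parameter values are illustrative assumptions, not the ones used in the dissertation.

```python
import numpy as np

# Turbulent orifice flow: Q = Cq * A * sqrt(2 * |dp| / rho) * sign(dp).
# Illustrative parameters (not from the dissertation).
Cq, A, rho = 0.6, 1e-5, 860.0          # discharge coeff., area [m^2], density [kg/m^3]
K = Cq * A * np.sqrt(2.0 / rho)        # so that Q = K * sqrt(dp) for dp >= 0
p_tr = 1e5                             # transition pressure drop [Pa]

# Odd cubic q(dp) = c1*dp + c3*dp**3 used for |dp| < p_tr, chosen so that the
# value and first derivative match K*sqrt(dp) at dp = p_tr:
#   c1 = 5K / (4*sqrt(p_tr)),  c3 = -K / (4*p_tr**2.5)
c1 = 5.0 * K / (4.0 * np.sqrt(p_tr))
c3 = -K / (4.0 * p_tr ** 2.5)

def orifice_flow(dp):
    """Two-regime orifice flow: cubic near zero, square-root law elsewhere."""
    dp = np.asarray(dp, dtype=float)
    turbulent = np.sign(dp) * K * np.sqrt(np.abs(dp))
    cubic = c1 * dp + c3 * dp ** 3
    return np.where(np.abs(dp) < p_tr, cubic, turbulent)

# The slope at dp = 0 is the finite value c1, whereas the pure square-root law
# has an infinite slope there; the two branches agree in value and slope at p_tr.
for dp in (0.0, 0.5 * p_tr, p_tr, 4 * p_tr):
    print(f"dp = {dp:9.0f} Pa -> Q = {float(orifice_flow(dp)):.3e} m^3/s")
```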
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages these methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how such techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from the biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
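To make the pairwise least-squares idea concrete, the hedged sketch below implements a dense, linear version of a RankRLS-style objective: it minimizes the regularized sum of squared pairwise ranking errors, which reduces to a single linear system involving the scaled centering matrix L = nI - 11^T; the kernelized variants and the computational shortcuts developed in the thesis are not reproduced here, and the toy data is an illustrative assumption.

```python
import numpy as np

def rankrls_linear(X, y, lam=1.0):
    """Linear RankRLS-style ranker.

    Minimizes sum_{i<j} ((y_i - y_j) - (w.x_i - w.x_j))**2 + lam * ||w||**2,
    which has the closed-form solution (X^T L X + lam I) w = X^T L y
    with L = n*I - 1*1^T (a scaled centering matrix).
    """
    n, d = X.shape
    L = n * np.eye(n) - np.ones((n, n))
    A = X.T @ L @ X + lam * np.eye(d)
    b = X.T @ L @ y
    return np.linalg.solve(A, b)

def pairwise_accuracy(scores, y):
    """Fraction of correctly ordered pairs among pairs with distinct labels."""
    correct = total = 0
    n = len(y)
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] == y[j]:
                continue
            total += 1
            correct += (scores[i] - scores[j]) * (y[i] - y[j]) > 0
    return correct / total

# Toy ranking data (illustrative): relevance increases with the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=200)

w = rankrls_linear(X[:150], y[:150], lam=10.0)
print("test pairwise accuracy:", round(pairwise_accuracy(X[150:] @ w, y[150:]), 3))
```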
Abstract:
The aim of this master's thesis is to study how an Agile method (Scrum) and open source software are utilized to produce software for a flagship product in a complex production environment. The empirical case and the artefacts used are taken from the Nokia MeeGo N9 product program and from the related software program, called Harmattan. The single research case is analysed using a qualitative method. Grounded Theory principles are utilized, first, to identify all the related concepts from the artefacts; second, these concepts are analysed and finally categorized into a core category and six supporting categories. The result is formulated as a description of how software practices can operate in circumstances where the accountable software development teams and the related context accept an open source software nature as part of the business vision and the whole organization supports Agile methods.
Abstract:
There are vast changes in the work environment, and the traditional rules and management methods might no longer be suitable for today's employees. The meaning of work is also changing as younger and more highly educated generations enter the labour market. Old customs need to be re-validated and new approaches should be taken into use. This paper strongly emphasizes the importance of happiness research and happiness at work. Values towards the meaning of work are changing; people demand happiness and quality in all aspects of their lives. The aim of this study is to define happiness, especially at work, and to explain how it can be measured and what kind of results can be achieved. I also want to find out how the content of work and the working environment might enhance happiness. The correlation between education and happiness is discussed and examined. I am aware that the findings and theories concentrate mainly on Western countries and highlight the values and work environments of those societies. The main aim of the empirical study is to find out whether there are connections between happiness and work in data collected by the World Value Survey in 2005, and whether profession has an effect on happiness. Other factors such as the correlation of age, sex, education and income are examined too. I also want to find out what kind of values people have towards work and how these affect happiness levels. The focus is on two nations: Finland (N=1014) and Italy (N=1012). I have also included a global comparison covering all 54 countries (N=66,566) in the 5th wave (2005-2008) of the World Value Survey. The results suggest that people are generally happy around the world, with happiness decreasing with age, the educated being happier than the uneducated and the employed happier than the unemployed. People working in neat "white collar" jobs are more likely to be happy than those working in factories or outdoors. Money makes us happier, up to a certain level. Work is important to people, and the importance of work adds to happiness. Work is also highly appreciated, but there are more happy people among those who do not appreciate work that highly. Safety matters most when looking for a job, and there are more happy people among those who rank the importance of work first when looking for a job than among those to whom income is the most important aspect. People are more likely to be happy when the quality of work is high, that is, when their job consists of creative and cognitive tasks and when they have a feeling of independence.
Abstract:
Corporate events as an effective part of a marketing communications strategy seem to be underestimated in Finnish companies. In the rest of Europe and in the USA, investments in events are increasing, and their share of the marketing budget is significant. The growth of the industry may be explained by the numerous advantages and opportunities that events provide for attendees, such as face-to-face marketing, enhancing corporate image, building relationships, increasing sales, and gathering information. In order to maximize these benefits and the return on investment, specific measurement strategies are required, yet there seems to be a lack of understanding of how event performance should be perceived or evaluated. To address this research gap, this study attempts to describe the perceptions of and strategies for evaluating corporate event performance in the Finnish events industry. First, corporate events are discussed in terms of definitions, characteristics, typologies, and their role in marketing communications. Second, different theories on evaluating corporate event performance are presented and analyzed. Third, a conceptual model is presented based on the literature review, which serves as the basis for the empirical research conducted as an online questionnaire. The empirical findings are to a great extent in line with the existing literature, suggesting that there remains a lack of understanding of corporate event performance evaluation, and that challenges arise in determining appropriate measurement procedures for it. Setting clear objectives for events is a significant aspect of the evaluation process, since the outcomes of events are usually evaluated against the preset objectives. The respondent companies utilize many of the individual techniques recognized in the theory, such as counting the number of sales leads and delegates. However, some of the measurement tools may require further investments and resources, thus restricting their application, especially in smaller companies. In addition, there seems to be a lack of knowledge of the most appropriate methods in different contexts, taking into account the characteristics of the organizing party as well as the size and nature of the event. The lack of in-house expertise increases the need for third-party service providers in solving problems of corporate event measurement.