948 results for semi-empirical methods
Abstract:
Decarbonization of maritime transport requires immediate action. In the short term, ship weather routing can provide greenhouse gas emission reductions, even for existing ships and without retrofitting them. Weather routing is based on making optimal use of both environmental information and knowledge about vessel seakeeping and performance. Combining them at a state-of-the-art level and making use of path planning in realistic conditions can be challenging. To address these topics in an open-source framework, this thesis led to the development of a new module called bateau, and to its combination with the ship routing model VISIR. bateau includes both hull geometry and propulsion modelling for various vessel types. It has two objectives: to predict the sustained speed in a seaway and to estimate the CO2 emission rate during the voyage. Various semi-empirical approaches were used in bateau to predict the ship's hydro- and aerodynamic resistance in both head and oblique seas. Assuming that the ship sails at a constant engine load, the involuntary speed loss due to waves was estimated. This thesis also attempted to clarify the role played by the actual representation of the sea state. In particular, the influence of the wave steepness parameter was assessed. For dealing with ships with a larger superstructure, the wind added resistance was also estimated. Numerical experiments via bateau were conducted for a medium-size and a large-size containership, a bulk carrier, and a tanker. The simulations of optimal routes were carried out for a feeder containership during voyages in the North Indian Ocean and in the South China Sea. Least-CO2 routes were compared to the least-distance ones, assessing the relative CO2 savings. Analysis fields from the Copernicus Marine Service were used in the numerical experiments.
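The sustained-speed estimate described above amounts to balancing the power required to overcome total resistance against the power available at a constant engine load. A minimal sketch of that balance, with entirely hypothetical resistance coefficients and efficiency (not the semi-empirical models used in bateau):

```python
def total_resistance(v, wave_added):
    """Total resistance [N] at speed v [m/s]: a quadratic calm-water term
    plus a wave added-resistance term (both values are placeholders)."""
    return 2.0e3 * v**2 + wave_added

def sustained_speed(p_brake, wave_added=0.0, eta=0.65):
    """Bisection for the speed at which required power R(v)*v equals the
    power delivered at constant engine load (eta = overall efficiency)."""
    p_avail = eta * p_brake
    v_lo, v_hi = 0.0, 30.0
    for _ in range(60):
        v_mid = 0.5 * (v_lo + v_hi)
        if total_resistance(v_mid, wave_added) * v_mid > p_avail:
            v_hi = v_mid
        else:
            v_lo = v_mid
    return 0.5 * (v_lo + v_hi)

v_calm = sustained_speed(5.0e6)                       # calm water, 5 MW brake power
v_seaway = sustained_speed(5.0e6, wave_added=4.0e4)   # with wave added resistance
speed_loss = v_calm - v_seaway                        # involuntary speed loss [m/s]
```

The same root-finding structure applies whatever resistance model is substituted; the added resistance simply shifts the balance point to a lower speed.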
Abstract:
The scope of the thesis is to broaden knowledge about axially loaded pipe piles, which can serve as foundations for offshore wind turbines based on jacket structures. The goal was pursued by interpreting experimental data on large-scale model piles and by developing numerical tools for predicting their monotonic response to tensile and compressive loads up to failure. Experimental results on large-scale model piles produced in two different campaigns at Fraunhofer IWES (Hannover, Germany) were the reference for the whole work. Data from CPTs, blow counts during installation, and load-displacement curves allowed the experimental results to be analysed and compared with empirical methods from the literature, such as CPT-based methods and load-transfer methods. Soil-structure interaction mechanisms were studied to better assess the mechanical response of the sand and thereby support the development of predictive tools for the experiments. A lack of information on the response of Rohsand 3152 in contact with steel was highlighted, so its behaviour was characterised with a comprehensive campaign of interface shear tests. The tests showed how the ultimate response of the sand evolves with the roughness of the steel, valuable information to account for when predicting pile capacity. In parallel, the work developed a numerical modelling procedure that was validated against the available large-scale model piles at IWES. The modelling strategy builds an FE model whose sand properties are derived from an interpretation of commonly available geotechnical tests. The results of the FE model were compared with other predictive tools currently used in engineering practice.
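CPT-based capacity methods of the kind compared above share a common structure: a unit shaft friction scaled from cone resistance and integrated over the embedded shaft, plus a base term over the tip area. A schematic sketch of that structure with placeholder coefficients (alpha_s, nb and the qc profile are illustrative, not any design method or data from the thesis):

```python
import math

def axial_capacity(qc_profile, diameter, alpha_s=0.01, nb=0.5):
    """Schematic CPT-based axial pile capacity [kN].

    qc_profile: list of (depth [m], cone resistance qc [MPa]) pairs.
    Shaft: unit friction alpha_s * qc integrated over the shaft surface.
    Base: nb * qc at the tip, over the tip area. Coefficients are placeholders.
    """
    perim = math.pi * diameter
    area_b = math.pi * diameter**2 / 4.0
    shaft = 0.0
    for i in range(1, len(qc_profile)):
        z0, qc0 = qc_profile[i - 1]
        z1, qc1 = qc_profile[i]
        qc_avg = 0.5 * (qc0 + qc1)                              # MPa over the segment
        shaft += alpha_s * qc_avg * 1000.0 * perim * (z1 - z0)  # kPa * m^2 -> kN
    qc_base = qc_profile[-1][1]
    base = nb * qc_base * 1000.0 * area_b
    return shaft, base

# Hypothetical linearly increasing qc profile, 10 m embedment, 0.5 m pile
shaft, base = axial_capacity([(0.0, 5.0), (5.0, 10.0), (10.0, 15.0)], diameter=0.5)
```

Real CPT-based methods differ mainly in how alpha_s and nb depend on pile geometry, installation and loading direction (tension vs compression), which is where the interface behaviour measured in the shear tests enters.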
Abstract:
This chapter examines the cross-cultural influence of training on the adjustment of international assignees. We focus on pre-departure training (PDT) before an international assignment. The topic is important because today's globalized world requires ever more expatriate assignments, and the absence of PDT may lead to the failure of the expatriation experience. Companies may neglect PDT due to cost-reduction practices and ignorance of the need for it. Data were collected through semi-structured interviews with 42 Portuguese international assignees and 18 organizational representatives from nine Portuguese companies. The results suggest that companies should develop PDT programs, particularly when the cultural distance to the host country is greater and when the company has no previous experience of expatriation to that country. The study is original because it details in depth the methods of PDT, its problems, and its consequences. Some limitations linked to the research design, detailed in the conclusion, should be overcome in future studies.
Abstract:
This paper surveys recent evidence on the determinants of (national and/or foreign) industrial location. We find that the basic analytical framework has remained essentially unaltered since the early contributions of the 1980s while, in contrast, there have been significant advances in the quality of the data and, to a lesser extent, in the econometric modelling. We also identify certain determinants (neoclassical and institutional factors) that tend to provide largely consistent results across the reviewed studies. In light of this evidence, we finally suggest future lines of research.
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can be estimated through repeated measurements of the pollutant concentration in air, through expert judgment, or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model, and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
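The core mechanism above, drawing exposure determinants from prior distributions and routing each Monte Carlo draw through one of two level-2 models according to an expert relevance weight, can be sketched as follows. Both model functions and all prior parameters here are placeholders, not those of the report:

```python
import math
import random

def physical_model(rate, ventilation):
    """Stand-in for a mechanistic (compartment) exposure prediction [mg/m^3]."""
    return rate / ventilation

def data_driven_model(rate, ventilation):
    """Stand-in for a model trained on existing exposure data."""
    return 0.8 * rate / ventilation + 0.05

def predictive_sample(n=10000, w_physical=0.6, seed=1):
    """Sample determinants from (hypothetical) lognormal priors; with
    probability w_physical route the draw through the physical model,
    otherwise through the data-driven one. Returns predicted exposures."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        rate = rng.lognormvariate(math.log(10.0), 0.5)   # emission-rate prior
        vent = rng.lognormvariate(math.log(100.0), 0.3)  # ventilation prior
        model = physical_model if rng.random() < w_physical else data_driven_model
        out.append(model(rate, vent))
    return out

samples = predictive_sample()
```

The resulting samples form the predictive distribution that subsequent worker measurements would then update via Bayes' rule.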
Abstract:
The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period 1920s-1980s, is used to contrast the results of several methods. These are the present value, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to different scenarios of closed and open economies. The results prove that the biases of the methods, as commonly described in the theoretical literature, only hold under the most restricted scenario of constant rents over time. It is argued that the difference between what is expected to happen and what actually did happen is for the most part due to a missing variable, namely technological change. This is an important caveat to the recommendations made based on these models.
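The contrast between the net price and user cost methods can be made concrete. Under the net price method the entire resource rent counts as depreciation, while El Serafy's user cost method sets aside only the present value of rents accruing beyond the resource's life. A minimal sketch using the standard El Serafy formula with illustrative numbers (not the paper's data):

```python
def net_price_depreciation(rent):
    """Net price method: the whole resource rent is depreciation."""
    return rent

def user_cost_depreciation(rent, r, n):
    """El Serafy user cost method: depreciation is R / (1+r)**(n+1),
    where r is the discount rate and n the remaining years of extraction
    at the current rate; the rest of the rent is true income."""
    return rent / (1.0 + r) ** (n + 1)

# Rent of 100 per year, 5% discount rate, 20 years of reserves remaining
uc = user_cost_depreciation(100.0, 0.05, 20)
np_dep = net_price_depreciation(100.0)
```

Note that at a zero discount rate the user cost collapses to the full rent, i.e. the two methods coincide, which is consistent with the paper's point that their divergence depends on the assumed scenario rather than on an inherent bias.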
Abstract:
Traffic safety engineers are among the early adopters of Bayesian statistical tools for analyzing crash data. As in many other areas of application, empirical Bayes methods were their first choice, perhaps because they represent an intuitively appealing, yet relatively easy to implement alternative to purely classical approaches. With the enormous progress in numerical methods made in recent years and with the availability of free, easy-to-use software that permits implementing a fully Bayesian approach, however, there is now ample justification to progress towards fully Bayesian analyses of crash data. The fully Bayesian approach, in particular as implemented via multi-level hierarchical models, has many advantages over the empirical Bayes approach. In a full Bayesian analysis, prior information and all available data are seamlessly integrated into posterior distributions on which practitioners can base their inferences. All uncertainties are thus accounted for in the analyses and there is no need to pre-process data to obtain Safety Performance Functions and other such prior estimates of the effect of covariates on the outcome of interest. In this light, fully Bayesian methods may well be less costly to implement and may result in safety estimates with more realistic standard errors. In this manuscript, we present the full Bayesian approach to analyzing traffic safety data and focus on highlighting the differences between the empirical Bayes and the full Bayes approaches. We use an illustrative example to discuss a step-by-step Bayesian analysis of the data and to show some of the types of inferences that are possible within the full Bayesian framework.
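The empirical Bayes step that the manuscript contrasts with a fully Bayesian analysis can be illustrated with the Gamma-Poisson model commonly used for crash counts: hyperparameters are estimated from the data by method of moments and then plugged in as if known, whereas a full Bayes analysis would place priors on them and propagate their uncertainty. A toy sketch (model choice and data are illustrative, not from the manuscript):

```python
def empirical_bayes_rates(counts, years=1.0):
    """Gamma-Poisson empirical Bayes for site crash rates.

    Fit the Gamma prior (shape a, rate b) by method of moments, removing
    the Poisson component from the observed variance, then return each
    site's posterior mean rate: raw counts are shrunk toward the overall
    mean. A full Bayes analysis would sample a and b instead of fixing them.
    """
    n = len(counts)
    mean = sum(counts) / n / years
    var = sum((c / years - mean) ** 2 for c in counts) / (n - 1)
    excess = max(var - mean / years, 1e-9)  # between-site variance component
    b = mean / excess
    a = mean * b
    # Conjugate posterior mean for each site: (a + count) / (b + exposure)
    return [(a + c) / (b + years) for c in counts]

counts = [0, 1, 2, 10, 3, 1, 0, 2, 1, 4]  # hypothetical crashes at 10 sites
eb = empirical_bayes_rates(counts)
```

The plug-in step is precisely what understates uncertainty: the standard errors of the shrunk estimates ignore that a and b were themselves estimated, which is one reason full Bayes intervals tend to be more realistic.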
Abstract:
Abstract based on that of the publication.
Abstract:
The paper draws from three case studies of regional construction firms operating in the UK. The case studies provide new insights into the ways in which such firms strive to remain competitive. Empirical data were derived from multiple interactions with senior personnel within each firm. Data collection methods included semi-structured interviews, informal interactions, archival research, and workshops. The initial research question was informed by existing resource-based theories of competitiveness and an extensive review of construction-specific literature. However, subsequent emergent empirical findings progressively pointed towards the need to mobilise alternative theoretical models that emphasise localised learning and embeddedness. The findings point towards the importance of de-centralised structures that enable multiple business units to become embedded within localised markets. A significant degree of autonomy is essential to facilitate entrepreneurial behaviour. In essence, sustained competitiveness was found to rest on the way de-centralised business units enact ongoing processes of localised learning. Once local business units have become embedded within localised markets, the essential challenge is how to encourage continued entrepreneurial behaviour while maintaining some degree of centralised control and coordination. This presents a number of tensions and challenges which play out differently across each of the three case studies.
Abstract:
The objective of this book is to present the quantitative techniques that are commonly employed in empirical finance research together with real-world, state-of-the-art research examples. Each chapter is written by international experts in their fields. The unique approach is to describe a question or issue in finance and then to demonstrate the methodologies that may be used to solve it. All of the techniques described are used to address real problems rather than being presented for their own sake, and the areas of application have been carefully selected so that a broad range of methodological approaches can be covered. This book is aimed primarily at doctoral researchers and academics who are engaged in conducting original empirical research in finance. In addition, the book will be useful to researchers in the financial markets and also to advanced Master's-level students who are writing dissertations.
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
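The two-stage construction described, pilot simulations, a regression of parameters on simulated data whose fitted values become the summary statistics, then standard rejection ABC, can be sketched on a toy Gaussian model (all settings here are illustrative, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n_obs=5):
    """Toy model: n_obs Gaussian observations centred on theta."""
    return rng.normal(theta, 1.0, size=n_obs)

# Stage 1: pilot simulations; regress theta on the raw data so the fitted
# values approximate the posterior mean, the theoretically optimal summary.
pilot_theta = rng.uniform(-5, 5, size=2000)
pilot_data = np.array([simulate(t) for t in pilot_theta])
X = np.column_stack([np.ones(len(pilot_theta)), pilot_data])
beta, *_ = np.linalg.lstsq(X, pilot_theta, rcond=None)

def summary(x):
    """Semi-automatic summary: regression estimate of the posterior mean."""
    return beta[0] + x @ beta[1:]

# Stage 2: rejection ABC using the learned one-dimensional summary.
observed = simulate(2.0)                    # pretend data with true theta = 2
s_obs = summary(observed)
theta_prop = rng.uniform(-5, 5, size=20000)
s_sim = np.array([summary(simulate(t)) for t in theta_prop])
accepted = theta_prop[np.abs(s_sim - s_obs) < 0.1]
posterior_mean = accepted.mean()
```

Collapsing the data to a single well-chosen statistic is what keeps the acceptance rate workable here; with ad hoc summaries of higher dimension, far more proposals would be needed for the same accuracy.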
Abstract:
Previous research has shown multiple benefits and challenges with the incorporation of children’s literature in the English as a Second Language (ESL) classroom. In addition, the use of children’s literature in the lower elementary English classroom is recommended by the Swedish National Agency for Education. Consequently, the current study explores how teachers in Swedish elementary school teach ESL through children’s literature. This empirical study involves English teachers from seven schools in a small municipality in Sweden. The data were collected through an Internet survey. The study also connects the results to previous international research, comparing Swedish and international research. The results suggest that even though there are many benefits of using children’s literature in the ESL classroom, the respondents seldom use these authentic texts, due to limited time and a narrow supply of literature, among other factors. However, despite these challenges, all of the teachers claim to use children’s literature by reading aloud in the classroom. Based on the results, further research exploring pupils’ thoughts in contrast to teachers’ would be beneficial. In addition, the majority of the participants expressed that they wanted more information on how to use children’s literature. Therefore, additional research relating to beneficial methods of teaching English through children’s literature, especially in Sweden, is recommended.
Abstract:
Semiquantitative (Maki) and quantitative (Brun-Buisson) culture techniques were employed in the diagnosis of catheter-related bloodstream infections (CRBSI) in patients with a short-term central venous catheter (inserted for 30 days). The diagnosis of CRBSI was based on the results of semiquantitative and quantitative culture of material from the removed catheters. Catheter tips (118) from 100 patients were evaluated by both methods. Semiquantitative analysis revealed 34 catheters (28.8%) colonized by ≥15 colony-forming units (cfu), while quantitative cultures (34 catheters, 28.8%) showed growth of ≥10³ cfu/mL. Bacteremia was confirmed in four patients by isolating microorganisms of identical species from both catheters and blood samples. Using the semiquantitative culture technique on short-term central venous catheter tips, we showed that with a cut-off level of ≥15 cfu the technique had 100.0% sensitivity, 68.4% specificity, 25.0% positive predictive value (PPV), 100.0% negative predictive value (NPV), 71.4% efficiency and a prevalence of 9.5%. The quantitative method, with a cut-off limit of ≥10³ cfu/mL, gave identical values: sensitivity 100.0%, specificity 68.4%, positive predictive value (PPV) 25.0%, negative predictive value (NPV) 100.0%, efficiency 71.4% and prevalence 9.5%. We conclude that the semiquantitative and quantitative culture methods, evaluated in parallel for the first time in Brazil, have similar sensitivity and specificity. Keywords: central venous catheter; semi-quantitative culture; quantitative culture; catheter-related bacteremia.
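The reported figures are mutually consistent with a single 2×2 table: among 42 catheter-blood pairs, TP = 4, FP = 12, FN = 0, TN = 26. These counts are a reconstruction from the stated metrics, not values quoted by the paper; the standard definitions then reproduce every reported number:

```python
def diagnostics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "efficiency": (tp + tn) / total,   # overall accuracy
        "prevalence": (tp + fn) / total,
    }

m = diagnostics(tp=4, fp=12, fn=0, tn=26)
# sensitivity 1.0, specificity ~0.684, ppv 0.25, npv 1.0,
# efficiency ~0.714, prevalence ~0.095
```

The zero false-negative count is what drives both the 100% sensitivity and the 100% NPV, while the low prevalence (4/42) explains why the PPV is only 25% despite perfect sensitivity.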
Abstract:
Physico-chemical characterization, structure-pharmacokinetic and metabolism studies of new semi-synthetic analogues of natural bile acids (BAs), as drug candidates, have been performed. Recent studies discovered a role of BAs as agonists of the FXR and TGR5 receptors, opening a new therapeutic target for the treatment of liver diseases and metabolic disorders. Up to twenty new semi-synthetic analogues have been synthesized and studied in order to find promising novel drug candidates. To define the BAs' structure-activity relationship, their main physico-chemical properties (solubility, detergency, lipophilicity and affinity with serum albumin) were measured with validated analytical methodologies. Their metabolism and biodistribution were studied in the “bile fistula rat” model, where each BA is acutely administered through duodenal and femoral infusion and bile is collected at different time intervals, allowing the relationship between structure and intestinal absorption, hepatic uptake, metabolism and systemic spill-over to be defined. One of the studied analogues, 6α-ethyl-3α,7α-dihydroxy-5β-cholanic acid, an analogue of CDCA (INT 747, obeticholic acid (OCA)), recently under approval for the treatment of cholestatic liver diseases, requires additional studies to ensure its safety and lack of toxicity when administered to patients with strong liver impairment. For this purpose, a rat model of hepatic decompensation (cirrhosis) induced by CCl4 inhalation was developed and used to compare the biodistribution of OCA with that in control animals, in order to assess whether peripheral tissues might also be exposed to toxic plasma levels of OCA, evaluating the biodistribution of endogenous BAs as well. An accurate and sensitive HPLC-ES-MS/MS method was developed to identify and quantify all BAs in biological matrices (bile, plasma, urine, liver, kidney, intestinal content and tissue), for which a sample pretreatment has been optimized.