942 results for unprofessional conduct
Abstract:
This article deals with a simulation-based study of the impact of projectiles on thin aluminium plates using LS-DYNA, modelling the plates with shell elements and the projectiles with solid elements. In order to establish the required modelling criterion in terms of element size for the aluminium plates, a convergence study of residual velocity has been carried out by varying mesh density in the impact zone. Using the preferred material and meshing criteria arrived at here, extremely good prediction of the test residual velocities and ballistic limits reported by Gupta et al. (2001) for thin aluminium plates has been obtained. The simulated failure pattern, with localized bulging and a jagged edge of perforation, is similar to the perforation with petalling seen in tests. A number of simulation-based parametric studies have been carried out and results consistent with published test data have been obtained. Despite the robust correlation achieved against the published experimental results, it was considered prudent to conduct in-house experiments for a final correlation with the present modelling procedure and analysis using the explicit LS-DYNA 970 solver. Hence, a sophisticated ballistic impact testing facility and a high-speed camera have been used to conduct additional tests on grade 1100 aluminium plates of 1 mm thickness with projectiles of four different nose shapes. Finally, using the developed numerical simulation procedure, an excellent correlation of residual velocity and failure modes with the corresponding test results has been obtained.
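The convergence criterion described above can be illustrated with a short sketch. The `run_impact_simulation` function below is a hypothetical wrapper around the finite element model (not part of the article), assumed to return residual velocity for a given element size in the impact zone; refinement stops when successive results agree within a tolerance.

```python
# Hypothetical mesh-convergence loop for residual velocity (illustrative only).
# run_impact_simulation(element_size_mm) is assumed to wrap the impact model
# and return the projectile's residual velocity in m/s.

def converge_residual_velocity(run_impact_simulation, sizes_mm, rel_tol=0.01):
    """Refine the impact-zone mesh until residual velocity stabilises."""
    previous = None
    for size in sizes_mm:                      # e.g. [2.0, 1.0, 0.5, 0.25]
        v_res = run_impact_simulation(size)
        if previous is not None:
            change = abs(v_res - previous) / abs(previous)
            print(f"element size {size} mm: v_res = {v_res:.1f} m/s "
                  f"(change {change:.1%})")
            if change < rel_tol:
                return size, v_res             # mesh considered converged
        else:
            print(f"element size {size} mm: v_res = {v_res:.1f} m/s")
        previous = v_res
    return sizes_mm[-1], previous              # no convergence within the list
```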
Abstract:
This research applied Greenhalgh et al.'s (2005) organisational change theoretical framework and a case study approach to explore the process of implementing a smoking cessation intervention for pregnant women. The study was carried out according to the principles laid down in the National Statement on Ethical Conduct in Human Research, produced by the National Health and Medical Research Council, Australia. Ethical approval for the research was sought and received from the Queensland University of Technology human research ethics committee prior to the start of the study. The sample comprised four participants who had been involved in the process of disseminating a training programme for midwives to implement a smoking cessation intervention. Eight semi-structured interviews were undertaken with these participants, and the interviews and background programme data were subjected to theoretical analysis. The data were analysed through the lens of the Greenhalgh et al. (2005) framework. The result was a disaggregation and (re)aggregation of data in the formation of an analytical outcome (Charmaz, 2006).
Abstract:
Complaints and disciplinary processes play a significant role in health professional regulation. Many countries are transitioning from models of self-regulation to greater external oversight through systems including meta-regulation, responsive (risk-based) regulation and “networked governance”. Such systems harness, in differing ways, public, private, professional and non-governmental bodies to exert influence over the conduct of health professionals and services. An interesting literature is emerging regarding complainants’ motivations and experiences, the impact of complaints processes on health professionals, and the identification of features such as complainant and health professional profiles, types of complaints and outcomes. This paper concentrates on studies identifying vulnerable groups and their participation in health care regulatory systems.
Abstract:
Reactive Pulsed Laser Deposition is a single-step process wherein the ablated elemental metal reacts with a low-pressure ambient gas to form a compound. We report here a Secondary Ion Mass Spectrometry based analytical methodology for conducting a minimum number of experiments to arrive at the optimal process parameters for obtaining high-quality TiN thin films. The quality of these films was confirmed by electron microscopic analysis. This methodology can be extended to the optimization of other process parameters and materials.
Cultures of sharing in 3D printing: What can we learn from the licence choices of Thingiverse users?
Abstract:
This article contributes to the discussion by analysing how users of the leading online 3D printing design repository Thingiverse manage their intellectual property (IP). 3D printing represents a fruitful case study for exploring the relationship between IP norms and practitioner culture. Although additive manufacturing technology has existed for decades, 3D printing is on the cusp of a breakout into the technological mainstream – hardware prices are falling; designs are circulating widely; consumer-friendly platforms are multiplying; and technological literacy is rising. Analysing metadata from more than 68,000 Thingiverse design files collected from the site, we examine the licensing choices made by users and explore how these choices shape sharing practices on the site. We also consider how these choices and practices connect with wider attitudes towards sharing and intellectual property in 3D printing communities. A particular focus of the article is how Thingiverse structures its regulatory framework to avoid IP liability, and the extent to which this may have a bearing on users’ conduct. The paper has four sections. First, we will offer a description of Thingiverse and how it operates in the 3D printing ecosystem, noting the legal issues that have arisen regarding Thingiverse’s Terms of Use and its allocation of intellectual property rights. Different types of Thingiverse licences will be detailed and explained. Second, the empirical metadata we have collected from Thingiverse will be presented, including the methods used to obtain this information. Third, we will present findings from this data on licence choice and the public availability of user designs. Fourth, we will look at the implications of these findings and our conclusions regarding the particular kind of sharing ethic that is present in Thingiverse; we also consider the “closed” aspects of this community and what this means for current debates about “open” innovation.
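As an illustration of the kind of licence tally such metadata supports, the sketch below counts licence choices across design records; the records and field names are hypothetical stand-ins, not Thingiverse's actual schema or the study's data.

```python
# Illustrative tally of licence choices from design metadata; the records below
# are hypothetical stand-ins for scraped fields, not real Thingiverse data.
import pandas as pd

designs = pd.DataFrame({
    "thing_id": [101, 102, 103, 104, 105, 106],
    "license": ["CC-BY", "CC-BY-SA", "CC-BY-NC", "CC-BY", "GPL", "CC-BY-NC"],
})

summary = pd.DataFrame({
    "count": designs["license"].value_counts(),
    "share": designs["license"].value_counts(normalize=True).round(3),
})
print(summary)
```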
Abstract:
Purpose: In the oncology population, where malnutrition prevalence is high, more descriptive screening tools can provide further information to assist triaging and capture acute change. The Patient-Generated Subjective Global Assessment Short Form (PG-SGA SF) is a component of a nutritional assessment tool which could be used for descriptive nutrition screening. The purpose of this study was to conduct a secondary analysis of nutrition screening and assessment data to identify the most relevant information contributing to the PG-SGA SF for identifying malnutrition risk with high sensitivity and specificity. Methods: This was an observational, cross-sectional study of 300 consecutive adult patients receiving ambulatory anti-cancer treatment at an Australian tertiary hospital. Anthropometric and patient descriptive data were collected. The scored PG-SGA generated a score for nutritional risk (PG-SGA SF) and a global rating for nutrition status. Receiver operating characteristic (ROC) curves were generated to determine optimal cut-off scores for combinations of the PG-SGA SF boxes with the greatest sensitivity and specificity for predicting malnutrition according to the scored PG-SGA global rating. Results: The additive scores of boxes 1–3 had the highest sensitivity (90.2 %) while maintaining satisfactory specificity (67.5 %) and demonstrating high diagnostic value (AUC = 0.85, 95 % CI = 0.81–0.89). The inclusion of box 4 (PG-SGA SF) did not add further value as a screening tool (AUC = 0.85, 95 % CI = 0.80–0.89; sensitivity 80.4 %; specificity 72.3 %). Conclusions: The validity of the PG-SGA SF in chemotherapy outpatients was confirmed. The present study, however, demonstrated that the functional capacity question (box 4) does not improve the overall discriminatory value of the PG-SGA SF.
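A minimal sketch of the ROC-based cut-off selection described above, using scikit-learn; the data are invented, and the cut-off rule (Youden's J) is an assumption for illustration rather than necessarily the criterion used in the study.

```python
# Sketch of selecting a screening cut-off from ROC analysis (scikit-learn).
# y_true: 1 = malnourished by the scored PG-SGA global rating, 0 = well nourished.
# scores: additive PG-SGA SF box scores (e.g. boxes 1-3). Data are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
scores = np.array([1, 2, 6, 5, 3, 7, 5, 4, 8, 1])

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)

j = tpr - fpr                      # Youden's J statistic at each threshold
best = j.argmax()
print(f"AUC = {auc:.2f}")
print(f"optimal cut-off >= {thresholds[best]:.0f}: "
      f"sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%}")
```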
Abstract:
Aerosol particles play a role in the Earth's ecosystem and affect human health. A significant pathway for producing aerosol particles in the atmosphere is new particle formation, in which condensable vapours nucleate and the newly formed clusters grow by condensation and coagulation. However, this phenomenon is still not fully understood. This thesis provides insight into new particle formation from an experimental point of view. Laboratory experiments were conducted on both the nucleation process and the physicochemical properties related to new particle formation. Nucleation rate measurements are used to test nucleation theories, which in turn are used to predict nucleation rates in atmospheric conditions. However, nucleation rate measurements have proven quite difficult to conduct, as different devices can yield nucleation rates that differ by several orders of magnitude for the same substances. In this thesis, work has been done to gain a greater understanding of nucleation measurements, especially those conducted in a laminar flow diffusion chamber. Systematic studies of nucleation were also made for future verification of nucleation theories. Surface tensions and densities of substances related to atmospheric new particle formation were measured. The ternary system sulphuric acid + ammonia + water is a proposed candidate for participating in atmospheric nucleation. Surface tensions of an alternative candidate for nucleation in boreal forest areas, sulphuric acid + dimethylamine + water, were also measured. Binary systems consisting of organic acids + water are possible candidates for participating in the early growth of freshly nucleated particles. All the measured surface tensions and densities were fitted with equations, thermodynamically consistent where possible, so that they can be easily applied in atmospheric model calculations of nucleation and the subsequent evolution of particle size.
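As an illustration of the final fitting step, the sketch below fits hypothetical surface tension measurements to a simple linear temperature dependence with SciPy; the functional form and the data are assumptions for illustration, whereas the thesis uses thermodynamically consistent parameterisations where possible.

```python
# Minimal sketch of fitting measured surface tension data to a smooth
# parameterisation (a linear temperature dependence, chosen only for illustration).
import numpy as np
from scipy.optimize import curve_fit

def surface_tension(T, a, b):
    """sigma(T) = a + b*T, with T in kelvin and sigma in N/m."""
    return a + b * T

# Hypothetical measurements for a single solution composition.
T_data = np.array([278.0, 288.0, 298.0, 308.0])           # K
sigma_data = np.array([0.0762, 0.0748, 0.0733, 0.0717])   # N/m

params, cov = curve_fit(surface_tension, T_data, sigma_data)
a, b = params
print(f"sigma(T) = {a:.4f} + ({b:.2e})*T  [N/m] (fitted)")
```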
Abstract:
In common law jurisdictions such as England, Australia, Canada and New Zealand, good faith in contracting has long been recognised in specific areas of the law such as insurance law and franchising, and more recently the implied duties of good faith and mutual trust and confidence in employment contracts have generated a considerable volume of case law. Outside of these areas of law, which may be characterised as strongly ‘relational’ in character, the courts in common law jurisdictions have been reluctant to embrace a more universal application of good faith in contracting and performance. Increasingly, however, there are cases which support the proposition that there is a common law duty of good faith of general application to all commercial contracts. Most important in this context is the recent decision of the Supreme Court of Canada in Bhasin v Hrynew. However, this matter is by no means resolved in all common law jurisdictions. This article looks at the recent case law and literature and at various legislative incursions, including statutes, codes of conduct and regulations, impacting good faith in commercial dealings.
Abstract:
Stress- and strain-controlled tests of heat-treated high-strength rail steel (Australian Standard AS1085.1) have been performed in order to improve the characterisation of the material's ratcheting and fatigue wear behaviour. The hardness of the rail head material has also been studied, and it has been found that hardness reduces considerably below a depth of four millimetres from the rail top surface. Historically, researchers have used test coupons with circular cross-sections to conduct cyclic load tests. Such test coupons, typically five millimetres in gauge diameter and ten millimetres in grip diameter, are usually taken from the rail head sample. When there is considerable variation of material properties over the cross-section, it becomes likely that localised properties of the rail material will be missed. In another case from the literature, disks 47 mm in diameter for a twin-disk rolling contact test machine were obtained directly from the rail sample and used to validate ratcheting and rolling contact fatigue wear models. The question arises: how accurate are such tests, especially when large material property gradients exist? In this research paper, the effects of rail sampling location on the ratcheting behaviour of AS1085.1 rail steel were investigated using rectangular specimens obtained at four different depths to observe their respective cyclic plasticity behaviour. The microstructural features of the test coupons were also analysed, especially the pearlite interlamellar spacing, which showed strong correlation with both the hardness and the cyclic plasticity behaviour of the material. This work ultimately provides new data and a testing methodology to aid the selection of valid parameters for material constitutive models, to better understand rail surface ratcheting and wear.
Abstract:
Do SMEs cluster around different types of innovation activities? Are there patterns in SME innovation activities? To investigate, we develop a taxonomy of innovation activities in SMEs using a qualitative study followed by a survey. First, based upon our qualitative research and a literature review, we develop a comprehensive list of the innovation activities SMEs typically engage in. We then conduct a factor analysis to determine whether these activities can be combined into factors. We identify three innovation activity factors: R&D activities, incremental innovation activities and cost innovation activities. We use these factors to identify three clusters of firms engaging in similar innovation activities: active innovators, incremental innovators and opportunistic innovators. The clusters are enriched by validating that they also exhibit significant internal similarities and external differences in their innovation skills, demographics, industry segments and family business ownership. This research contributes to innovation and SME theory and practice by identifying SME clusters based upon their innovation activities.
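A minimal sketch of the two-step factor-and-cluster analysis described above, with hypothetical data; scikit-learn's FactorAnalysis and KMeans stand in for the exploratory factor analysis and clustering reported in the paper, and the item set is invented.

```python
# Sketch: reduce innovation-activity items to factors, then cluster firms on
# their factor scores. Firms, items and data are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))        # 120 firms x 12 innovation-activity items

fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(X)   # per-firm scores on the three factors

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(factor_scores)

print("factor loadings (items x factors):\n", fa.components_.T.round(2))
print("firms per cluster:", np.bincount(clusters))
```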
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic information-extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label a pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and for modelling the primary classes for unmixing, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. Conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes to the mixture analysis of every pixel in an entire image. Because it is not accurate to assume that every pixel in an image must contain all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
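To make the unmixing idea concrete, the sketch below solves a fully constrained linear unmixing problem for a single pixel (non-negative abundances that approximately sum to one) using a standard augmented non-negative least-squares trick; the endmember spectra are invented, and this is not a reproduction of any of the three algorithms compared in the thesis.

```python
# Sketch of fully constrained linear spectral unmixing for one pixel:
# minimise ||E a - x|| subject to a >= 0 and sum(a) = 1, via a weighted
# sum-to-one row appended to a non-negative least-squares problem.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel, weight=1000.0):
    """endmembers: (bands x classes) matrix; pixel: (bands,) spectrum."""
    bands, classes = endmembers.shape
    E_aug = np.vstack([endmembers, weight * np.ones((1, classes))])
    x_aug = np.append(pixel, weight)          # enforces sum of abundances ~ 1
    abundances, _ = nnls(E_aug, x_aug)
    return abundances

# Two hypothetical endmembers (e.g. water and vegetation) over four bands.
E = np.array([[0.05, 0.30],
              [0.04, 0.35],
              [0.03, 0.45],
              [0.02, 0.50]])
x = 0.7 * E[:, 0] + 0.3 * E[:, 1]             # a synthetic 70/30 mixed pixel
print(unmix_pixel(E, x).round(3))             # ~[0.7, 0.3]
```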
Abstract:
Mikael Juselius’ doctoral dissertation covers a range of significant issues in modern macroeconomics by empirically testing a number of important theoretical hypotheses. The first essay presents indirect evidence, within the framework of the cointegrated VAR model, on the elasticity of substitution between capital and labor using Finnish manufacturing data. Instead of estimating the elasticity of substitution from the first-order conditions, he develops a new approach that utilizes a CES production function in a model with a three-stage decision process: investment in the long run, wage bargaining in the medium run, and price and employment decisions in the short run. He estimates the elasticity of substitution to be below one. The second essay tests the restrictions implied by the core equations of the New Keynesian Model (NKM) in a vector autoregressive (VAR) model using both Euro area and U.S. data. Both the New Keynesian Phillips curve and the aggregate demand curve are estimated and tested. The restrictions implied by the core equations of the NKM are rejected on both U.S. and Euro area data. These results are important for further research. The third essay is methodologically similar to the second, but it concentrates on Finnish macro data and adopts a theoretical framework of an open economy. Juselius’ results suggest that the open economy NKM framework is too stylized to provide an adequate explanation for Finnish inflation. The final essay provides a macroeconometric model of Finnish inflation and associated explanatory variables, and it estimates the relative importance of different inflation theories. His main finding is that Finnish inflation is primarily determined by excess demand in the product market and by changes in the long-term interest rate. This study is part of the research agenda carried out by the Research Unit of Economic Structure and Growth (RUESG). The aim of RUESG is to conduct theoretical and empirical research on important issues in industrial economics, real option theory, game theory, organization theory and the theory of financial systems, as well as to study problems in labor markets, macroeconomics, natural resources, taxation and time series econometrics. RUESG was established at the beginning of 1995 and is one of the National Centers of Excellence in research selected by the Academy of Finland. It is financed jointly by the Academy of Finland, the University of Helsinki, the Yrjö Jahnsson Foundation, the Bank of Finland and the Nokia Group. This support is gratefully acknowledged.
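For reference, the core equations of the baseline New Keynesian Model whose restrictions are tested in the second and third essays typically take the textbook form below, where π_t is inflation, x_t the output gap, i_t the nominal interest rate and r_t^n the natural real rate; the essays' exact specifications (e.g. hybrid or open economy variants) may differ.

```latex
% Standard textbook forms of the NKM core equations (illustrative reference only).
\begin{align}
  \pi_t &= \beta\, \mathbb{E}_t[\pi_{t+1}] + \kappa\, x_t
          && \text{(New Keynesian Phillips curve)} \\
  x_t   &= \mathbb{E}_t[x_{t+1}]
           - \tfrac{1}{\sigma}\bigl(i_t - \mathbb{E}_t[\pi_{t+1}] - r_t^{n}\bigr)
          && \text{(aggregate demand / IS curve)}
\end{align}
```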
Abstract:
Perhaps the most fundamental prediction of financial theory is that the expected returns on financial assets are determined by the amount of risk contained in their payoffs. Assets with a riskier payoff pattern should provide higher expected returns than assets that are otherwise similar but provide payoffs that contain less risk. Financial theory also predicts that not all types of risks should be compensated with higher expected returns. It is well-known that the asset-specific risk can be diversified away, whereas the systematic component of risk that affects all assets remains even in large portfolios. Thus, the asset-specific risk that the investor can easily get rid of by diversification should not lead to higher expected returns, and only the shared movement of individual asset returns – the sensitivity of these assets to a set of systematic risk factors – should matter for asset pricing. It is within this framework that this thesis is situated. The first essay proposes a new systematic risk factor, hypothesized to be correlated with changes in investor risk aversion, which manages to explain a large fraction of the return variation in the cross-section of stock returns. The second and third essays investigate the pricing of asset-specific risk, uncorrelated with commonly used risk factors, in the cross-section of stock returns. The three essays mentioned above use stock market data from the U.S. The fourth essay presents a new total return stock market index for the Finnish stock market beginning from the opening of the Helsinki Stock Exchange in 1912 and ending in 1969 when other total return indices become available. Because a total return stock market index for the period prior to 1970 has not been available before, academics and stock market participants have not known the historical return that stock market investors in Finland could have achieved on their investments. The new stock market index presented in essay 4 makes it possible, for the first time, to calculate the historical average return on the Finnish stock market and to conduct further studies that require long time-series of data.
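The construction of a total return index from periodic returns is simple chaining, as the sketch below illustrates with invented monthly figures; the actual index in essay 4 is built from historical Finnish price and dividend data.

```python
# Sketch of chaining periodic total returns (price change plus reinvested
# dividends) into a total return index; the monthly figures are purely illustrative.
import numpy as np

monthly_total_returns = np.array([0.012, -0.004, 0.021, 0.007, -0.015, 0.018])

index = 100.0 * np.cumprod(1.0 + monthly_total_returns)   # base value 100
annualised = (index[-1] / 100.0) ** (12 / len(monthly_total_returns)) - 1

print("index levels:", index.round(2))
print(f"annualised total return over the sample: {annualised:.2%}")
```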
Abstract:
Today the finite element method is a well-established tool in engineering analysis and design. Though there are many two- and three-dimensional finite elements available, it is rare that a single element performs satisfactorily in the majority of practical problems. The present work deals with the development of a 4-node quadrilateral element using extended Lagrange interpolation functions. The classical univariate Lagrange interpolation is well developed for 1-D and is used for obtaining shape functions. We propose a new approach to extend Lagrange interpolation to several variables. When there is more than one variable, the method also gives the set of feasible bubble functions. We use the two to generate shape functions for the 4-node arbitrary quadrilateral. This requires the incorporation of the conditions of rigid body motion, constant strain and the Navier equation by imposing the necessary constraints. The procedure obviates the need for isoparametric transformation, since interpolation functions are generated for arbitrary quadrilateral shapes. While generating the element stiffness matrix, integration can be carried out to the accuracy desired by dividing the quadrilateral into triangles. To validate the performance of the element, which we call EXLQUAD4, we conduct several pathological tests available in the literature. EXLQUAD4 predicts both stresses and displacements accurately at every point in the element in all the constant stress fields. In tests involving higher-order stress fields, the element is assured to converge in the limit of discretisation. A method thus becomes available to generate shape functions directly for arbitrary quadrilaterals. The method is also applicable to hexahedra. The approach should also find use in the development of finite elements for other field equations.
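For context, the classical univariate Lagrange interpolation that the proposed extension builds on is sketched below; the multi-variable extension and the bubble functions introduced in the paper are not reproduced here.

```python
# Minimal sketch of the classical univariate Lagrange basis underlying the
# shape-function construction; the paper's multi-variable extension is not shown.
import numpy as np

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial for the given nodes at x."""
    terms = [(x - xk) / (nodes[j] - xk) for k, xk in enumerate(nodes) if k != j]
    return np.prod(terms, axis=0)

nodes = np.array([-1.0, 1.0])                  # two-node 1-D element
xi = np.linspace(-1.0, 1.0, 5)
N0 = lagrange_basis(nodes, 0, xi)              # equals (1 - xi) / 2
N1 = lagrange_basis(nodes, 1, xi)              # equals (1 + xi) / 2
print(N0 + N1)                                 # partition of unity: all ones
```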