241 results for SIZE DEPENDENCE
Abstract:
Atheromatous plaque rupture is the cause of the majority of strokes and heart attacks in the developed world. The role of calcium deposits and their contribution to plaque vulnerability are controversial. Some studies have suggested that calcified plaque tends to be more stable, whereas others have suggested the opposite. This study uses a finite element model to evaluate the effect of calcium deposits on the stress within the fibrous cap by varying their location and size. The plaque fibrous cap, lipid pool and calcification were modeled as hyperelastic, isotropic, (nearly) incompressible materials with different properties for large-deformation analysis, with time-dependent pressure loading applied on the lumen wall. The stress and strain contours were illustrated for each condition for comparison. Von Mises stress increases by only up to 1.5% when the location of calcification is varied within the lipid pool distant from the fibrous cap. Calcification in the fibrous cap leads to a 43% increase in von Mises stress compared with calcification in the lipid pool. A 100% increase in calcification area leads to a 15% stress increase in the fibrous cap. Calcification in the lipid pool does not increase fibrous cap stress when it is distant from the fibrous cap, whilst large areas of calcification close to or within the fibrous cap may lead to a high stress concentration within the fibrous cap, which may cause plaque rupture. This study highlights the application of a computational model to the simulation of clinical problems, and it may provide insights into the mechanism of plaque rupture.
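As a side note on the post-processing quantity this abstract reports, the von Mises stress is a scalar measure computed from the deviatoric part of the stress tensor. A minimal Python sketch of that calculation (with an invented stress state, not output from the paper's finite element model) might look like this:

```python
import numpy as np

def von_mises(stress):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor."""
    s = stress - np.trace(stress) / 3.0 * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

# Illustrative stress state (kPa) at a single fibrous-cap point
sigma = np.array([[120.0, 30.0, 0.0],
                  [30.0, 80.0, 0.0],
                  [0.0, 0.0, 60.0]])
print(von_mises(sigma))
```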
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
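A simplified sketch of a size-transition matrix built from a von Bertalanffy mean-growth curve with a normal (rather than skew-normal) spread is given below; all parameter values are invented. Note that this naive version can still place probability on negative growth, which is exactly the problem the paper's skew-normal moment-matching approach removes:

```python
import numpy as np
from scipy.stats import norm

def stm_von_bertalanffy(bounds, L_inf, k, dt, sd):
    """Size-transition matrix: probability of moving from size class i to j over
    a time step dt, assuming von Bertalanffy mean growth and a normal spread
    around it (the paper uses moment-matched skew-normal distributions instead)."""
    mids = 0.5 * (bounds[:-1] + bounds[1:])               # class midpoints
    mean_next = L_inf + (mids - L_inf) * np.exp(-k * dt)  # expected length after dt
    stm = np.empty((mids.size, mids.size))
    for i, mu in enumerate(mean_next):
        cdf = norm.cdf(bounds, loc=mu, scale=sd)
        stm[i] = np.diff(cdf) / (cdf[-1] - cdf[0])        # renormalise within the size range
    return stm

bounds = np.arange(10, 61, 5.0)   # size-class boundaries (mm), illustrative
P = stm_von_bertalanffy(bounds, L_inf=55.0, k=0.8, dt=0.25, sd=1.5)
print(P.round(3))
```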
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate and even invalid when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
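The key property the abstract relies on — that a quantile lying above the detection limit can be estimated without imputing the censored values — can be illustrated with a small sketch; the data values are invented:

```python
import numpy as np

def quantile_with_nd(values, q):
    """Empirical quantile for left-censored data. Non-detects (NaN) are known only
    to lie below the detection limit; because they occupy the lowest ranks, any
    quantile that lands above the detection limit needs no imputation at all."""
    x = np.asarray(values, dtype=float)
    n = x.size
    n_censored = int(np.isnan(x).sum())
    k = int(np.ceil(q * n)) - 1                 # 0-based order statistic for quantile q
    if k < n_censored:
        raise ValueError("Requested quantile falls in the censored region")
    detected_sorted = np.sort(x[~np.isnan(x)])  # censored values sit below all of these
    return detected_sorted[k - n_censored]

# Illustrative nutrient concentrations (mg/L); NaN marks a value below the detection limit
obs = [0.21, np.nan, 0.34, 0.08, np.nan, 0.46, 0.12, np.nan, 0.29, 0.51]
print(quantile_with_nd(obs, q=0.8))   # -> 0.34
```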
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, referred to as the beta-multinomial likelihood, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated by previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
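For orientation, a sketch of the baseline constant-catchability multinomial removal model — the Moran/Leslie-type model the paper extends, not the beta-multinomial model itself — fitted by maximum likelihood might look like this; the catch data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_lik(params, catches):
    """Negative log-likelihood of the constant-catchability multinomial removal model.
    params = (log of the number never caught, logit of the catchability)."""
    catches = np.asarray(catches, dtype=float)
    n_left = np.exp(params[0])                  # animals remaining uncaught
    p = 1.0 / (1.0 + np.exp(-params[1]))        # catchability per removal occasion
    N = catches.sum() + n_left
    k = catches.size
    log_cell = np.log(p) + np.arange(k) * np.log1p(-p)   # log Pr(first caught on occasion i)
    log_lik = (gammaln(N + 1) - gammaln(n_left + 1) - gammaln(catches + 1).sum()
               + (catches * log_cell).sum() + n_left * k * np.log1p(-p))
    return -log_lik

catches = [81, 52, 31, 19, 14]                  # illustrative removal catches
fit = minimize(neg_log_lik, x0=[np.log(50.0), 0.0], args=(catches,), method="Nelder-Mead")
n_hat = sum(catches) + np.exp(fit.x[0])
print(f"Estimated population size: {n_hat:.1f}")
```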
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
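The kind of simulation described — repeatedly drawing subsamples from a fully sorted catch and tracking how many species are recovered as the sorted fraction grows — can be sketched as follows, with invented species abundances rather than the study's trawl data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative catch: individual-level species labels, with most species "rare"
abundances = np.concatenate([rng.integers(1, 5, size=70),      # rare species
                             rng.integers(50, 400, size=30)])  # abundant species
catch = np.repeat(np.arange(abundances.size), abundances)

def species_recovered(catch, fraction, n_trials=200):
    """Mean percentage of species found when only `fraction` of the catch is sorted."""
    total_species = np.unique(catch).size
    counts = []
    for _ in range(n_trials):
        sub = rng.choice(catch, size=int(fraction * catch.size), replace=False)
        counts.append(np.unique(sub).size)
    return 100.0 * np.mean(counts) / total_species

for frac in (0.1, 0.25, 0.5):
    print(f"sorting {frac:.0%} of the catch recovers ~{species_recovered(catch, frac):.0f}% of species")
```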
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
Abstract:
Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimisation. The original NSGA-II works on a population of predetermined constant size, and its computational cost to evaluate one generation is O(mn^2), where m is the number of objective functions and n is the population size. The basic idea motivating this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. In this work, several test functions will be tested with both the original NSGA-II and the VPNSGA-II algorithms; each test will be timed to measure the computational cost of each trial, and the results will be compared.
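The O(mn^2) cost quoted for one generation comes from the pairwise dominance comparisons of non-dominated sorting. A minimal sketch of that core step (not of the VPNSGA-II variant itself) is:

```python
import numpy as np

def dominates(a, b):
    """True if solution a dominates b: no worse in every objective, better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def first_pareto_front(objectives):
    """Indices of non-dominated solutions; the pairwise checks give the O(m*n^2) cost."""
    n = objectives.shape[0]
    return [i for i in range(n)
            if not any(dominates(objectives[j], objectives[i]) for j in range(n) if j != i)]

# Illustrative 2-objective minimisation population
pop = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [2.5, 2.5]])
print(first_pareto_front(pop))   # -> [0, 1, 3, 4]
```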
Abstract:
Rail track undergoes complex loading patterns under moving traffic compared with roads because of its continuous and discontinuous multi-layered structure, comprising rail, sleepers, ballast layer, sub-ballast layer, and subgrade. Particle size distributions (PSDs) of the ballast, sub-ballast, and subgrade layers can be critical to the cyclic plastic deformation of rail track under moving traffic and to the frequent degradation of rail tracks, especially at bridge transition zones. Conventional test approaches, such as static shear and cyclic single-point load tests, are however unable to replicate the actual loading patterns of a moving train. A multi-ring shear apparatus, a new type of torsional simple shear apparatus that can reproduce moving traffic conditions, was used in this study to investigate the influence of the particle size distribution of rail track layers on cyclic plastic deformation. Three particle size distributions, using glass beads, were examined under different loading patterns (cyclic single-point load and cyclic moving wheel load) to evaluate the cyclic plastic deformation of rail track under the different loading methods. The results of these tests suggest that the particle size distributions of rail track structural layers have significant impacts on cyclic plastic deformation under moving train load. Further, the limitations of the conventional test methods used in laboratories to estimate the plastic deformation of rail track materials lead to underestimation of the plastic deformation of rail tracks.
Abstract:
Recent years have witnessed burgeoning interest in the line managers' contribution to HRM effectiveness. This effort requires organizations to consider important contextual conditions to ensure the desired organizational outcomes. This paper explores the significance of organization size in understanding line managers' involvement in HRM activities. Two case studies were conducted, one in a large and another in a small airport, involving key members of the airport management who were closely related to the line managers' HRM role. Content analysis was employed to analyze data from the interviews and written documents. While there were many similarities in the line managers' HRM role, differences in the line managers' HRM role expectations were also found to be related to differences in the size of the organization. More responsibility is expected from line managers in the large airport than in the small airport. This finding has important implications for aligning HRM strategy and organizational outcomes through the line management contribution.
Abstract:
Genetic influences account for 30-60% of the variation in personality traits. Attempts to unravel these genetic influences at the molecular level have, so far, been inconclusive. We performed the first genome-wide association study of Cloninger's temperament scales in a sample of 5117 individuals, in order to identify common genetic variants underlying variation in personality. Participants' scores on Harm Avoidance, Novelty Seeking, Reward Dependence, and Persistence were tested for association with 1,252,387 genetic markers. We also performed gene-based association tests and biological pathway analyses. No genetic variants that significantly contribute to personality variation were identified, even though our sample provides over 90% power to detect variants that explain only 1% of the trait variance. This indicates that individual common genetic variants of this size or greater do not contribute to personality trait variation, which has important implications regarding the genetic architecture of personality and the evolutionary mechanisms by which heritable variation is maintained.
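The power statement can be checked approximately with a standard non-central chi-square calculation; the genome-wide significance threshold below is an assumption, as the abstract does not state one:

```python
from scipy.stats import chi2, ncx2

n = 5117            # sample size from the abstract
r2 = 0.01           # proportion of trait variance explained by the variant
alpha = 5e-8        # assumed genome-wide significance threshold (not stated in the abstract)

ncp = n * r2 / (1 - r2)                  # non-centrality of the 1-df association test
crit = chi2.ppf(1 - alpha, df=1)         # critical value at the chosen threshold
power = 1 - ncx2.cdf(crit, df=1, nc=ncp)
print(f"power ~ {power:.2f}")            # well above 0.9, consistent with the abstract's claim
```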
Abstract:
In this paper, we examine approaches to estimating a Bayesian mixture model at both single and multiple time points for a sample of actual and simulated aerosol particle size distribution (PSD) data. For estimation of a mixture model at a single time point, we use Reversible Jump Markov Chain Monte Carlo (RJMCMC) to estimate the mixture model parameters, including the number of components, which is assumed to be unknown. We compare the results of this approach to a commonly used estimation method in the aerosol physics literature. As PSD data are often measured over time, frequently at small time intervals, we also examine the use of an informative prior for estimation of the mixture parameters that takes into account the correlated nature of the parameters. The Bayesian mixture model offers a promising approach, providing advantages in both estimation and inference.
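A full RJMCMC sampler is too long to sketch here, but the fixed-dimension building block — a lognormal mixture fitted to particle diameters, with the number of components chosen by an information criterion rather than sampled within the MCMC as RJMCMC does — can be illustrated with synthetic data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Illustrative aerosol diameters (nm): two lognormal modes
diam = np.concatenate([rng.lognormal(mean=np.log(30), sigma=0.3, size=800),
                       rng.lognormal(mean=np.log(120), sigma=0.25, size=400)])

# A Gaussian mixture on log-diameter is a lognormal mixture on diameter
log_d = np.log(diam).reshape(-1, 1)
fits = [GaussianMixture(n_components=k, random_state=0).fit(log_d) for k in range(1, 5)]
best = min(fits, key=lambda m: m.bic(log_d))   # model choice by BIC, not reversible jumps
print("components:", best.n_components)
print("modal diameters (nm):", np.exp(best.means_.ravel()).round(1))
print("weights:", best.weights_.round(2))
```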
Abstract:
Small, not-for-profit organisations fulfil a need in the economy that is typically not satisfied by for-profit firms. They also operate in ways that are distinct from larger organisations. While such firms employ a substantial proportion of the workforce, research addressing human resource management (HRM) practices in these settings is limited. This article used data collected from five small not-for-profit firms in Australia to examine the way one significant HRM practice – the provision and utilisation of flexible work arrangements – operates in the sector. Drawing on research from several scholarly fields, the article firstly develops a framework comprising three tensions in not-for-profits that have implications for HRM. These tensions are: (1) contradictions between an informal approach to HRM vs. a formal regulatory system; (2) employee values that favour social justice vs. external market forces; and (3) a commitment to service vs. external financial expectations. The article then empirically examines how these tensions are managed in relation to the specific case of flexible work arrangements. The study reveals that tensions around providing and accessing flexible work arrangements are managed in three ways: discretion, leadership style and distancing. These findings more broadly inform the way HRM is operationalised in this under-examined sector.
Abstract:
In the field of workplace air quality, measuring and analyzing the size distribution of airborne particles to identify their sources and apportion their contribution has become widely accepted; however, the driving factors that influence this parameter, particularly for nanoparticles (< 100 nm), have not been thoroughly determined. Identification of the driving factors, and in turn of general trends in the size distribution of emitted particles, would facilitate the prediction of nanoparticles' emission behavior and significantly contribute to their exposure assessment. In this study, a comprehensive analysis of the particle number size distribution data, with a particular focus on the ultrafine size range of synthetic clay particles emitted from a jet milling machine, was conducted using the multi-lognormal fitting method. The results showed a relatively high contribution of nanoparticles to the emissions in many of the tested cases, and also that both surface treatment and feed rate of the machine are significant factors influencing the size distribution of the emitted particles in this size range. In particular, applying surface treatments and increasing the machine feed rate have the similar effect of reducing the size of the particles; however, no general trend was found in the variation of the size distribution across different surface treatments and feed rates. The findings of our study demonstrate that for this process and other activities, where no general trend is found in the size distribution of the emitted airborne particles due to dissimilar effects of the driving factors, each case must be treated separately in terms of workplace exposure assessment and regulations.
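The multi-lognormal fitting method mentioned can be sketched as a least-squares fit of a sum of lognormal modes to a measured number size distribution; the diameters, concentrations, and two-mode assumption below are illustrative, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def bimodal_lognormal(d, n1, cmd1, gsd1, n2, cmd2, gsd2):
    """Sum of two lognormal modes in dN/dlogDp form (count median diameter, geometric SD)."""
    def mode(n, cmd, gsd):
        return (n / (np.sqrt(2 * np.pi) * np.log10(gsd))
                * np.exp(-(np.log10(d) - np.log10(cmd)) ** 2 / (2 * np.log10(gsd) ** 2)))
    return mode(n1, cmd1, gsd1) + mode(n2, cmd2, gsd2)

# Illustrative measured size distribution (diameters in nm, dN/dlogDp in cm^-3)
d = np.logspace(np.log10(10), np.log10(500), 40)
noise = 1 + 0.05 * np.random.default_rng(0).standard_normal(d.size)
measured = bimodal_lognormal(d, 4e3, 40, 1.6, 1.5e3, 180, 1.5) * noise

p0 = [3e3, 50, 1.5, 1e3, 200, 1.5]   # initial guesses for the two modes
lower = [0, 10, 1.05, 0, 10, 1.05]
upper = [1e5, 1000, 3.0, 1e5, 1000, 3.0]
params, _ = curve_fit(bimodal_lognormal, d, measured, p0=p0, bounds=(lower, upper))
print("fitted (N, CMD, GSD) per mode:", np.round(params, 2))
```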
Abstract:
Settling, dewatering and filtration of flocs are important steps in industry to remove solids and improve subsequent processing. The influence of non-sucrose impurities (Ca2+, Mg2+, phosphate and aconitic acid) on calcium phosphate floc structure (scattering exponent, Sf), size and shape was examined in synthetic and authentic sugar juices using X-ray diffraction techniques. In synthetic juices, Sf decreases with increasing phosphate concentration to values where loosely bound and branched flocs are formed for effective trapping and removal of impurities. Although Sf did not change with increasing aconitic acid concentration, the floc size decreased significantly, reducing the ability of the flocs to remove impurities. In authentic juices, the floc structures were only marginally affected by increasing proportions of non-sucrose impurities. However, optical microscopy indicated the formation of well-formed macro-floc network structures in sugar cane juices containing lower proportions of non-sucrose impurities. These structures are better placed to remove suspended colloidal solids.