875 results for parametric bootstrap
Abstract:
5-Hydroxytryptamine (5HT), commonly known as serotonin, which predominantly serves as an inhibitory neurotransmitter in the brain, has long been implicated in migraine pathophysiology. This study tested an MspI polymorphism in the human 5HT2A receptor gene (HTR2A) and a closely linked microsatellite marker (D13S126) for linkage and association with common migraine. In the association analyses, no significant differences were found between the migraine and control populations for either the MspI polymorphism or the D13S126 microsatellite marker. The linkage studies involved three families comprising 36 affected members and were analysed using both parametric (FASTLINK) and non-parametric (MFLINK and APM) techniques. Significant close linkage was indicated between the MspI polymorphism and the D13S126 microsatellite marker at a recombination fraction (θ) of zero (lod score = 7.15). Linkage results for the MspI polymorphism were not very informative in the three families, producing maximum and minimum lod scores of only 0.35 and -0.39 at recombination fractions (θ) of 0.2 and 0.00, respectively. However, linkage analysis between the D13S126 marker and migraine indicated significant non-linkage (lod ≤ -2) up to a recombination fraction (θ) of 0.028. Results from this study exclude the HTR2A gene, which has been localized to chromosome 13q14-q21, from involvement in common migraine.
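To make the lod-score criterion used above concrete, here is a minimal Python sketch of a two-point lod score for phase-known, fully informative meioses. It illustrates the statistic only, not the full pedigree likelihoods computed by FASTLINK, and the recombinant counts are hypothetical.

```python
import numpy as np

def two_point_lod(theta, n_recomb, n_meioses):
    """Two-point lod score for phase-known, fully informative meioses.

    lod(theta) = log10[ theta^R * (1 - theta)^(N - R) / 0.5^N ]
    """
    r, n = n_recomb, n_meioses
    return (r * np.log10(theta) + (n - r) * np.log10(1 - theta)
            - n * np.log10(0.5))

# Hypothetical example: 2 recombinants observed in 20 informative meioses
thetas = np.linspace(0.01, 0.5, 50)
lods = two_point_lod(thetas, 2, 20)
print("max lod %.2f at theta = %.2f" % (lods.max(), thetas[lods.argmax()]))
# By convention, linkage is taken as significant at lod >= 3 and
# excluded wherever lod <= -2.
```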
Abstract:
Migraine shows strong familial aggregation; however, the number of genes involved in the disorder is unknown and none has yet been identified. Nitric oxide is involved in the central processing of pain stimuli and plays an important role in the regulation of basal or stimulated vasodilation. Nitric oxide synthase, which controls the synthesis of nitric oxide, is therefore a plausible candidate gene in migraine etiology. In this study, we detected a polymorphism of endothelial nitric oxide synthase by polymerase chain reaction and tested it for association and linkage with migraine. Results from the study did not show an association of the nitric oxide synthase microsatellite with migraine when tested in 91 affected and 85 unaffected individuals. Using the FASTLINK program for parametric linkage analysis, the polymorphism did not show significant linkage to migraine when tested in four migraine pedigrees comprising 116 individuals, 52 of them affected. Total LOD scores excluded linkage up to 8.5 cM between the nitric oxide synthase polymorphism and migraine. Results using the nonparametric affected pedigree member (APM) method of analysis also did not support a role for this gene in migraine etiology.
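The exclusion distance quoted in centimorgans can be related to a recombination fraction through a map function. The sketch below uses the Haldane map function purely as an illustration; the abstract does not state which map function the authors used.

```python
import math

def haldane_cm_to_theta(d_cm):
    """Recombination fraction implied by a Haldane map distance in cM."""
    return (1.0 - math.exp(-d_cm / 50.0)) / 2.0

def haldane_theta_to_cm(theta):
    """Haldane map distance (cM) implied by a recombination fraction."""
    return -50.0 * math.log(1.0 - 2.0 * theta)

# An exclusion region of 8.5 cM corresponds (under Haldane) to theta ~ 0.08
print(round(haldane_cm_to_theta(8.5), 3))
```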
Abstract:
The double-pass counter-flow V-groove collector is considered one of the most efficient solar air collectors. In this design, the inlet air initially flows along the top part of the collector, changes direction once it reaches the end of the collector, and then flows through the lower part of the collector to the outlet. A mathematical model is developed for this type of collector and a simulation is carried out in MATLAB. The simulation results were verified against three independent published research results, and it was found that the simulation predicts the performance of the air collector accurately, as shown by the comparison between experimental and simulated data. The maximum difference between the predicted and experimental results is approximately 7%, which is within an acceptable limit given the uncertainties in the input parameter values used for the comparison. A parametric study was performed and it was found that solar radiation, inlet air temperature, flow rate and length have a significant effect on the efficiency of the air collector. Additionally, the results are compared with those of a single-flow V-groove collector.
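For readers unfamiliar with how collector performance is summarised, the sketch below computes the usual thermal efficiency ratio (useful heat gain over incident solar radiation). The numbers are illustrative assumptions, not values from the paper, and this is not the authors' MATLAB model.

```python
def collector_efficiency(m_dot, t_in, t_out, area, irradiance, cp_air=1006.0):
    """Thermal efficiency: useful heat gain divided by incident radiation.

    m_dot in kg/s, temperatures in K (or C, only the difference matters),
    area in m^2, irradiance in W/m^2, cp_air in J/(kg K).
    """
    q_useful = m_dot * cp_air * (t_out - t_in)   # W
    return q_useful / (area * irradiance)        # dimensionless

# Illustrative figures only
print(round(collector_efficiency(m_dot=0.04, t_in=300.0, t_out=325.0,
                                 area=2.0, irradiance=900.0), 2))
```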
Abstract:
1. Essential hypertension occurs in people with an underlying genetic predisposition who subject themselves to adverse environmental influences. The number of genes involved is unknown, as is the extent to which each contributes to final blood pressure and to the severity of the disease. 2. In the past, studies of potential candidate genes have been performed by association (case-control) analysis of unrelated individuals or by linkage (pedigree or sib-pair) analysis of families. These studies have produced several positive findings but, as one might expect, also an enormous number of negative results. 3. In order to uncover the major genetic loci for essential hypertension, it is proposed that systematically scanning the genome in 100-200 affected sibships should prove successful. 4. This involves genotyping sets of hypertensive sibships to determine their complement of several hundred microsatellite polymorphisms. Markers that are highly informative, by virtue of high heterozygosity, are most suitable. The markers also need to be spaced sufficiently evenly across the genome to ensure adequate coverage. 5. Tests are then performed to detect increased segregation of alleles of each marker with hypertension. The analytical tools are specialized statistical programs that can detect such differences; non-parametric multipoint analysis is an appropriate approach. 6. In this way, loci for essential hypertension are beginning to emerge.
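Marker informativeness, as discussed in point 4, is usually quantified by expected heterozygosity. A minimal sketch with hypothetical allele frequencies:

```python
def expected_heterozygosity(allele_freqs):
    """H = 1 - sum(p_i^2); higher values mean a more informative marker."""
    return 1.0 - sum(p * p for p in allele_freqs)

# A microsatellite with many, evenly spread alleles is highly informative
print(round(expected_heterozygosity([0.25, 0.25, 0.20, 0.15, 0.15]), 2))  # 0.79
```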
Abstract:
Migraine is a frequent familial disorder that, in common with most multifactorial disorders, has an unknown etiology. The authors identified several families with multiple individuals affected by typical migraine, using a single set of diagnostic criteria, and studied these families for cosegregation between the disorder and markers on chromosome 19, the location of a mutation that causes a rare form of familial hemiplegic migraine (FHM). One large tested family showed both cosegregation and significant allele sharing for markers situated within or adjacent to the FHM locus. Multipoint GENEHUNTER results indicated significant excess allele sharing across a 12.6-cM region containing the FHM Ca2+ channel gene, CACNL1A4 (maximum nonparametric linkage Z score = 6.64, p = 0.0026), with a maximum parametric lod score of 1.92 obtained for a (CAG)n triplet repeat polymorphism situated in exon 47 of this gene. The CAG expansion did not, however, appear to be the cause of migraine in this pedigree. Other tested families showed neither cosegregation nor excess allele sharing with chromosome 19 markers. HOMOG analysis indicated heterogeneity, generating a maximum HLOD score of 3.6. It was concluded that chromosome 19 mutations, either in the CACNL1A4 gene or in a closely linked gene, are implicated in some pedigrees with familial typical migraine, and that the disorder is genetically heterogeneous.
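The HOMOG-style heterogeneity analysis mentioned above maximises an admixture lod over the proportion of linked families. A small sketch with hypothetical per-family lod scores (not the study's values):

```python
import numpy as np

def hlod(family_lods, alphas=np.linspace(0.0, 1.0, 101)):
    """Admixture (HOMOG-style) heterogeneity lod.

    HLOD(alpha) = sum_i log10( alpha * 10**lod_i + (1 - alpha) ),
    maximised over alpha, the proportion of linked families.
    """
    lods = np.asarray(family_lods, dtype=float)
    scores = [np.sum(np.log10(a * 10.0 ** lods + (1.0 - a))) for a in alphas]
    best = int(np.argmax(scores))
    return scores[best], alphas[best]

# Hypothetical per-family lod scores: one linked family, several unlinked
print(hlod([1.9, -0.4, -0.6, -0.3]))
```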
Abstract:
This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models incorporate multivariate penalised splines, random walks and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Child" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random-walk components of the models reveal how the daily trend in PNC changes over the year in Helsinki, and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC at the Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
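As a rough illustration of the penalised-spline idea underlying these models (univariate only, without the multivariate, random-walk and autoregressive components, and fitted to synthetic data rather than the Brisbane or Helsinki measurements):

```python
import numpy as np

def fit_pspline(x, y, n_knots=20, lam=1.0):
    """Penalised spline with a truncated-linear basis and a ridge penalty
    on the knot coefficients (the polynomial part is left unpenalised)."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    B = np.column_stack([np.ones_like(x), x,
                         np.clip(x[:, None] - knots[None, :], 0, None)])
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)   # penalise knot terms only
    beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return knots, beta

def predict_pspline(x, knots, beta):
    B = np.column_stack([np.ones_like(x), x,
                         np.clip(x[:, None] - knots[None, :], 0, None)])
    return B @ beta

# Smooth a noisy daily-cycle-like signal (synthetic data)
rng = np.random.default_rng(0)
t = np.linspace(0, 24, 300)
y = np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)
knots, beta = fit_pspline(t, y, n_knots=15, lam=0.5)
yhat = predict_pspline(t, knots, beta)
```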
Abstract:
The University of Queensland UltraCommuter project is the demonstration of an ultra-lightweight, low-drag, energy-efficient and low-polluting electric commuter vehicle equipped with a 2.5 m2 on-board solar array. A key goal of the project is to make the vehicle predominantly self-sufficient from solar power for normal driving purposes, so that it does not require charging or refuelling from off-board sources. This paper examines the technical feasibility of the solar-powered commuter vehicle concept as it applies to the UltraCommuter project. A parametric description of a solar-powered commuter vehicle is presented. Real solar insolation data are then used to predict the solar driving range for the UltraCommuter, and this is compared to typical urban usage patterns for commuter vehicles in Queensland. A comparative analysis of annual greenhouse gas emissions from the vehicle is also presented. The results show that the UltraCommuter’s on-board solar array can provide substantial supplementation of the energy required for normal driving, powering 90% of annual travel needs for an average QLD passenger vehicle. The vehicle also has excellent potential to reduce annual greenhouse gas emissions from the private transport sector, achieving a 98% reduction in CO2 emissions compared to the average QLD passenger vehicle. Lastly, the vehicle battery pack provides tolerance to consecutive days of poor weather without resorting to grid charging, giving uninterrupted functionality to the user. These results hold great promise for the technical feasibility of the solar-powered commuter vehicle concept.
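The parametric reasoning that links array area, insolation and energy consumption to a solar driving range can be sketched as follows; all figures are illustrative assumptions, not the UltraCommuter's published parameters.

```python
def daily_solar_range_km(array_area_m2, insolation_kwh_m2_day,
                         array_eff, drivetrain_eff, consumption_kwh_per_km):
    """Distance per day that the on-board array alone could supply."""
    harvested_kwh = (array_area_m2 * insolation_kwh_m2_day
                     * array_eff * drivetrain_eff)
    return harvested_kwh / consumption_kwh_per_km

# Illustrative figures only: 2.5 m2 array, 5 kWh/m2/day insolation,
# 18% cell efficiency, 85% system efficiency, 60 Wh/km consumption
print(round(daily_solar_range_km(2.5, 5.0, 0.18, 0.85, 0.06), 1))  # ~31.9 km
```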
Abstract:
Early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties from such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of the residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks on soil stability is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including the slope, roads, rivers, buildings, and boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks; the two ERT array configurations employed in this research were dipole-dipole and azimuthal. Next, bore-hole tests were conducted at different locations in the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, Standard Penetration Tests (SPT) were undertaken. Undisturbed soil samples taken from the bore-holes were tested in a laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties, using direct shear apparatus.
- Soil water characteristic curves (SWCC), using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of the sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from the bore-holes and visual observations of cracks on the slope surface.
(3) The stress distribution obtained from a 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area.
It was assumed that the deep crack in the slope under investigation was generated by earthquakes. A good agreement was obtained when comparing the location and orientation of the cracks detected by Method-1 and Method-2. However, the simulated cracks in Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis. From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for the stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained from a coupled transient seepage analysis of the slope using the finite-element-based software SEEP/W. A parametric study on the stability of the investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks:
(a) Step-1: Transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the current date, using measured rainfall data. The stability analyses are then continued for the next 12 months using annual rainfall predicted from the previous five years' rainfall data for the area.
(b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours and the stability of the slope is calculated one hour or 24 hours in advance using real-time rainfall data.
If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope. In this research, the application of Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for the year 2012 (until 31st December 2012); therefore, the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide occurred at Slope-2 on 31st October 2010. Transient seepage and stability analyses of the slope, using data obtained from field tests (bore-holes, SPT, ERT) and laboratory tests, were conducted on 12th June 2010 following Step-1, and found that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure with sufficient warning time.
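As a much simpler illustration of how rainfall-induced pore-water pressure lowers a factor of safety (an infinite-slope check, not the SLOPE/W limit-equilibrium analysis with SEEP/W seepage coupling used in the thesis; all values are hypothetical):

```python
import math

def infinite_slope_fos(c_eff, phi_eff_deg, gamma, depth, beta_deg, pore_pressure):
    """Infinite-slope factor of safety with pore-water pressure u on the
    slip plane:
    FS = [c' + (gamma*z*cos^2(b) - u)*tan(phi')] / (gamma*z*sin(b)*cos(b))
    Units: kPa for c' and u, kN/m^3 for gamma, m for depth, degrees for angles.
    """
    b = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    normal_eff = gamma * depth * math.cos(b) ** 2 - pore_pressure
    resisting = c_eff + normal_eff * math.tan(phi)
    driving = gamma * depth * math.sin(b) * math.cos(b)
    return resisting / driving

# Rising pore pressure (e.g. after rainfall infiltration) lowers the FOS
print(round(infinite_slope_fos(5.0, 30.0, 18.0, 3.0, 35.0, 0.0), 2))   # ~1.02
print(round(infinite_slope_fos(5.0, 30.0, 18.0, 3.0, 35.0, 15.0), 2))  # ~0.68
```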
Abstract:
This paper seeks to explain the lagging productivity in Singapore’s manufacturing noted in the statements of the Economic Strategies Committee Report 2010. Two methods are employed: the Malmquist productivity index, to measure total factor productivity (TFP) change, and Simar and Wilson's (2007) bootstrapped truncated regression approach, in which bias-corrected efficiency estimates are first derived and then regressed against explanatory variables to help quantify sources of inefficiency. The findings reveal that growth in total factor productivity was attributable to efficiency change, with no technical progress. Sources of efficiency were attributed to worker quality and flexible work arrangements, while the use of foreign workers lowered efficiency.
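For reference, the Malmquist TFP index and its usual decomposition into efficiency change and technical change can be computed from four distance-function values (obtained in practice from DEA runs). The sketch below uses hypothetical scores, not the study's estimates.

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist TFP index and its Fare et al. decomposition.

    d_a_b = distance function under period-a technology evaluated at
    period-b input/output data. Returns (TFP change, efficiency change,
    technical change), with TFP change = EC * TC.
    """
    eff_change = d_t1_t1 / d_t_t
    tech_change = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))
    return eff_change * tech_change, eff_change, tech_change

# Hypothetical distance-function scores for one manufacturing unit
tfp, ec, tc = malmquist(d_t_t=0.82, d_t_t1=0.95, d_t1_t=0.78, d_t1_t1=0.88)
print(round(tfp, 3), round(ec, 3), round(tc, 3))
```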
Abstract:
Understanding public transport travel time variability (PTTV) is essential for explaining deteriorations in travel time reliability and for optimizing transit schedules and route choices. This paper establishes key definitions of PTTV, the first including all buses on a route and the second including only a single service from a bus route. The paper then analyses the day-to-day distribution of public transport travel time using Transit Signal Priority data. A comprehensive approach combining a parametric bootstrap Kolmogorov-Smirnov test with the Bayesian Information Criterion is developed, and recommends the lognormal distribution as the best descriptor of bus travel time on urban corridors. The probability density function of the lognormal distribution is then used to calculate probability indicators of PTTV. The findings of this study are useful to both traffic managers and statisticians for planning and research on transit systems.
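A minimal sketch of a parametric-bootstrap Kolmogorov-Smirnov test for a lognormal fit is given below. It uses synthetic travel times and generic scipy calls, and is not the authors' implementation; the bootstrap is needed because the usual KS p-value is invalid once the distribution's parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats

def parametric_bootstrap_ks_lognormal(x, n_boot=999, seed=0):
    """Parametric-bootstrap KS test of a lognormal fit: the null
    distribution of the KS statistic is rebuilt by simulating from the
    fitted lognormal and refitting each replicate."""
    rng = np.random.default_rng(seed)
    shape, loc, scale = stats.lognorm.fit(x, floc=0)
    d_obs = stats.kstest(x, 'lognorm', args=(shape, loc, scale)).statistic

    d_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = stats.lognorm.rvs(shape, loc=loc, scale=scale,
                               size=len(x), random_state=rng)
        sb, lb, scb = stats.lognorm.fit(xb, floc=0)
        d_boot[b] = stats.kstest(xb, 'lognorm', args=(sb, lb, scb)).statistic

    p_value = (1 + np.sum(d_boot >= d_obs)) / (n_boot + 1)
    return d_obs, p_value

# Synthetic "travel times" (minutes); real data would come from TSP records
times = stats.lognorm.rvs(0.3, scale=20, size=200, random_state=1)
print(parametric_bootstrap_ks_lognormal(times, n_boot=200))
```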
Abstract:
Prophylactic surgery, including hysterectomy and bilateral salpingo-oophorectomy (BSO), is recommended in BRCA-positive women, while in women from the general population hysterectomy plus BSO may increase the risk of overall mortality. The effect of hysterectomy plus BSO on women previously diagnosed with breast cancer is unknown. We used data from a population-based data linkage study of all women diagnosed with primary breast cancer in Queensland, Australia between 1997 and 2008 (n=21,067). We fitted flexible parametric breast cancer-specific and overall survival models (also known as Royston-Parmar models) with 95% confidence intervals to assess the impact of risk-reducing surgery (removal of the uterus and one or both ovaries). We also stratified analyses by age, 20-49 and 50-79 years. Overall, 1,426 women (7%) underwent risk-reducing surgery (13% of premenopausal women and 3% of postmenopausal women). No woman who had risk-reducing surgery developed a gynaecological cancer, compared with 171 women who did not have risk-reducing surgery. Overall, 3,165 (15%) women died, including 2,195 (10%) from breast cancer. Hysterectomy plus BSO was associated with a significantly reduced risk of death overall (adjusted HR = 0.69, 95% CI 0.53-0.89; P = 0.005). Risk reduction was greater among premenopausal women, whose risk of death halved (HR, 0.45; 95% CI, 0.25-0.79; P < 0.006). This was largely driven by a reduction in breast cancer-specific mortality (HR, 0.43; 95% CI, 0.24-0.79; P < 0.006). This population-based study found that risk-reducing surgery halved the mortality risk for premenopausal breast cancer patients. Replication of our results in independent cohorts, and subsequently in randomised trials, is needed to confirm these findings.
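As a simpler stand-in for the flexible parametric (Royston-Parmar) models used in the study, a Cox proportional hazards fit with the lifelines library illustrates how a surgery indicator and age would enter a survival model and yield adjusted hazard ratios. The data frame below is entirely hypothetical, not the linked Queensland data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical records: follow-up time, death indicator, and covariates
df = pd.DataFrame({
    "time_years":            [2.1, 5.4, 8.0, 3.3, 6.7, 1.2, 4.8, 7.5],
    "died":                  [1,   0,   0,   1,   1,   1,   0,   0],
    "risk_reducing_surgery": [0,   1,   1,   0,   1,   0,   0,   1],
    "age_at_diagnosis":      [45,  52,  61,  38,  49,  70,  55,  43],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="died")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% CIs
```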
Abstract:
Hot spot identification (HSID) aims to identify potential sites—roadway segments, intersections, crosswalks, interchanges, ramps, etc.—with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology may result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to the misuse of available public funds, to poor investment decisions, and to inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, difficulty in accounting for crash severity within the methodology, and selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean as most methods in practice do, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing with a preponderance of zeros and right-skewed data.
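A minimal sketch of regressing an upper quantile rather than the mean, using synthetic segment-level data and statsmodels' quantile regression (not the authors' implementation or the Korean data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic segment data: equivalent-PDO crash scores vs traffic exposure
rng = np.random.default_rng(42)
aadt = rng.uniform(2_000, 30_000, 400)
epdo = np.maximum(0, 0.002 * aadt * rng.lognormal(0, 0.8, 400) - 5)
segments = pd.DataFrame({"epdo": epdo, "log_aadt": np.log(aadt)})

# Model the 90th percentile of equivalent-PDO crashes rather than the mean,
# mirroring how high-risk (hot spot) sites sit in the upper tail
q90 = smf.quantreg("epdo ~ log_aadt", segments).fit(q=0.90)
print(q90.params)
```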
Abstract:
Fire safety has become an important part of structural design due to the ever-increasing loss of property and lives in fires. Conventionally, the fire rating of load-bearing wall systems made of light gauge steel frames (LSF) is determined using fire tests based on the standard time-temperature curve in ISO 834 [1]. However, modern commercial and residential buildings make use of thermoplastic materials, which means considerably higher fuel loads. Hence a detailed research study into the fire performance of LSF walls was undertaken using realistic design fire curves developed from the Eurocode parametric [2] and Barnett's BFD [3] curves, based on both full-scale fire tests and numerical studies. It included LSF walls without cavity insulation and the recently developed externally insulated composite panel system. This paper presents the details of finite element models developed to simulate the full-scale fire tests of LSF wall panels under realistic design fires. Finite element models of LSF walls exposed to realistic design fires were developed and analysed under both transient and steady-state fire conditions using the measured stud time-temperature curves. Transient-state analyses were performed to simulate fire test conditions, while steady-state analyses were performed to obtain the load ratio versus time and failure temperature curves of LSF walls. Details of the developed finite element models and the results, including the axial deformation and lateral deflection versus time curves and the stud failure modes and times, are presented in this paper. Comparison with fire test results demonstrates the ability of the developed finite element models to predict the performance and fire resistance ratings of LSF walls under realistic design fires.
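For context, the ISO 834 standard curve and the heating phase of the Eurocode parametric fire curve can be written down directly. The sketch below treats the compartment factor Γ as a free input and is only an illustration of the curve shapes, not the design fires actually used in the tests.

```python
import numpy as np

def iso834(t_min):
    """ISO 834 standard fire curve (gas temperature in C, t in minutes)."""
    return 20.0 + 345.0 * np.log10(8.0 * t_min + 1.0)

def eurocode_parametric_heating(t_min, gamma=1.0):
    """Heating phase of the Eurocode (EN 1991-1-2 Annex A) parametric curve.

    gamma is the compartment factor built from the opening factor and the
    thermal inertia of the linings; gamma = 1 gives a curve close to ISO 834.
    """
    t_star = gamma * t_min / 60.0   # expanded time in hours
    return 20.0 + 1325.0 * (1.0 - 0.324 * np.exp(-0.2 * t_star)
                            - 0.204 * np.exp(-1.7 * t_star)
                            - 0.472 * np.exp(-19.0 * t_star))

t = np.arange(0, 121, 5)
print(np.round(iso834(t[1:5])))
print(np.round(eurocode_parametric_heating(t[1:5], gamma=2.0)))
```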
Abstract:
An important aspect of decision support systems involves applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging as they are ‘doubly stochastic’, i.e. obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches to the problem either use simple models that obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation, but do use sophisticated 3D visualization for interpretation; or they employ sophisticated non-parametric Bayesian Cox process models but do not use visualization to render complex spatio-temporal forecasts interpretable. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model over a large discretized grid, and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.
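A forward simulation of a log-Gaussian Cox process on a discretized grid makes the ‘doubly stochastic’ structure concrete; this sketches simulation only, not the approximate inference or 3D visualization described in the abstract, and the grid size and kernel settings are arbitrary.

```python
import numpy as np

def simulate_lgcp_grid(n=40, cell_area=1.0, mean_log_rate=-2.0,
                       variance=1.0, length_scale=5.0, seed=0):
    """Simulate a log-Gaussian Cox process on an n x n grid.

    A Gaussian field with squared-exponential covariance is sampled on the
    grid, exponentiated to give the intensity, and cell counts are drawn
    from Poisson(intensity * cell_area) -- the doubly stochastic step.
    """
    rng = np.random.default_rng(seed)
    xy = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1).reshape(-1, 2)
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    cov = variance * np.exp(-0.5 * d2 / length_scale ** 2)
    cov[np.diag_indices_from(cov)] += 1e-8          # jitter for Cholesky
    field = mean_log_rate + np.linalg.cholesky(cov) @ rng.standard_normal(n * n)
    intensity = np.exp(field)
    counts = rng.poisson(intensity * cell_area)
    return intensity.reshape(n, n), counts.reshape(n, n)

intensity, counts = simulate_lgcp_grid()
print(counts.sum(), round(float(intensity.mean()), 3))
```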