986 results for sample complexity


Relevance: 20.00%

Publisher:

Abstract:

Brain asymmetry has been a topic of interest for neuroscientists for many years. The advent of diffusion tensor imaging (DTI) allows researchers to extend the study of asymmetry to a microscopic scale by examining fiber integrity differences across hemispheres, rather than macroscopic differences in shape or structure volumes. Even so, the power to detect these microarchitectural differences depends on the sample size and on how the brain images are registered. We fluidly registered 4 Tesla DTI scans from 180 healthy adult twins (45 identical and 45 fraternal pairs) to a geometrically-centered population mean template. We computed voxelwise maps of significant asymmetries (left/right hemisphere differences) for common fiber anisotropy indices (FA, GA). Quantitative genetic models revealed that 47-62% of the variance in asymmetry was due to genetic differences in the population. We studied how these heritability estimates varied with the type of registration target (T1- or T2-weighted) and with sample size. All methods consistently found that genetic factors strongly determined the lateralization of fiber anisotropy, facilitating the quest for specific genes that might influence brain asymmetry and fiber integrity.
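The abstract above fits full quantitative genetic (ACE-type) models; the simplest back-of-envelope version of the same idea is Falconer's formula, h² = 2(r_MZ − r_DZ), which estimates heritability from identical (MZ) and fraternal (DZ) twin-pair correlations. The sketch below uses made-up correlations chosen only to land in the study's reported 47-62% range; it is not the model the authors fitted.

```python
# Falconer's formula: heritability from twin-pair correlations.
# h^2 = 2 * (r_MZ - r_DZ), because MZ twins share ~100% of genes and DZ ~50%.

def falconer_h2(r_mz, r_dz):
    """Estimate the fraction of trait variance attributable to genetics."""
    return 2 * (r_mz - r_dz)

# Hypothetical asymmetry-score correlations (illustrative values only):
h2 = falconer_h2(0.70, 0.42)
print(h2)  # ~0.56, i.e. roughly 56% of variance genetic
```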

Origin-destination matrix (ODM) estimation can benefit from the availability of sample trajectories, which can now be measured thanks to recent technologies. This paper focuses on transport networks where traffic counts are measured by magnetic loops and sample trajectories are available. An example of such a network is the city of Brisbane, where Bluetooth detectors are now operating. This additional data source is used to extend classical ODM estimation to a link-specific ODM (LODM), using a convex optimisation formulation that incorporates network constraints as well. The proposed algorithm is assessed on a simulated network.
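To give a flavour of the kind of convex, constraint-respecting formulation involved (this is a minimal stand-in, not the paper's LODM algorithm), the sketch below recovers an OD vector from link counts by projected gradient descent on a least-squares objective with a nonnegativity constraint; the toy routing matrix and counts are invented.

```python
# Minimal sketch: estimate an OD demand vector x >= 0 from link counts b ~ A x,
# where A[i][j] = 1 if OD pair j uses link i (a toy link-path incidence matrix).
# Projected gradient descent on ||A x - b||^2 keeps the iterate feasible.

def estimate_od(A, b, steps=5000, lr=0.01):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(A))]
        # gradient g = 2 A^T r
        g = [2 * sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # gradient step, then project onto the feasible set x >= 0
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# Toy network: 3 counted links, 2 OD pairs; pair 0 uses links 0 and 2,
# pair 1 uses links 1 and 2. Counts are consistent with demand [30, 50].
A = [[1, 0], [0, 1], [1, 1]]
b = [30.0, 50.0, 80.0]
x = estimate_od(A, b)
print([round(v, 1) for v in x])  # -> [30.0, 50.0]
```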

Monitoring pedestrian and cyclist movement is an important area of research in transport, crowd safety, urban design, and human behaviour assessment. Media Access Control (MAC) address data has recently been used as a potential source of information for extracting features of people's movement. MAC addresses are unique identifiers of the WiFi and Bluetooth wireless interfaces in smart electronic devices such as mobile phones, laptops, and tablets. The unique number of each WiFi and Bluetooth MAC address can be captured and stored by MAC address scanners. MAC address data therefore allows unannounced, non-participatory tracking of people, and has recently been applied in mass events, shopping centres, airports, train stations, etc. For travel-time estimation, setting up a scanner with a high antenna gain is usually recommended on highways and main roads to track vehicle movements, whereas high gains can have drawbacks for pedestrians and cyclists. Pedestrians and cyclists mainly move through built-up areas and city pathways where there is significant noise from fixed WiFi and Bluetooth devices. High antenna gains cover wide areas, which yields more samples from pedestrians' and cyclists' MAC devices; however, anomalies (such as fixed devices) may also be captured, which increases the complexity and processing time of the data analysis. On the other hand, low-gain antennas capture fewer anomalies, but at the cost of a lower overall sample size of pedestrian and cyclist data. This paper studies the effect of antenna characteristics on MAC address data for pedestrian and cyclist travel-time estimation. The results of an empirical case study compare the effects of low and high antenna gains in order to suggest an optimal setup for increasing the accuracy of pedestrian and cyclist travel-time estimation.
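The core matching step such studies rely on can be sketched as follows. This is an illustrative toy, not the paper's pipeline: it pairs MAC detections between two scanners and drops "anomalies" (devices dwelling at one scanner so long that they look fixed, as a high-gain antenna would pick up); the dwell threshold and logs are invented.

```python
# Estimate travel times between two MAC scanners from (mac, timestamp) logs,
# discarding devices that appear fixed (long detection span at one scanner).
from statistics import median

def travel_times(log_a, log_b, max_dwell=120):
    """log_a/log_b: lists of (mac, seconds). Devices detected at a single
    scanner over a span longer than max_dwell are treated as fixed noise."""
    def first_last(log):
        seen = {}
        for mac, t in log:
            lo, hi = seen.get(mac, (t, t))
            seen[mac] = (min(lo, t), max(hi, t))
        return seen
    a, b = first_last(log_a), first_last(log_b)
    times = []
    for mac in a.keys() & b.keys():
        # fixed devices (e.g. a shop's WiFi) show very long detection spans
        if a[mac][1] - a[mac][0] > max_dwell or b[mac][1] - b[mac][0] > max_dwell:
            continue
        times.append(b[mac][0] - a[mac][1])  # last seen at A -> first seen at B
    return times

log_a = [("aa", 0), ("bb", 5), ("ff", 0), ("ff", 900)]   # "ff" looks fixed
log_b = [("aa", 300), ("bb", 310), ("ff", 10)]
ts = travel_times(log_a, log_b)
print(sorted(ts), median(ts))  # -> [300, 305] 302.5
```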

Objective The Nintendo Wii Fit integrates virtual gaming with body movement, and may be suitable as an adjunct to conventional physiotherapy following lower limb fractures. This study examined the feasibility and safety of using the Wii Fit as an adjunct to outpatient physiotherapy following lower limb fractures, and reports sample size considerations for an appropriately powered randomised trial. Methodology Ambulatory patients receiving physiotherapy following a lower limb fracture participated in this study (n = 18). All participants received usual care (individual physiotherapy). The first nine participants also used the Wii Fit under the supervision of their treating clinician as an adjunct to usual care. Adverse events, fracture malunion or exacerbation of symptoms were recorded. Pain, balance and patient-reported function were assessed at baseline and discharge from physiotherapy. Results No adverse events were attributed to either the usual care physiotherapy or Wii Fit intervention for any patient. Overall, 15 (83%) participants completed both assessments and interventions as scheduled. For 80% power in a clinical trial, the number of complete datasets required in each group to detect a small, medium or large effect of the Wii Fit at a post-intervention assessment was calculated at 175, 63 and 25, respectively. Conclusions The Nintendo Wii Fit was safe and feasible as an adjunct to ambulatory physiotherapy in this sample. When considering a likely small effect size and the 17% dropout rate observed in this study, 211 participants would be required in each clinical trial group. A larger effect size or multiple repeated measures design would require fewer participants.
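The reported group sizes are consistent with the standard two-sample normal-approximation formula, n per group = 2·((z₁₋α/2 + z₁₋β)/d)². Assuming α = 0.05, 80% power, and effect sizes d = 0.3, 0.5, 0.8 for "small/medium/large" (the effect sizes are my inference, not stated in the abstract), this reproduces 175, 63, and 25, and the 17% dropout inflation reproduces 211:

```python
# Two-sample sample-size formula: n per group = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Participants per group to detect standardized effect size d."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

ns = [n_per_group(d) for d in (0.3, 0.5, 0.8)]
print(ns)  # -> [175, 63, 25]

# Inflate the small-effect figure for the observed 17% dropout rate:
n_adj = ceil(ns[0] / (1 - 0.17))
print(n_adj)  # -> 211
```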

Objective To test the hypothesis that the age at onset of bipolar disorder identifies a developmental subtype of bipolar disorder in adults characterized by increased levels of irritability, chronic course, rapid cycling, and comorbidity with attention deficit hyperactivity disorder. Methods Forty-four adult subjects diagnosed with bipolar disorder were selected from large family studies of youth with and without attention deficit hyperactivity disorder. These subjects were stratified by age at onset: childhood (younger than 13 years; n = 8, 18%), adolescence (13–18 years; n = 12, 27%), or adulthood (19 years or older; n = 24, 55%). All subjects were administered structured diagnostic interviews and a brief cognitive battery. Results In contrast with adult-onset bipolar disorder, child-onset bipolar disorder was associated with a longer duration of illness, more irritability than euphoria, a mixed presentation, a more chronic or rapid-cycling course, and increased comorbidity with childhood disruptive behavior disorders and anxiety disorders. Conclusion Stratification by age at onset of bipolar disorder identified subgroups of adult subjects with differing clinical correlates. This pattern of correlates is consistent with findings documented in children with pediatric bipolar disorder and supports the hypothesis that child-onset bipolar disorder may represent a developmental subtype of the disorder.

Several statistical procedures available in the literature are employed in developing the water quality index (WQI). The complexity and interdependency of the physical and chemical processes in water could be more easily explained if statistical approaches were applied to water quality indexing. The most popular statistical method used in developing a WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized; outliers may be retained in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in classical PCA are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to re-calculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to that of the existing DOE-WQI.

This study investigated a new performance indicator for assessing climbing fluency (smoothness of the hip trajectory and orientation of a climber, using normalized jerk coefficients) to explore the effects of practice and hold design on performance. Eight experienced climbers completed four repetitions of two 10-m-high routes of similar difficulty that varied in hold graspability (holds with one edge vs holds with two edges). An inertial measurement unit was attached to the hips of each climber to collect 3D acceleration and 3D orientation data from which jerk coefficients were computed. Results showed high correlations (r = .99, P < .05) between the normalized jerk coefficients of hip trajectory and orientation. Normalized jerk coefficients were higher for the route with two graspable edges, perhaps due to more complex route finding and action regulation behaviors; this effect decreased with practice. The jerk coefficient of hip trajectory and orientation could be a useful indicator of climbing fluency for coaches, as its computation takes into account both spatial and temporal parameters (i.e., changes in both the climbing trajectory and the time taken to travel it).
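A minimal sketch of a dimensionless jerk index of this kind is below. It uses one common normalization, sqrt(0.5·∫j²dt·T⁵/L²), on a 1-D position signal (the study used 3D hip trajectory and orientation, and its exact coefficient may differ); lower values indicate smoother movement.

```python
# Dimensionless jerk smoothness index from sampled positions.
from math import sqrt

def normalized_jerk(xs, dt):
    """xs: 1-D position samples at interval dt. Lower = smoother movement."""
    # third finite difference approximates jerk (third derivative of position)
    jerk = [(xs[i + 3] - 3 * xs[i + 2] + 3 * xs[i + 1] - xs[i]) / dt ** 3
            for i in range(len(xs) - 3)]
    T = (len(xs) - 1) * dt                                        # duration
    L = sum(abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1))   # path length
    integral = sum(j * j for j in jerk) * dt
    return sqrt(0.5 * integral * T ** 5 / L ** 2)                 # dimensionless

dt = 0.01
smooth = [t * 0.01 for t in range(101)]                           # steady ascent
jerky = [t * 0.01 + (0.02 if t % 10 < 5 else 0.0) for t in range(101)]
nj_smooth = normalized_jerk(smooth, dt)
nj_jerky = normalized_jerk(jerky, dt)
print(nj_smooth < nj_jerky)  # -> True: the steady climb scores as smoother
```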

INTRODUCTION Although the high heritability of BMD variation has long been established, few genes have been conclusively shown to affect the variation of BMD in the general population. Extreme truncate selection has been proposed as a more powerful alternative to unselected cohort designs in quantitative trait association studies. We sought to test these theoretical predictions in studies of the bone densitometry measures BMD, BMC, and femoral neck area, by investigating their association with members of the Wnt pathway, some of which have previously been shown to be associated with BMD in much larger cohorts, in a moderate-sized extreme truncate selected cohort (absolute value BMD Z-scores = 1.5-4.0; n = 344). MATERIALS AND METHODS Ninety-six tag single-nucleotide polymorphisms (SNPs) were selected to tag common genetic variation (minor allele frequency [MAF] > 5%, with r(2) > 0.8) within 5 kb of all exons of 13 Wnt signaling pathway genes: LRP1, LRP5, LRP6, Wnt3a, Wnt7b, Wnt10b, SFRP1, SFRP2, DKK1, DKK2, FZD7, WISP3, and SOST. Three hundred forty-four cases with either high or low BMD were genotyped by Illumina GoldenGate microarray SNP genotyping. Association was tested by the Cochran-Armitage test for dichotomous variables or by linear regression for quantitative traits. RESULTS Strong association was shown with LRP5, polymorphisms of which have previously been shown to influence total hip BMD (minimum p = 0.0006). In addition, polymorphisms of the Wnt antagonist SFRP1 were significantly associated with BMD and BMC (minimum p = 0.00042). Previously reported associations of LRP1, LRP6, and SOST with BMD were confirmed. Two other Wnt pathway genes, Wnt3a and DKK2, also showed nominal association with BMD. CONCLUSIONS This study shows that polymorphisms of multiple members of the Wnt pathway are associated with BMD variation.
Furthermore, this study shows in a practical trial that study designs involving extreme truncate selection and moderate sample sizes can robustly identify genes of relevant effect sizes involved in BMD variation in the general population. This has implications for the design of future genome-wide studies of quantitative bone phenotypes relevant to osteoporosis.

We consider the problem of deciding whether the output of a boolean circuit is determined by a partial assignment to its inputs. This problem is easily shown to be hard, i.e., co-NP-complete. However, many of the consequences of a partial input assignment may be determined in linear time, by iterating the following step: if we know the values of some inputs to a gate, we can deduce the values of some outputs of that gate. This process of iteratively deducing some of the consequences of a partial assignment is called propagation. This paper explores the parallel complexity of propagation, i.e., the complexity of determining whether the output of a given boolean circuit is determined by propagating a given partial input assignment. We give a complete classification of the problem into those cases that are P-complete and those that are unlikely to be P-complete.
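The propagation step described above can be sketched as a fixed-point loop (an illustration of the idea, not the paper's construction): a gate's output is deduced as soon as the known inputs force it, e.g. a single 0 input already forces an AND gate to 0.

```python
# Toy constant propagation over a gate list until no more values can be deduced.

def propagate(gates, known):
    """gates: list of (out_wire, op, in_wire1, in_wire2); known: wire -> 0/1.
    Returns the extended assignment after propagation reaches a fixed point."""
    known = dict(known)
    changed = True
    while changed:
        changed = False
        for out, op, a, b in gates:
            if out in known:
                continue
            va, vb = known.get(a), known.get(b)
            val = None
            if op == "AND":
                if 0 in (va, vb): val = 0            # one 0 forces the AND
                elif va == 1 and vb == 1: val = 1
            elif op == "OR":
                if 1 in (va, vb): val = 1            # one 1 forces the OR
                elif va == 0 and vb == 0: val = 0
            if val is not None:
                known[out] = val
                changed = True
    return known

# out = (x AND y) OR z; knowing only z = 1 already determines out = 1.
gates = [("g", "AND", "x", "y"), ("out", "OR", "g", "z")]
result = propagate(gates, {"z": 1})
print(result.get("out"))  # -> 1, while gate "g" stays undetermined
```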

Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project that is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is in fact smaller than that required by the traditional t-test, illustrating the merit of our method.
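The reason quantiles sidestep imputation can be shown in a few lines (a sketch of the underlying invariance, not the paper's actual test): as long as the censored fraction is smaller than the quantile level, the quantile is identical whatever values the below-detection-limit observations are assigned.

```python
# An upper quantile is invariant to how below-detection-limit values are filled.
from statistics import quantiles

def q80(values):
    """Inclusive 80th percentile of a sample."""
    return quantiles(values, n=10, method="inclusive")[7]

# None marks a censored (below detection limit, DL = 0.5) observation.
data = [None, None, None, 0.6, 0.7, 0.9, 1.1, 1.4, 1.8, 2.5]  # 30% censored
lo = q80([0.0 if v is None else v for v in data])  # censored filled with 0
hi = q80([0.5 if v is None else v for v in data])  # censored filled with the DL
print(lo == hi)  # -> True: the 80th percentile never touches the censored tail
```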

Consider a general regression model with an arbitrary and unknown link function and a stochastic selection variable that determines whether the outcome variable is observable or missing. The paper proposes U-statistics that are based on kernel functions as estimators for the directions of the parameter vectors in the link function and the selection equation, and shows that these estimators are consistent and asymptotically normal.

We address the issue of complexity for vector quantization (VQ) of wide-band speech LSF (line spectrum frequency) parameters. The recently proposed switched split VQ (SSVQ) method provides better rate-distortion (R/D) performance than the traditional split VQ (SVQ) method, and even requires lower computational complexity, but at the expense of much higher memory. We develop the two-stage SVQ (TsSVQ) method, which gains both the memory and computational advantages while still retaining good R/D performance. The proposed TsSVQ method uses a full-dimensional quantizer in its first stage to exploit all the higher-dimensional coding advantages, and then uses an SVQ method to quantize the residual vector in the second stage so as to reduce complexity. We also develop a transform-domain residual coding method in this two-stage architecture that further reduces the computational complexity. To design an effective residual codebook in the second stage, variance normalization of Voronoi regions is carried out, which leads to the design of two new methods, referred to as normalized two-stage SVQ (NTsSVQ) and normalized two-stage transform-domain SVQ (NTsTrSVQ). These two new methods have complementary strengths and hence are combined in a switched VQ mode, which leads to a further improvement in R/D performance while retaining the low complexity requirement. We evaluate the performance of the new methods for wide-band speech LSF parameter quantization and show their advantages over the established SVQ and SSVQ methods.
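The two-stage structure can be sketched as follows (structure only, with tiny hand-made codebooks rather than trained ones, and without the transform-domain and variance-normalization refinements): stage 1 quantizes the full vector with a small full-dimensional codebook, then stage 2 splits the residual into sub-vectors, each with its own codebook, which keeps search and memory costs far below one huge full-dimensional codebook.

```python
# Toy two-stage split VQ: full-dimensional first stage + split residual stage.

def nearest(codebook, v):
    """Nearest-neighbour search by squared Euclidean distance."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, v)))

def ts_svq(v, stage1, stage2_parts):
    c1 = nearest(stage1, v)                         # full-dimensional stage 1
    residual = [a - b for a, b in zip(v, c1)]
    half = len(v) // 2
    r1 = nearest(stage2_parts[0], residual[:half])  # split residual coding
    r2 = nearest(stage2_parts[1], residual[half:])
    return [c + r for c, r in zip(c1, list(r1) + list(r2))]

# Hand-made illustrative codebooks for a 4-dimensional vector:
stage1 = [(0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 1.0, 1.0)]
stage2 = ([(0.0, 0.0), (0.2, 0.0)], [(0.0, 0.0), (0.0, 0.2)])
out = ts_svq([1.15, 1.0, 1.0, 1.25], stage1, stage2)
print(out)  # reconstruction refined by the residual stage: ~[1.2, 1.0, 1.0, 1.2]
```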

Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gain in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments advancing to the phase III study. We suggest maximizing the rate of gain instead; the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to phase III study.

Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process until a promising treatment is identified. This formulation is considered to be more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. The reformulation also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these using a study in soft tissue sarcoma.

The current study examines the link between the experience of divorce in childhood and several indices of adjustment in adulthood in a large community sample of women. Results replicated previous research on the long-term correlation between parental divorce and depression and divorce in adulthood. Results further suggested that parental divorce was associated with a wide range of early risk factors, life course patterns, and several indices of adult adjustment. Regression analyses indicated that the long-term correlation between parental divorce and depression in adulthood is explained by quality of parent-child and parental marital relations (in childhood), concurrent levels of stressful life events and social support, and cohabitation. The long-term association between parental divorce and experiencing a divorce in adulthood was partly mediated through quality of parent-child relations, teenage pregnancy, leaving home before 18 years, and educational attainment.