928 results for python django bootstrap
Abstract:
Passive positioning systems produce user location information for third-party providers of positioning services. Since the tracked wireless devices do not participate in the positioning process, passive positioning can rely only on simple, measurable radio signal parameters, such as timing or power information. In this work, we present a passive tracking system for WiFi signals built around an enhanced particle filter with fine-grained power-based ranging. The proposed particle filter provides an improved likelihood function on the observation parameters and is equipped with a modified coordinated turn model to address the challenges of a passive positioning system. The anchor nodes for WiFi signal sniffing and target positioning use software defined radio techniques to extract channel state information and mitigate multipath effects. By combining the enhanced particle filter with a set of enhanced ranging methods, our system can track mobile targets with an accuracy of 1.5 m at the 50th percentile and 2.3 m at the 90th percentile in a complex indoor environment. The proposed particle filter significantly outperforms the typical bootstrap particle filter, the extended Kalman filter and trilateration algorithms.
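For illustration only, below is a minimal Python sketch of a plain bootstrap particle filter for range-based tracking, the baseline against which the abstract's enhanced filter is compared. The constant-velocity motion model, Gaussian range likelihood, anchor layout and noise parameters are assumptions for the sketch, not the paper's enhanced likelihood or coordinated turn model.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf_step(particles, weights, anchors, ranges, dt=0.1, q=0.3, r=1.0):
    """One predict/update/resample step of a bootstrap particle filter.

    particles : (N, 4) array of [x, y, vx, vy] states
    anchors   : (M, 2) anchor positions
    ranges    : (M,) measured anchor-to-target distances (e.g. power-based)
    """
    n = len(particles)
    # Predict: constant-velocity motion with additive Gaussian process noise.
    particles[:, 0] += particles[:, 2] * dt
    particles[:, 1] += particles[:, 3] * dt
    particles += rng.normal(0.0, q, particles.shape)
    # Update: Gaussian range likelihood around each anchor.
    dists = np.linalg.norm(particles[None, :, :2] - anchors[:, None, :], axis=2)
    loglik = -0.5 * np.sum(((dists - ranges[:, None]) / r) ** 2, axis=0)
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()
    # Resample (multinomial) to counter weight degeneracy.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# Usage: one step with 4 anchors and noisy ranges to a target at (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ranges = np.linalg.norm(anchors - np.array([3.0, 4.0]), axis=1) + rng.normal(0, 0.3, 4)
particles = np.column_stack([rng.uniform(0, 10, (2000, 2)), rng.normal(0, 1, (2000, 2))])
particles, weights = bootstrap_pf_step(particles, np.full(2000, 1 / 2000), anchors, ranges)
print("position estimate:", particles[:, :2].mean(axis=0))
```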
Abstract:
OBJECTIVES Improvement of skin fibrosis is part of the natural course of diffuse cutaneous systemic sclerosis (dcSSc). Recognising the patients most likely to improve could help tailor clinical management and enrich cohorts for clinical trials. In this study, we aimed to identify predictors of improvement of skin fibrosis in patients with dcSSc. METHODS We performed a longitudinal analysis of the European Scleroderma Trials And Research (EUSTAR) registry including patients with dcSSc, fulfilling American College of Rheumatology criteria, with a baseline modified Rodnan skin score (mRSS) ≥7 and a follow-up mRSS at 12±2 months. The primary outcome was skin improvement (decrease in mRSS of >5 points and ≥25%) at 1-year follow-up. A corresponding increase in mRSS was considered progression. Candidate predictors of skin improvement were selected by expert opinion, and logistic regression with bootstrap validation was applied. RESULTS Of the 919 patients included, 218 (24%) improved and 95 (10%) progressed. Eleven candidate predictors of skin improvement were analysed. The final model identified high baseline mRSS and absence of tendon friction rubs as independent predictors of skin improvement. The baseline mRSS was the strongest predictor of skin improvement, independent of disease duration. An upper threshold between 18 and 25 performed best in enriching for progressors over regressors. CONCLUSIONS Patients with advanced skin fibrosis at baseline and absence of tendon friction rubs are more likely to regress in the next year than patients with milder skin fibrosis. These evidence-based data can be implemented in clinical trial design to minimise the inclusion of patients who would regress under standard of care.
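As an illustration of the general technique named here (not the study's actual model or data), a short sketch of optimism-corrected bootstrap validation of a logistic regression; the predictors, data and the AUC metric are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Placeholder data: a baseline-mRSS-like score and a binary indicator
# standing in for tendon friction rubs; not the EUSTAR data.
n = 500
X = np.column_stack([rng.normal(20, 8, n), rng.integers(0, 2, n)])
p = 1 / (1 + np.exp(-(0.08 * (X[:, 0] - 20) - 0.6 * X[:, 1])))
y = (rng.random(n) < p).astype(int)

model = LogisticRegression().fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Optimism-corrected performance via the bootstrap: refit on each resample
# and compare its in-resample AUC to its AUC on the original data.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print("apparent AUC:", round(apparent_auc, 3))
print("optimism-corrected AUC:", round(apparent_auc - np.mean(optimism), 3))
```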
Abstract:
Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation and calculating the number of transits required for an exomoon detection for various planet-moon configurations observable by CHEOPS. We explore the most efficient way to carry out such an observation while minimizing the cost in observing time. Our study is based on PTV (photocentric transit timing variation) observations in simulated CHEOPS data, but the recipe does not depend on the actual detection method and can be substituted with, e.g., the photodynamical method in later applications. Using current state-of-the-art simulations of CHEOPS data, we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations (systems with at least a large moon and a Neptune-sized planet), an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance for smaller moons, but the detection statistics deteriorate rapidly and the number of required transit measurements grows quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope to observe exoplanetary transits and characterize systems with known Doppler planets. Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and is mounted with a CCD similar to Kepler's, it will observe brighter stars and operate at a higher sampling rate; therefore, the detection limit for an exomoon can be the same or better, which will make CHEOPS a competitive instrument in the quest for exomoons.
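As an illustration of a bootstrap detection-statistics calculation of this general kind (not the paper's CHEOPS simulation pipeline), a short sketch that resamples simulated per-transit PTV values and counts how often a simple significance criterion is met; the signal amplitude, noise level and threshold are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_probability(ptv_signal, noise_sigma, n_transits,
                          threshold=3.0, n_boot=2000):
    """Bootstrap estimate of the chance that the mean PTV amplitude of
    n_transits observations exceeds `threshold` times its standard error.

    ptv_signal  : assumed photocentric transit timing variation amplitude (s)
    noise_sigma : assumed per-transit timing scatter (s)
    """
    # Simulate one set of observed per-transit PTV values.
    observed = ptv_signal + rng.normal(0.0, noise_sigma, n_transits)
    detections = 0
    for _ in range(n_boot):
        sample = rng.choice(observed, size=n_transits, replace=True)
        sem = sample.std(ddof=1) / np.sqrt(n_transits)
        if abs(sample.mean()) > threshold * sem:
            detections += 1
    return detections / n_boot

# How many transits until the detection chance reaches ~80%?
for n in range(3, 9):
    p = detection_probability(ptv_signal=20.0, noise_sigma=15.0, n_transits=n)
    print(n, "transits -> detection probability", round(p, 2))
```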
Abstract:
Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumors. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and by the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded coefficients of variation of 6.7% and 29%, respectively, implying that the calculated rCBF value is far more precise for gray matter than for white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and to provide a quantitative comparison of several existing perfusion models. This guided the selection of the optimum perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6, and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
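A minimal sketch of the kind of voxelwise bootstrap that yields a coefficient of variation for rCBF; the repeated measurements below are synthetic placeholders, not ASL data.

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_cv(measurements, n_boot=5000):
    """Coefficient of variation of the mean of repeated measurements,
    estimated by resampling the measurements with replacement."""
    means = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(measurements, size=len(measurements), replace=True)
        means[i] = sample.mean()
    return means.std(ddof=1) / means.mean()

# Synthetic repeated rCBF values (ml/100g/min) for one gray-matter and
# one white-matter voxel; the noise levels are illustrative only.
gray = rng.normal(60.0, 10.0, 40)
white = rng.normal(23.0, 12.0, 40)
print("gray-matter CV: %.1f%%" % (100 * bootstrap_cv(gray)))
print("white-matter CV: %.1f%%" % (100 * bootstrap_cv(white)))
```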
Abstract:
This study examines the relationship between the stock market reaction to horizontal merger announcements and the technical efficiency levels of the participating firms. The analysis is based on data for eighty mergers between firms in the U.S. manufacturing industry during the 1990s. We employ Data Envelopment Analysis (DEA) to measure technical efficiency, which captures the firms' competence to produce the maximum output from given productive resources. Abnormal returns related to the merger announcements capture the investors' re-evaluation of the future performance of the participating firms. To avoid the problems of non-normality and heteroskedasticity in the regression analysis, a bootstrap method is employed for estimation and inference. We find a significant relationship between technical efficiency and market response. The market apparently welcomes the merger as an arrangement to improve resource utilization.
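For illustration, a short sketch of a case-resampling (pairs) bootstrap for regression coefficients, a standard way to obtain inference that is robust to non-normality and heteroskedasticity; the variables and data are placeholders, not the study's merger sample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder data: abnormal merger returns regressed on a technical
# efficiency score, with heteroskedastic noise; not the study's data.
n = 80
efficiency = rng.uniform(0.5, 1.0, n)
abnormal_return = 0.05 * efficiency + rng.normal(0, 0.02 * efficiency, n)
X = np.column_stack([np.ones(n), efficiency])

def ols(X, y):
    # Ordinary least squares via a least-squares solve.
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_hat = ols(X, abnormal_return)

# Pairs (case-resampling) bootstrap: resample whole observations, refit OLS.
boot_betas = np.array([
    ols(X[idx], abnormal_return[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot_betas[:, 1], [2.5, 97.5])
print("efficiency slope: %.4f, 95%% bootstrap CI [%.4f, %.4f]" % (beta_hat[1], lo, hi))
```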
Abstract:
In recent years, disaster preparedness through the assessment of medical and special needs persons (MSNP) has taken center stage in the public eye, owing to frequent natural disasters such as hurricanes, storm surge or tsunami driven by climate change and increased human activity on our planet. Statistical methods for complex survey design and analysis have gained significance as a consequence. However, many challenges remain in inferring such assessments over the target population for policy-level advocacy and implementation. Objective. This study discusses the use of statistical methods for disaster preparedness and medical needs assessment to support local and state governments in policy-level decision making and logistic support, so as to avoid loss of life and property in future calamities. Methods. To obtain precise and unbiased estimates of medical special needs persons (MSNP) and of disaster preparedness for evacuation in the Rio Grande Valley (RGV) of Texas, a stratified, cluster-randomized multi-stage sampling design was implemented. The US School of Public Health, Brownsville surveyed 3088 households in three counties, namely Cameron, Hidalgo, and Willacy. Multiple statistical methods were applied and estimates were obtained taking into account the probability of selection and clustering effects. The statistical methods discussed for data analysis were multivariate linear regression (MLR), survey linear regression (Svy-Reg), generalized estimating equations (GEE) and multilevel mixed models (MLM), all with and without sampling weights. Results. The estimated population of the RGV was 1,146,796: 51.5% female, 90% Hispanic, 73% married, 56% unemployed and 37% with personal transport. 40% of respondents attained education up to elementary school, another 42% reached high school and only 18% went to college. Median household income was less than $15,000/year. MSNP were estimated at 44,196 (3.98%) [95% CI: 39,029; 51,123]. All statistical models are in concordance, with MSNP estimates ranging from 44,000 to 48,000: MLR (47,707; 95% CI: 42,462; 52,999), MLR with weights (45,882; 95% CI: 39,792; 51,972), bootstrap regression (47,730; 95% CI: 41,629; 53,785), GEE (47,649; 95% CI: 41,629; 53,670), GEE with weights (45,076; 95% CI: 39,029; 51,123), Svy-Reg (44,196; 95% CI: 40,004; 48,390) and MLM (46,513; 95% CI: 39,869; 53,157). Conclusion. The RGV is a flood zone, highly susceptible to hurricanes and other natural disasters. People in the region are mostly Hispanic and under-educated, with among the lowest income levels in the U.S. In the event of a disaster the population at large would be incapacitated, with only 37% having personal transport to take care of MSNP. Local and state government intervention in planning, preparation and support for evacuation is necessary in any such disaster to avoid loss of precious human life. Key words: complex surveys, statistical methods, multilevel models, cluster randomized, sampling weights, raking, survey regression, generalized estimating equations (GEE), random effects, intracluster correlation coefficient (ICC).
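As an illustration of one of the methods listed (GEE with clustering by sampling unit), a minimal sketch using statsmodels on synthetic placeholder data; the variable names, cluster structure and effect sizes are assumptions, not the RGV survey.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)

# Placeholder household survey: clusters stand in for sampling units and
# `msnp` for a medical-special-needs indicator; not the RGV data.
n_clusters, per_cluster = 60, 50
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(n_clusters), per_cluster),
    "income": rng.normal(15, 5, n_clusters * per_cluster),
    "transport": rng.integers(0, 2, n_clusters * per_cluster),
})
cluster_effect = np.repeat(rng.normal(0, 0.5, n_clusters), per_cluster)
logit = -3.0 - 0.05 * df["income"] - 0.3 * df["transport"] + cluster_effect
df["msnp"] = (rng.random(len(df)) < 1 / (1 + np.exp(-logit))).astype(int)

# GEE with an exchangeable working correlation accounts for clustering;
# sampling weights could additionally be supplied if available.
model = smf.gee("msnp ~ income + transport", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```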
Abstract:
Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. The arbitrary choice of a standard population in standardization introduces selection bias due to the healthy worker effect. Small samples in specific groups also pose problems in estimating relative risk, and assessing statistical significance is problematic. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the standardized mortality ratio (SMR) and the comparative mortality factor (CMF). The model provides an alternative to the conventional standardization technique. Maximum likelihood estimates of the model parameters are used to construct an index similar to the SMR for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters and the variability in relative risk on generated samples. The model provides an alternative to both direct and indirect standardization methods.
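A minimal sketch of a parametric bootstrap for an SMR under a Poisson model, illustrating the general resampling idea; the observed and expected counts are placeholders, and this is not the dissertation's multiplicative model.

```python
import numpy as np

rng = np.random.default_rng(5)

def smr_parametric_bootstrap(observed, expected, n_boot=10000):
    """Parametric bootstrap for a standardized mortality ratio (SMR).

    Under a Poisson model the observed deaths O ~ Poisson(SMR * E), so the
    fitted mean is O itself; we resample O* from Poisson(O) and recompute O*/E.
    """
    smr_hat = observed / expected
    boot = rng.poisson(observed, size=n_boot) / expected
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return smr_hat, (lo, hi)

# Illustrative counts only: 45 observed deaths against 30.2 expected
# from an external standard population.
smr, ci = smr_parametric_bootstrap(observed=45, expected=30.2)
print("SMR = %.2f, 95%% parametric bootstrap CI (%.2f, %.2f)" % (smr, *ci))
```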
Abstract:
The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for the cross-level interaction in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes has received increasing emphasis in HLGM in recent years, owing to the limitations of and growing criticism toward statistical hypothesis testing. Yet most researchers fail to report these model-implied effect sizes for comparing group trajectories, and their corresponding confidence intervals, in HLGM analyses, because of the lack of appropriate, standard functions to estimate effect sizes associated with the model-implied difference between group trajectories in HLGM and the lack of computing packages in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals with the most suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicate that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between them can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameters. We suggest the noncentral t-distribution based method when its assumptions hold, and the bootstrap bias-corrected and accelerated (BCa) method when they are not met.
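As an illustration of the recommended interval type, a short sketch computing a d-type effect size with a bias-corrected and accelerated (BCa) bootstrap confidence interval via scipy.stats.bootstrap; the data and the one-sample formulation are placeholder assumptions, not the project's HLGM-based functions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Placeholder: per-subject differences between two model-implied
# trajectories at a chosen time point (not from the simulated HLGM data).
diffs = rng.normal(0.4, 1.0, 120)

def standardized_diff(x):
    """A simple d-type effect size: mean difference in standard deviation units."""
    return x.mean() / x.std(ddof=1)

res = stats.bootstrap((diffs,), standardized_diff, vectorized=False,
                      n_resamples=9999, confidence_level=0.95, method="BCa")
print("d = %.3f, 95%% BCa CI [%.3f, %.3f]"
      % (standardized_diff(diffs), res.confidence_interval.low,
         res.confidence_interval.high))
```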
Abstract:
This paper characterizes Free Software and its technical, moral and pedagogical advantages over its proprietary counterparts. It is my conviction that the public university should teach working methods rather than the use of a particular tool, mainly because the programs most widely used in academia allow neither their redistribution nor the study of their internal workings. The main replacements for MATLAB and Maple (SAGE, Python+NumPy and Maxima) are presented, with an introduction to them through real examples.
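As a small illustration of the kind of replacement discussed (not one of the paper's own examples), two routine MATLAB-style computations done with NumPy alone:

```python
import numpy as np

# Solve a small linear system and fit a least-squares line, two tasks
# commonly done in MATLAB, using only NumPy.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)          # MATLAB: x = A \ b
print("solution of Ax = b:", x)

t = np.linspace(0.0, 1.0, 20)
y = 2.5 * t + 1.0 + 0.1 * np.random.default_rng(0).normal(size=t.size)
coeffs = np.polyfit(t, y, 1)       # MATLAB: polyfit(t, y, 1)
print("fitted slope and intercept:", coeffs)
```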
Abstract:
Data on zooplankton abundance and biovolume were collected in concert with data on the biophysical environment at 9 stations in the North Atlantic, from the Iceland Basin in the East to the Labrador Sea in the West. The data were sampled along vertical profiles by a Laser Optical Plankton Counter (LOPC, Rolls Royce Canada Ltd.) that was mounted on a carousel water sampler together with a Conductivity-Temperature-Depth sensor (CTD, SBE19plusV2, Seabird Electronics, Inc., USA) and a fluorescence sensor (F, ECO Puck chlorophyll a fluorometer, WET Labs Inc., USA). Based on the LOPC data, abundance (individuals/m**3) and biovolume (mm**3/m**3) were calculated as described in the LOPC Software Operation Manual [(Anonymous, 2006), http://www.brooke-ocean.com/index.html]. LOPC data were regrouped into 49 size groups of equal log10(body volume) increments, see Edvardsen et al. (2002, doi:10.3354/meps227205). LOPC data quality was checked as described in Basedow et al. (2013, doi:10.1016/j.pocean.2012.10.005). Fluorescence was roughly converted into chlorophyll based on filtered chlorophyll values obtained from station 10 in the Labrador Sea. Due to the low number of filtered samples used for the conversion, the resulting chlorophyll values should be treated with care. CTD data were screened for erroneous (out of range) values and then averaged to the same frequency as the LOPC data (2 Hz). All data were processed using scripts developed specifically for this purpose in the Python programming language. The LOPC is an optical instrument designed to count and measure particles (0.1 to 30 mm equivalent spherical diameter) in the water column, see Herman et al. (2004, doi:10.1093/plankt/fbh095). The size of particles as equivalent spherical diameter (ESD) was computed as described in the manual (Anonymous, 2006), and in more detail in Checkley et al. (2008, doi:10.4319/lo.2008.53.5_part_2.2123) and Gaardsted et al. (2010, doi:10.1111/j.1365-2419.2010.00558.x).
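For illustration, a short sketch of regrouping particle measurements into 49 equal log10(body volume) size classes and converting counts to abundance and biovolume; the ESD values, size range and sampled water volume are assumptions, not the processing scripts used for this dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Regroup particle measurements into 49 size classes of equal
# log10(body volume) width; the ESD values below are synthetic.
esd_mm = rng.lognormal(mean=-0.5, sigma=0.8, size=10_000)      # equivalent spherical diameter (mm)
esd_mm = esd_mm[(esd_mm >= 0.1) & (esd_mm <= 30.0)]            # LOPC size range
body_volume = (np.pi / 6.0) * esd_mm ** 3                      # sphere volume, mm**3

edges = np.logspace(np.log10((np.pi / 6) * 0.1 ** 3),
                    np.log10((np.pi / 6) * 30.0 ** 3), 50)     # 49 bins -> 50 edges
counts, _ = np.histogram(body_volume, bins=edges)

sampled_volume_m3 = 5.0                                        # assumed sampled water volume
abundance = counts / sampled_volume_m3                         # individuals / m**3
biovolume = np.histogram(body_volume, bins=edges,
                         weights=body_volume)[0] / sampled_volume_m3   # mm**3 / m**3
print(abundance[:5], biovolume[:5])
```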