962 results for Power variance function
Abstract:
Deposition of insoluble prion protein (PrP) in the brain in the form of protein aggregates or deposits is characteristic of the ‘transmissible spongiform encephalopathies’ (TSEs). Understanding the growth and development of PrP aggregates is important both in attempting to elucidate the pathogenesis of prion disease and in the development of treatments designed to inhibit the spread of prion pathology within the brain. Aggregation and disaggregation of proteins and the diffusion of substances into the developing aggregates (surface diffusion) are important factors in the development of protein deposits. Mathematical models suggest that if either aggregation/disaggregation or surface diffusion is the predominant factor, then the size frequency distribution of the resulting protein aggregates will be described by a power-law or a log-normal model, respectively. This study tested this hypothesis for two different populations of PrP deposit, viz., the diffuse and florid-type PrP deposits characteristic of patients with variant Creutzfeldt-Jakob disease (vCJD). The size distributions of the florid and diffuse deposits were fitted by a power-law function in 100% and 42% of brain areas studied, respectively. By contrast, the size distributions of both types of aggregate deviated significantly from a log-normal model in all areas. Hence, protein aggregation and disaggregation may be the predominant factor in the development of the florid deposits. A more complex combination of factors appears to be involved in the pathogenesis of the diffuse deposits. These results may be useful in the design of treatments to inhibit the development of PrP aggregates in vCJD.
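As a rough illustration of the kind of distributional comparison described above, the sketch below fits a power law to hypothetical deposit-size data by regression in log-log coordinates and tests a log-normal fit with a Kolmogorov-Smirnov statistic; the sizes, bins, and seed are invented for the example and are not data from the study.

```python
# Sketch: comparing power-law and log-normal fits to aggregate-size data.
# The sizes below are synthetic stand-ins, not measurements from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes = rng.pareto(1.5, 500) + 1.0   # hypothetical deposit sizes (arbitrary units)

# Power-law check: regress log(density) on log(size) over logarithmic bins.
edges = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), 20)
counts, _ = np.histogram(sizes, bins=edges)
density = counts / np.diff(edges)                 # frequency per unit size
centers = np.sqrt(edges[:-1] * edges[1:])         # geometric bin centers
mask = density > 0
slope, intercept, r, p, se = stats.linregress(
    np.log10(centers[mask]), np.log10(density[mask]))
print(f"power-law exponent ~ {slope:.2f}, r^2 = {r**2:.3f}")

# Log-normal check: fit parameters, then test the fit with a KS statistic.
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
ks = stats.kstest(sizes, "lognorm", args=(shape, loc, scale))
print(f"log-normal KS p-value = {ks.pvalue:.3f}")   # small p => model rejected
```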
Abstract:
In longitudinal data analysis, our primary interest is in the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is difficult to specify, particularly for correlated discrete outcome data. Marginal modeling approaches such as generalized estimating equations (GEEs) have received much attention in the context of longitudinal regression. These methods are based on estimates of the first two moments of the data and a working correlation structure. Confidence regions and hypothesis tests are based on asymptotic normality. The methods are sensitive to misspecification of the variance function and the working correlation structure. Because of such misspecifications, the estimates can be inefficient and inconsistent, and inference may give incorrect results. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm based on EL principles for the estimation of the regression parameters and the construction of a confidence region for the parameter of interest. We extend our approach to variable selection for high-dimensional longitudinal data with many covariates. In this situation it is necessary to identify a submodel that adequately represents the data. Including redundant variables may impact the model’s accuracy and efficiency for inference. We propose a penalized empirical likelihood (PEL) variable selection based on GEEs; the variable selection and the estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties, and present an algorithm for optimizing PEL. Simulation studies show that when the model assumptions are correct, our method performs as well as existing methods, and when the model is misspecified, it has clear advantages. We have applied the method to two case examples.
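For context, a minimal GEE fit of the kind this abstract takes as its starting point might look like the following; the data set, variable names, and Poisson/exchangeable choices are assumptions for illustration, and the empirical-likelihood machinery the authors propose is not part of statsmodels.

```python
# Sketch: a standard GEE fit for correlated longitudinal counts, the
# marginal-model baseline that the proposed EL/PEL method builds on.
# Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_subj, n_obs = 50, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "time": np.tile(np.arange(n_obs), n_subj),
    "x": rng.normal(size=n_subj * n_obs),
})
df["y"] = rng.poisson(np.exp(0.3 * df["x"] + 0.1 * df["time"]))

exog = sm.add_constant(df[["x", "time"]])
model = sm.GEE(df["y"], exog, groups=df["subject"],
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())  # inference here relies on asymptotic normality
```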
Abstract:
Recent discussion regarding whether the noise that limits 2AFC discrimination performance is fixed or variable has focused either on describing experimental methods that presumably dissociate the effects of response mean and variance or on reanalyzing a published data set with the aim of determining how to solve the question through goodness-of-fit statistics. This paper illustrates that the question cannot be solved by fitting models to data and assessing goodness-of-fit, because data on detection and discrimination performance can be indistinguishably fitted by models that assume either type of noise when each is coupled with a convenient form for the transducer function. Thus, success or failure at fitting a transducer model merely illustrates the capability (or lack thereof) of some particular combination of transducer function and variance function to account for the data, but it cannot disclose the nature of the noise. We also comment on some of the issues that have been raised in the recent exchange on this topic, namely, the existence of additional constraints for the models, the presence of asymmetric asymptotes, the likelihood of history-dependent noise, and the potential of certain experimental methods to dissociate the effects of response mean and variance.
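The identifiability argument can be made concrete with a small calculation; the Gaussian set-up below is a standard one and is offered only as a sketch of why the two noise regimes mimic one another.

```latex
% A sketch, under a Gaussian assumption, of why the transducer and the
% variance function trade off against one another.
Fixed-noise model: $R(c) \sim N\!\big(f(c), \sigma_0^2\big)$;
variable-noise model: $R(c) \sim N\!\big(g(c), \sigma^2(c)\big)$.
For a small contrast increment $\Delta c$,
\[
  d'(c) \approx \frac{f'(c)\,\Delta c}{\sigma_0} \quad\text{(fixed noise)},
  \qquad
  d'(c) \approx \frac{g'(c)\,\Delta c}{\sigma(c)} \quad\text{(variable noise)},
\]
so choosing $f(c) = \sigma_0 \int_0^c g'(t)/\sigma(t)\,dt$ makes the two models
predict identical performance at every $c$: goodness-of-fit alone cannot
reveal which noise regime generated the data.
```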
Abstract:
The use of structural health monitoring of civil structures is ever expanding; by assessing the dynamical condition of structures, informed maintenance management can be conducted at both individual and network levels. With the continued growth of information-age technology, the potential arises for smart monitoring systems to be integrated with civil infrastructure to provide efficient information on the condition of a structure. The focus of this thesis is the integration of smart technology with civil infrastructure for the purposes of structural health monitoring. The technology considered in this regard is devices based on energy harvesting materials. While there has been considerable focus on the development and optimisation of such devices under steady-state loading conditions, their applications for civil infrastructure are less well known. Although research is still in its initial stages, studies into such applications are very promising. Using the dynamical response of structures to a variety of loading conditions, the energy harvesting outputs from such devices are established and the potential power output determined. Through a power variance output approach, damage detection of deteriorating structures using the energy harvesting devices is investigated. A further application of the integration of energy harvesting devices with civil infrastructure investigated by this research is the use of the power output as an indicator for control. Four approaches are undertaken to determine the potential applications arising from integrating smart technology with civil infrastructure, namely:
• Theoretical analysis to determine the applications of energy harvesting devices for vibration-based health monitoring of civil infrastructure.
• Laboratory experimentation to verify the performance of different energy harvesting configurations for civil infrastructure applications.
• Scaled model testing as a method to experimentally validate the integration of the energy harvesting devices with civil infrastructure.
• Full-scale deployment of an energy harvesting device on a bridge structure.
These four approaches validate the application of energy harvesting technology to civil infrastructure from theoretical, experimental and practical perspectives.
Abstract:
The BL Lac object 1ES 1011+496 was discovered at Very High Energy (VHE, E > 100 GeV) γ-rays by MAGIC in spring 2007. Before that, the source had been little studied at other wavelengths, so a multi-wavelength (MWL) campaign was organized in spring 2008. Alongside MAGIC, the MWL campaign included the Metsähovi radio observatory, the Bell and KVA optical telescopes, and the Swift and AGILE satellites. MAGIC observations spanned March to May 2008 for a total of 27.9 hours, of which 19.4 hours remained after quality cuts. The light curve showed no significant variability, yielding an integral flux above 200 GeV of (1.3 ± 0.3) × 10^(−11) photons cm^(−2) s^(−1). The differential VHE spectrum could be described by a power-law function with a spectral index of 3.3 ± 0.4. Both results were similar to those obtained during the discovery. Swift XRT observations revealed an X-ray flare, characterized by a harder-when-brighter trend, as is typical for high synchrotron peak BL Lac objects (HBLs). Strong optical variability was found during the campaign, but no conclusion on the connection between the optical and VHE γ-ray bands could be drawn. The contemporaneous SED shows a synchrotron-dominated source, unlike what was concluded in previous work based on non-simultaneous data, and is well described by a standard one-zone synchrotron self-Compton model. We also performed a study of the source classification. While the optical and X-ray data taken during our campaign show typical characteristics of an HBL, we suggest, based on archival data, that 1ES 1011+496 is actually a borderline case between intermediate and high synchrotron peak frequency BL Lac objects.
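As a worked illustration of how the two quoted numbers relate, the short sketch below recovers the differential normalization implied by the integral flux and the power-law index; closing the integral at infinity is the usual assumption and the resulting normalization is not a value given by the authors.

```python
# Sketch: differential normalization N0 implied by an integral flux above E0
# for a power-law spectrum dN/dE = N0 * (E/E0)**(-gamma), with gamma > 1.
# Integrating from E0 to infinity gives F(>E0) = N0 * E0 / (gamma - 1).
E0 = 200.0        # GeV, threshold energy
gamma = 3.3       # measured spectral index
F_int = 1.3e-11   # photons cm^-2 s^-1, measured integral flux above E0

N0 = F_int * (gamma - 1.0) / E0
print(f"N0 ~ {N0:.2e} photons cm^-2 s^-1 GeV^-1 at {E0:.0f} GeV")
```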
Abstract:
Optical waveguides have shown promising results for use within printed circuit boards. These optical waveguides have higher bandwidth than traditional copper transmission systems and are immune to electromagnetic interference. Design parameters for these optical waveguides are needed to ensure an optimal link budget. Modeling and simulation methods are used to determine the optimal design parameters for the waveguides. As a result, the optical structures necessary for incorporating optical waveguides into printed circuit boards are designed and optimized. Embedded siloxane polymer waveguides are investigated for their use in optical printed circuit boards. This material was chosen because it has low absorption and high temperature stability and can be deposited using common processing techniques. Two sizes of waveguides are investigated: 50 µm multimode and 4-9 µm single mode waveguides. A beam propagation method is developed for simulating the multimode and single mode waveguide parameters. The attenuation of the simulated multimode waveguides matches the attenuation of fabricated waveguides with a root-mean-square error of 0.192 dB. Using the same process as for the multimode waveguides, the parameters needed to ensure a low link loss are found for single mode waveguides, including the maximum size, minimum cladding thickness, minimum waveguide separation, and minimum bend radius. To couple light out-of-plane to a transmitter or receiver, a structure such as a vertical interconnect assembly (VIA) is required. For multimode waveguides, the optimal placement of a total internal reflection mirror can be found without prior knowledge of the waveguide length. The optimal placement is either 60 µm or 150 µm away from the end of the waveguide, depending on which metric a designer wants to optimize: the average output power, the output power variance, or the maximum possible power loss. For single mode waveguides, a volume grating coupler is designed to couple light from a silicon waveguide to a polymer single mode waveguide. A focusing grating coupler is compared to a perpendicular grating coupler that is focused by a micro-molded lens. The focusing grating coupler had an optical loss of over 14 dB, while the grating coupler with a lens had an optical loss of 6.26 dB.
Abstract:
How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal are necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method entails modeling the shape of earthquake codas as a power-law function and extrapolating duration from the decay of the function. The second method draws relations between clipped duration (i.e., the length of time a signal is clipped) and the full duration. These methods allow magnitudes to be determined to within 0.2-0.4 magnitude units. This error is within the range of analyst hand-picks and is within the acceptable limits of uncertainty when quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. These methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens. Mount St. Helens is a well-studied volcano with many instruments placed at varying distances from the vent. This fact makes the 2004-2008 eruption a good place to calibrate and refine methodologies that can be applied to volcanoes with limited networks.
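A minimal version of the first method might look like the sketch below: fit a power-law decay to the usable part of a coda envelope and extrapolate to the noise floor to recover a full duration. The envelope samples, noise floor, and decay form are hypothetical stand-ins, not the study's calibration.

```python
# Sketch: extrapolating event duration from a power-law coda decay,
# A(t) = A0 * t**(-p).  Envelope samples and noise floor are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def coda(t, A0, p):
    return A0 * t**(-p)

# Usable (unclipped) part of the coda envelope: time (s) vs amplitude.
t_obs = np.array([5.0, 8.0, 12.0, 18.0, 25.0])
a_obs = np.array([900.0, 520.0, 330.0, 210.0, 150.0])

(A0, p), _ = curve_fit(coda, t_obs, a_obs, p0=(1000.0, 1.0))

# Duration = time at which the extrapolated envelope reaches the noise floor,
# i.e. the solution of A0 * t**(-p) = noise_floor.
noise_floor = 20.0
duration = (A0 / noise_floor) ** (1.0 / p)
print(f"decay exponent p = {p:.2f}, estimated full duration ~ {duration:.0f} s")
```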
Abstract:
Seven years (2003-2010) of measured shortwave (SW) irradiances were used to obtain estimates of the 10 min averaged effective cloud optical thickness (ECOT) and of the shortwave cloud radiative effect (CRESW) at the surface at a mid-latitude site (Évora, in the south of Portugal), and their seasonal variability is presented. The ECOT, obtained using transmittance measurements at 415 nm, was compared with the corresponding MODIS cloud optical thickness (MODIS COT) for non-precipitating water clouds and cloud fractions higher than 0.25. This comparison showed that the ECOT represents the cloud optical thickness over the study area well. The CRESW, determined for two SW broadband ranges (300-1100 nm; 285-2800 nm), was normalized (NCRESW) and related to the obtained ECOT. A logarithmic relation between NCRESW and ECOT was found for both SW ranges, presenting lower dispersion for overcast-sky situations than for partially cloudy-sky situations. The NCRESW efficiency (NCRESW per unit of ECOT) was also related to the ECOT for overcast-sky conditions. The relation found is parameterized by a power-law function showing that the NCRESW efficiency decreases as the ECOT increases, approaching one for ECOT values higher than about 50.
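A sketch of the kind of fit described: regress the normalized cloud radiative effect on the logarithm of optical thickness with scipy. The paired values and resulting coefficients are placeholders, not the paper's results.

```python
# Sketch: fitting NCRE_SW = a + b * ln(ECOT), a logarithmic relation between
# normalized cloud radiative effect and effective cloud optical thickness.
# The paired values below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def log_model(tau, a, b):
    return a + b * np.log(tau)

ecot = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])          # effective COT
ncre = np.array([-0.15, -0.32, -0.46, -0.60, -0.73, -0.80])  # normalized CRE

(a, b), _ = curve_fit(log_model, ecot, ncre)
print(f"NCRE_SW ~ {a:.2f} + {b:.2f} ln(ECOT)")

# The efficiency (NCRE per unit ECOT) then falls off roughly as a power law.
efficiency = ncre / ecot
print(f"efficiency at ECOT = 60: {efficiency[-1]:.4f}")
```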
Abstract:
The linear relationship between work accomplished (W_lim) and time to exhaustion (t_lim) can be described by the equation W_lim = a + CP · t_lim. Critical power (CP) is the slope of this line and is thought to represent a maximum rate of ATP synthesis without exhaustion, presumably an inherent characteristic of the aerobic energy system. The present investigation determined whether the choice of predictive tests would elicit significant differences in the estimated CP. Ten female physical education students completed, in random order and on consecutive days, five all-out predictive tests at preselected constant power outputs. Predictive tests were performed on an electrically braked cycle ergometer, and power loadings were individually chosen so as to induce fatigue within approximately 1-10 min. CP was derived by fitting the linear W_lim-t_lim regression and calculated three ways: 1) using the first, third and fifth W_lim-t_lim coordinates (I-135), 2) using coordinates from the three highest power outputs (I-123; mean t_lim = 68-193 s) and 3) using coordinates from the three lowest power outputs (I-345; mean t_lim = 193-485 s). Repeated measures ANOVA revealed that CP(I-123) (201.0 ± 37.9 W) > CP(I-135) (176.1 ± 27.6 W) > CP(I-345) (164.0 ± 22.8 W) (P < 0.05). When the three sets of data were used to fit the hyperbolic power-t_lim regression, statistically significant differences between each CP were also found (P < 0.05). The shorter the predictive trials, the greater the slope of the W_lim-t_lim regression, possibly because of the greater influence of 'aerobic inertia' on these trials. This may explain why CP has failed to represent a maximal, sustainable work rate. The present findings suggest that if CP is to represent the highest power output that an individual can maintain for a very long time without fatigue, then CP should be calculated over a range of predictive tests in which the influence of aerobic inertia is minimised.
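A minimal sketch of the estimation step: given (t_lim, W_lim) pairs from the predictive trials, CP is the slope of the fitted line and the intercept estimates the anaerobic work capacity. The trial values below are invented for illustration.

```python
# Sketch: estimating critical power (CP) as the slope of the linear
# W_lim = a + CP * t_lim relationship.  Trial data are hypothetical.
import numpy as np
from scipy import stats

t_lim = np.array([70.0, 120.0, 200.0, 330.0, 480.0])   # s, time to exhaustion
power = np.array([320.0, 280.0, 245.0, 215.0, 195.0])  # W, constant power output
w_lim = power * t_lim                                   # J, work accomplished

fit = stats.linregress(t_lim, w_lim)
print(f"CP ~ {fit.slope:.0f} W, "
      f"anaerobic work capacity a ~ {fit.intercept / 1000:.1f} kJ")
```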
Abstract:
BACKGROUND: We estimated the heritability of three measures of glomerular filtration rate (GFR) in hypertensive families of African descent in the Seychelles (Indian Ocean). METHODS: Families with at least two hypertensive siblings and an average of two normotensive siblings were identified through a national hypertension register. Using the ASSOC program in SAGE (Statistical Analysis in Genetic Epidemiology), the age- and gender-adjusted narrow-sense heritability of GFR was estimated by maximum likelihood, assuming multivariate normality after power transformation. ASSOC can calculate the additive polygenic component of the variance of a trait from pedigree data in the presence of other familial correlations. The effects of body mass index (BMI), blood pressure, natriuresis, urinary sodium-to-potassium ratio, and diabetes were also tested as covariates. RESULTS: Inulin clearance, 24-hour creatinine clearance, and GFR based on the Cockcroft-Gault formula were available for 348 persons from 66 pedigrees. The age- and gender-adjusted correlations (± SE) were 0.51 (± 0.04) between inulin clearance and creatinine clearance, 0.53 (± 0.04) between inulin clearance and the Cockcroft-Gault formula, and 0.66 (± 0.03) between creatinine clearance and the Cockcroft-Gault formula. The age- and gender-adjusted heritabilities (± SE) of GFR were 0.41 (± 0.10) for inulin clearance, 0.52 (± 0.13) for creatinine clearance, and 0.82 (± 0.09) for the Cockcroft-Gault formula. Adjustment for BMI slightly lowered the correlations and heritabilities for all measurements, whereas adjustment for blood pressure had virtually no effect. CONCLUSION: The significant heritability estimates of GFR in our sample of families of African descent confirm the familial aggregation of this trait and justify further analyses aimed at discovering genetic determinants of GFR.
Abstract:
Analysis of variance is commonly used in morphometry in order to ascertain differences in parameters between several populations. Failure to detect significant differences between populations (type II error) may be due to suboptimal sampling and may lead to erroneous conclusions; the concept of statistical power allows one to avoid such failures by means of adequate sampling. Several examples are given from the morphometry of the nervous system, showing the use of the power of a hierarchical analysis of variance test for the choice of appropriate sample and subsample sizes. In the first case chosen, neuronal densities in the human visual cortex, we find the number of observations to have little effect. For dendritic spine densities in the visual cortex of mice and humans, the effect is somewhat larger. A substantial effect is shown in our last example, dendritic segmental lengths in the monkey lateral geniculate nucleus. It is in the nature of the hierarchical model that sample size is always more important than subsample size. The relative weight to be attributed to subsample size thus depends on the relative magnitude of the between-observations variance compared to the between-individuals variance.
Abstract:
We have devised a program that allows computation of the power of the F-test, and hence determination of appropriate sample and subsample sizes, in the context of the one-way hierarchical analysis of variance with fixed effects. The power at a fixed alternative is an increasing function of both the sample size and the subsample size. The program makes it easy to obtain the power of the F-test for a range of values of sample and subsample sizes, and therefore the appropriate sizes based on a desired power. The program can be used for the 'ordinary' case of the one-way analysis of variance, as well as for hierarchical analysis of variance with two stages of sampling. Examples are given of the practical use of the program.
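The described computation can be approximated with standard tools: for the 'ordinary' fixed-effects one-way ANOVA, the power at a given effect size follows from the noncentral F distribution, as in this sketch. The group count, sample size, and effect size are hypothetical, and the two-stage hierarchical case would adjust the degrees of freedom and noncentrality accordingly.

```python
# Sketch: power of the one-way fixed-effects ANOVA F-test via the noncentral
# F distribution.  Group count, sample size, and effect size are hypothetical.
from scipy.stats import f as f_dist, ncf

k, n = 4, 10          # number of groups, observations per group
alpha = 0.05          # significance level
effect_size = 0.4     # Cohen's f

dfn, dfd = k - 1, k * (n - 1)
nc = effect_size**2 * k * n              # noncentrality, lambda = f^2 * N
f_crit = f_dist.ppf(1 - alpha, dfn, dfd)
power = 1 - ncf.cdf(f_crit, dfn, dfd, nc)
print(f"power ~ {power:.3f} at n = {n} per group")
```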
Abstract:
The tire inflation pressure, among other factors, determines the efficiency with which a tractor can exert traction. This study evaluated the effect of two tire inflation pressure configurations (110.4 kPa in the front and rear wheels; 124.2 kPa in the front wheels and 138 kPa in the rear wheels) on the energy efficiency of an agricultural tractor with 147 kW of engine power, at a travel speed of 6.0 km h-1 on a track with a firm surface and an engine speed of 2000 rpm. For each tire pressure condition, the tested tractor was subjected to constant drawbar forces of 45 kN and 50 kN over a distance of 30 meters. A randomized complete block design with a 2x2 factorial arrangement (tire pressure and drawbar force) and four replications was used, totaling 16 experimental units. Data were subjected to analysis of variance, with means compared by the Tukey test at 5% probability. The lowest hourly and specific fuel consumption, the lowest wheelset slippage, and the highest drawbar efficiency were obtained with the tire inflation pressure of 110.4 kPa in the front and rear tires, indicating that lower pressures improve the energetic and operational performance of the tractor.
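A minimal sketch of the stated analysis on invented data: a 2x2 factorial ANOVA followed by Tukey's HSD at the 5% level, using statsmodels. The column names and fuel-consumption values are placeholders, not the experiment's measurements.

```python
# Sketch: 2x2 factorial ANOVA with Tukey HSD comparisons, mirroring the
# stated analysis.  The fuel-consumption values are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "pressure": ["110.4"] * 8 + ["124.2/138"] * 8,
    "force_kN": ([45] * 4 + [50] * 4) * 2,
    "fuel": [18.2, 18.5, 18.1, 18.4, 20.1, 20.3, 19.9, 20.2,
             19.0, 19.3, 18.9, 19.2, 21.0, 21.2, 20.8, 21.1],
})

model = ols("fuel ~ C(pressure) * C(force_kN)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pairwise comparison of pressure means at the 5% level.
print(pairwise_tukeyhsd(df["fuel"], df["pressure"], alpha=0.05))
```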
Abstract:
In this paper we introduce a new Wiener system modeling approach for high power amplifiers with memory in communication systems, using observed input/output data. By assuming that the nonlinearity in the Wiener model depends mainly on the input signal amplitude, the complex-valued nonlinear static function is represented by two real-valued B-spline curves, one for the amplitude distortion and one for the phase shift. The Gauss-Newton algorithm is applied for the parameter estimation, incorporating the De Boor algorithm for both the B-spline curve and the first-order derivative recursions. An illustrative example is utilized to demonstrate the efficacy of the proposed approach.
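The static-nonlinearity part of such a model can be sketched with standard B-spline tools; below, two real-valued splines over input amplitude stand in for the amplitude (AM/AM) and phase (AM/PM) curves. The knots and coefficients are invented, and the Gauss-Newton estimation loop the paper describes is omitted.

```python
# Sketch: a Wiener-model static nonlinearity as two real-valued B-spline
# curves over input amplitude, one for gain (AM/AM) and one for phase shift
# (AM/PM).  Knots and coefficients are hypothetical; parameter estimation
# via Gauss-Newton is not shown.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Clamped knot vector on [0, 1]: 10 coefficients need 14 knots for degree 3.
knots = np.concatenate(([0.0] * degree, np.linspace(0.0, 1.0, 8), [1.0] * degree))
gain_coef = np.array([1.0, 1.0, 0.98, 0.95, 0.90, 0.82, 0.70, 0.55, 0.42, 0.35])
phase_coef = np.array([0.0, 0.01, 0.03, 0.06, 0.10, 0.16, 0.24, 0.33, 0.42, 0.50])

gain = BSpline(knots, gain_coef, degree)    # amplitude distortion vs |x|
phase = BSpline(knots, phase_coef, degree)  # phase shift (radians) vs |x|

def wiener_nonlinearity(x):
    """Apply the amplitude-dependent complex distortion to signal x."""
    r = np.abs(x)
    return gain(r) * r * np.exp(1j * (np.angle(x) + phase(r)))

x = 0.8 * np.exp(1j * np.linspace(0, 2 * np.pi, 8))  # hypothetical input samples
print(wiener_nonlinearity(x))
```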