986 results for Square-law nonlinearity symbol timing estimation


Relevância:

30.00%

Publicador:

Resumo:

Throughout this article, it is assumed that the non-central chi-square chart with two-stage sampling (TSS chi-square chart) is employed to monitor a process where the observations of the quality characteristic of interest X are independent and identically normally distributed with mean μ and variance σ². The process is considered to start with the mean and the variance on target (μ = μ0; σ² = σ0²), but at some random time in the future an assignable cause shifts the mean from μ0 to μ1 = μ0 ± δσ0, δ > 0, and/or increases the variance from σ0² to σ1² = γ²σ0², γ > 1. Before the occurrence of the assignable cause, the process is considered to be in a state of statistical control (the in-control state). As with the Shewhart charts, samples of size n0 + 1 are taken from the process at regular time intervals. The sampling is performed in two stages. At the first stage, the first item of the i-th sample is inspected. If its X value, say Xi1, is close to the target value (|Xi1 − μ0| < w0σ0, w0 > 0), the sampling is interrupted. Otherwise, at the second stage, the remaining n0 items are inspected and the following statistic is computed:

Wi = Σ_{j=2}^{n0+1} (Xij − μ0 + ξiσ0)², i = 1, 2, ...

Let d be a positive constant; then ξi = d if Xi1 > μ0, and ξi = −d otherwise. A signal is given at sample i if |Xi1 − μ0| > w0σ0 and Wi > kChi σ0², where kChi is the factor used in determining the upper control limit of the non-central chi-square chart. If devices such as go/no-go gauges can be used, measurements are required only when the sampling goes to the second stage. Let P be the probability of deciding that the process is in control, and Pi, i = 1, 2, the probability of deciding that the process is in control at stage i of the sampling procedure.
Thus P = P1 + P2 − P1P2, with P1 = Pr[μ0 − w0σ0 ≤ X ≤ μ0 + w0σ0] and P2 = Pr[W ≤ kChi σ0²]. During the in-control period, W/σ0² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ0 = n0d², i.e. W/σ0² ~ χ²(n0; λ0). During the out-of-control period, W/σ1² follows a non-central chi-square distribution with n0 degrees of freedom and non-centrality parameter λ1 = n0(δ + ξ)²/γ². The effectiveness of a control chart in detecting a process change can be measured by the average run length (ARL), the expected number of samples until the chart signals a process shift. The ARL of the proposed chart is easily determined because the number of samples before a signal is a geometrically distributed random variable with parameter 1 − P, that is, ARL = 1/(1 − P). It is shown that the proposed chart performs better than the joint X̄ and R charts. Furthermore, if the TSS chi-square chart is used for monitoring diameters, volumes, weights, etc., then appropriate devices, such as go/no-go gauges, can be used to decide whether the sampling should go to the second stage. When the process is stable and the joint X̄ and R charts are in use, the monitoring becomes monotonous, because an X̄ or R value rarely falls outside the control limits. The natural consequence is that the user pays less and less attention to the steps required to obtain the X̄ and R values; in some cases, this lack of attention can result in serious mistakes. The TSS chi-square chart has the advantage that most samplings are interrupted, so most of the time the user works with attributes. Our experience shows that inspecting one item by attribute is much less monotonous than measuring four or five items at each sampling.
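The ARL computation described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: it assumes a single ξ sign (ξ = d) instead of conditioning on the sign of the first-stage observation, expresses the control limit kChi in units of σ0², and uses SciPy's non-central chi-square CDF.

```python
from scipy.stats import norm, ncx2

def tss_chisquare_arl(n0, d, w0, k_chi, delta=0.0, gamma=1.0):
    """ARL of the TSS non-central chi-square chart (simplified sketch).

    In control: delta = 0, gamma = 1. Out of control: mean mu0 + delta*sigma0,
    standard deviation gamma*sigma0. w0 and k_chi are the warning and
    control limits in units of sigma0 and sigma0^2, respectively.
    """
    # Stage 1: first item inside the warning limits |X - mu0| < w0*sigma0
    p1 = norm.cdf((w0 - delta) / gamma) - norm.cdf((-w0 - delta) / gamma)
    # Stage 2: W / sigma1^2 is non-central chi-square with n0 d.o.f.;
    # simplification: fixed xi = d, so lambda1 = n0*(delta + d)^2 / gamma^2
    lam = n0 * (delta + d) ** 2 / gamma ** 2
    p2 = ncx2.cdf(k_chi / gamma ** 2, df=n0, nc=lam)
    p_no_signal = p1 + p2 - p1 * p2   # P = P1 + P2 - P1*P2
    return 1.0 / (1.0 - p_no_signal)  # ARL = 1/(1 - P)
```

With this simplification, a mean shift of δ = 1.5 drops the ARL from the order of a thousand samples to a handful, which is the qualitative behaviour the chart is designed for.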


Purpose. We quantified the main sequence of spontaneous blinks in normal subjects and in Graves' disease patients with upper eyelid retraction using one nonlinear and two linear models, and examined the variability of the main sequence estimated with standard linear regression over 10-minute periods. Methods. A total of 20 normal subjects and 12 patients had their spontaneous blinking measured with the magnetic search coil technique while watching a video for one hour. The main sequence was estimated with a power-law function and with standard and through-the-origin linear regressions. Repeated-measures ANOVA was used to test the stability of the main sequence of 10-minute bins measured with standard linear regression. Results. In 95% of the sample, the correlation coefficients of the main sequence ranged from 0.60 to 0.94. Homoscedasticity of the peak velocity was not verified in 20% of the subjects and 25% of the patients. The power-law function provided the best main sequence fit for subjects and patients. The main sequence of 10-minute bins measured with standard linear regression did not differ from the one-hour value. For the entire period of observation and the slope obtained by standard linear regression, the main sequence of the patients was significantly reduced compared with that of the normal subjects. Conclusions. Standard linear regression is a valid and stable approximation for estimating the main sequence of spontaneous blinking. However, the basic assumptions of the linear regression model should be examined on an individual basis. The maximum velocity of large blinks is slower in Graves' disease patients than in normal subjects. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
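As an illustration of the power-law main sequence fit, the sketch below fits V = k·A^n to synthetic amplitude/peak-velocity data by least squares in log-log space. The constants k = 25 and n = 0.8 and the noise level are invented for the example; this is not the study's data or code.

```python
import numpy as np

def fit_main_sequence_power(amplitude, peak_velocity):
    """Fit V = k * A**n by linear least squares in log-log space."""
    slope, log_k = np.polyfit(np.log(amplitude), np.log(peak_velocity), 1)
    return np.exp(log_k), slope  # (k, n)

# Illustrative synthetic blinks: k = 25, n = 0.8, mild multiplicative noise
rng = np.random.default_rng(42)
A = rng.uniform(5.0, 60.0, size=200)                      # amplitude (deg)
V = 25.0 * A ** 0.8 * np.exp(rng.normal(0.0, 0.05, 200))  # peak velocity
k_hat, n_hat = fit_main_sequence_power(A, V)
```

A through-the-origin or standard linear fit of V on A can be compared against this power-law fit in the same way the abstract describes.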


Mitochondrial-like sequences are commonly found in the nuclear genome of many organisms. When accidentally included in studies of mitochondrial sequences, they can lead to a variety of erroneous conclusions. However, these mitochondrial-like nuclear pseudogenes can be used to estimate the relative evolutionary rate of mitochondrial genes and also as an outgroup in phylogenetic analyses. In the present work, mitochondrial sequences with pseudogene-like features, such as deletions and/or insertions and stop codons, were found in tamarins (Saguinus spp., Callitrichinae, Primates). Phylogenetic analysis allowed an estimate of the time of migration of the mitochondrial sequence into the nuclear genome, as well as some phylogenetic inferences. The choice of an unsuitable outgroup (Aotus infulatus) did not permit a reliable phylogenetic reconstruction of the subfamily Callitrichinae. The rather ancient divergence of Cebidae (Callitrichinae, Aotinae and Cebinae) may have favoured the appearance of homoplasies, obscuring the analysis.


Structural durability is an important design criterion, which must be assessed for every type of structure. In this regard, special attention must be given to the durability of reinforced concrete (RC) structures. When RC structures are located in aggressive environments, their durability is strongly reduced by physical/chemical/mechanical processes that trigger the corrosion of the reinforcements. Among these processes, the diffusion of chlorides is recognized as one of the main causes of corrosion onset. To model the corrosion of reinforcements accurately and to assess the durability of RC structures, a mechanical model that realistically accounts for both the concrete and the steel behaviour must be considered. In this context, this study presents a nonlinear numerical formulation based on the finite element method, applied to the structural analysis of RC structures subjected to chloride penetration and reinforcement corrosion. The physical nonlinearity of the concrete is described by Mazars' damage model, whereas elastoplastic criteria are adopted for the reinforcements. The steel loss over time due to corrosion is modelled using an empirical approach from the literature, and the growth of the chloride concentration across the structural cover is represented by Fick's law. The proposed model is applied to the analysis of structures in bending. The results obtained by the proposed numerical approach are compared with responses available in the literature in order to illustrate the evolution of the structural resistant load after corrosion onset. (C) 2014 Elsevier Ltd. All rights reserved.
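For the chloride-ingress step, a standard closed-form solution of Fick's second law (semi-infinite medium, constant surface concentration, zero initial content) is C(x, t) = Cs·erfc(x / (2√(Dt))). The sketch below uses it to estimate when the chloride content at the reinforcement depth reaches a critical threshold; the diffusion coefficient, surface concentration, and threshold are illustrative round numbers, not parameters from the paper.

```python
import math

def chloride_concentration(x_mm, t_years, D_mm2_per_year, Cs):
    """C(x, t) = Cs * erfc(x / (2*sqrt(D*t))), Fick's second law solution
    for a semi-infinite medium with constant surface concentration Cs."""
    return Cs * math.erfc(x_mm / (2.0 * math.sqrt(D_mm2_per_year * t_years)))

def time_to_corrosion_onset(cover_mm, D, Cs, C_crit, t_max=200.0):
    """Smallest time (years, coarse 0.5-year scan) at which the chloride
    content at the reinforcement depth reaches the critical threshold."""
    t = 0.5
    while t < t_max:
        if chloride_concentration(cover_mm, t, D, Cs) >= C_crit:
            return t
        t += 0.5
    return None
```

With, say, a 40 mm cover, D = 30 mm²/year, Cs = 0.6% and a 0.05% threshold (all hypothetical), corrosion onset falls around a decade, after which the mechanical damage model of the paper would take over.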


The objective of this study was the estimation of the lower flammability limits of C–H compounds at 25 °C and 1 atm, at moderate temperatures, and in the presence of diluents. A set of 120 C–H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set it was 6.09%; and for the prediction set it was 9.68%. However, it was shown that by considering different sources of experimental data these values were reduced to 6.5% for the prediction set and 6.29% for the total set. The method was consistent with Le Chatelier's law for binary mixtures of C–H compounds. When tested over a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane, 4.78% for propane, 0.29% for iso-butane and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane, 5.13% for propane, 0.11% for iso-butane and 0.15% for propylene. When carbon dioxide was added, the absolute average relative errors were 1.80% for methane, 5.38% for propane, 0.86% for iso-butane and 1.06% for propylene. (C) 2014 Elsevier B.V. All rights reserved.
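Le Chatelier's law, against which the method is checked, combines the pure-compound limits of a fuel mixture as LFL_mix = 1 / Σ(yᵢ / LFLᵢ), with yᵢ the fuel-basis mole fractions. A minimal sketch (the LFL values are common textbook approximations, not the paper's data):

```python
def le_chatelier_lfl(mole_fractions, lfl_values):
    """Lower flammability limit (vol%) of a fuel mixture by
    Le Chatelier's rule: LFL_mix = 1 / sum(y_i / LFL_i).
    mole_fractions are on a fuel-only basis and must sum to 1."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return 1.0 / sum(y / lfl for y, lfl in zip(mole_fractions, lfl_values))

# Equimolar methane (LFL ~5.0 vol%) / propane (LFL ~2.1 vol%) mixture
lfl_mix = le_chatelier_lfl([0.5, 0.5], [5.0, 2.1])
```

For the 50/50 methane-propane example this gives roughly 2.96 vol%, between the two pure-component limits as expected.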


Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


In this article we introduce a three-parameter extension of the bivariate exponential-geometric (BEG) law (Kozubowski and Panorska, 2005) [4]. We refer to this new distribution as the bivariate gamma-geometric (BGG) law. A bivariate random vector (X, N) follows the BGG law if N has a geometric distribution and X may be represented (in law) as a sum of N independent and identically distributed gamma variables, where these variables are independent of N. Statistical properties such as the moment generating and characteristic functions, moments and the variance-covariance matrix are provided. The marginal and conditional laws are also studied. We show that the BGG distribution is infinitely divisible, just as the BEG model is. Further, we provide alternative representations for the BGG distribution and show that it enjoys a geometric stability property. Maximum likelihood estimation and inference are discussed and a reparametrization is proposed in order to obtain orthogonality of the parameters. We present an application to a real data set where our model provides a better fit than the BEG model. Our bivariate distribution induces a bivariate Levy process with correlated gamma and negative binomial processes, which extends the bivariate Levy motion proposed by Kozubowski et al. (2008) [6]. The marginals of our Levy motion are a mixture of gamma and negative binomial processes, and we call it the BMixGNB motion. Basic properties such as stochastic self-similarity and the covariance matrix of the process are presented. The bivariate distribution at fixed time of our BMixGNB process is also studied and some results are derived, including a discussion about maximum likelihood estimation and inference. (C) 2012 Elsevier Inc. All rights reserved.
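The representation of X as a geometric sum of iid gamma variables lends itself to direct simulation: conditionally on N = n, X is distributed Gamma(nα, β) (shape nα, rate β), so E[N] = 1/p and E[X] = α/(βp). A minimal sketch with illustrative parameter values (not the paper's data):

```python
import numpy as np

def sample_bgg(alpha, beta, p, size, rng):
    """Draw (X, N) from the BGG law: N ~ Geometric(p) on {1, 2, ...};
    given N, X is the sum of N iid Gamma(alpha, rate beta) variables,
    i.e. X | N ~ Gamma(N*alpha, rate beta)."""
    n = rng.geometric(p, size=size)
    x = rng.gamma(shape=n * alpha, scale=1.0 / beta)
    return x, n

rng = np.random.default_rng(0)
x, n = sample_bgg(alpha=2.0, beta=1.5, p=0.3, size=200_000, rng=rng)
# Theoretical means: E[N] = 1/p = 3.33..., E[X] = alpha/(beta*p) = 4.44...
```

Such samples are a convenient way to sanity-check the closed-form moments and the maximum likelihood routines the paper discusses.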


The extraction of information about neural activity timing from BOLD signal is a challenging task as the shape of the BOLD curve does not directly reflect the temporal characteristics of electrical activity of neurons. In this work, we introduce the concept of neural processing time (NPT) as a parameter of the biophysical model of the hemodynamic response function (HRF). Through this new concept we aim to infer more accurately the duration of neuronal response from the highly nonlinear BOLD effect. The face validity and applicability of the concept of NPT are evaluated through simulations and analysis of experimental time series. The results of both simulation and application were compared with summary measures of HRF shape. The experiment that was analyzed consisted of a decision-making paradigm with simultaneous emotional distracters. We hypothesize that the NPT in primary sensory areas, like the fusiform gyrus, is approximately the stimulus presentation duration. On the other hand, in areas related to processing of an emotional distracter, the NPT should depend on the experimental condition. As predicted, the NPT in fusiform gyrus is close to the stimulus duration and the NPT in dorsal anterior cingulate gyrus depends on the presence of an emotional distracter. Interestingly, the NPT in right but not left dorsal lateral prefrontal cortex depends on the stimulus emotional content. The summary measures of HRF obtained by a standard approach did not detect the variations observed in the NPT. Hum Brain Mapp, 2012. (C) 2010 Wiley Periodicals, Inc.
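The NPT itself requires the full biophysical HRF model, but the "summary measures of HRF shape" used for comparison are easy to illustrate. The sketch below computes time-to-peak and FWHM for a canonical SPM-style double-gamma HRF; the parameter values (a1 = 6, a2 = 16, undershoot ratio 1/6, unit time constants) are the conventional defaults, not the paper's fitted values.

```python
import math

def double_gamma_hrf(t, a1=6.0, a2=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma HRF: gamma-density peak minus a scaled,
    later gamma-density undershoot (unit time constants)."""
    g = lambda t, a: t ** (a - 1) * math.exp(-t) / math.gamma(a)
    return g(t, a1) - ratio * g(t, a2)

def hrf_summary_measures(dt=0.01, t_max=32.0):
    """Time-to-peak and full width at half maximum of the sampled HRF."""
    ts = [i * dt for i in range(1, int(t_max / dt))]
    hs = [double_gamma_hrf(t) for t in ts]
    peak = max(hs)
    t_peak = ts[hs.index(peak)]
    above = [t for t, h in zip(ts, hs) if h >= peak / 2.0]
    return t_peak, above[-1] - above[0]
```

These shape summaries are what the abstract reports as failing to detect the NPT variations; the point of the NPT is precisely to go beyond them.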


This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It is focused on the optimization of the physical layer of digital communication systems and provides innovations for both multi- and single-carrier systems. For the former type we have first addressed the problem of capacity in the presence of several impairments. Moreover, we have extended the concept of the Single Frequency Network to the satellite scenario, and then introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems we have proposed a method to optimize constellation design in the presence of strong distortion, such as the nonlinear distortion introduced by a satellite's on-board high-power amplifier; we have then developed a method to calculate the bit/symbol error rate of a given constellation, achieving improved accuracy with respect to the traditional union bound at no additional complexity. Finally, we have designed a low-complexity SNR estimator, which saves one half of the multiplications with respect to the ML estimator while retaining similar estimation accuracy.


Spike-timing-dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse - the "first law" of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
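The calcium-based "first law" can be caricatured by a threshold rule of the kind used in calcium control models: moderate calcium produces depression, high calcium produces potentiation, low calcium produces no change. A toy sketch with invented threshold values (not taken from any specific fitted model):

```python
def calcium_plasticity(ca, theta_d=0.35, theta_p=0.55, eta=0.01):
    """Sign and magnitude of the synaptic weight change as a function of
    the intracellular calcium level ca (arbitrary units):
    high Ca -> potentiation, intermediate Ca -> depression, low Ca -> none."""
    if ca >= theta_p:
        return +eta   # LTP regime
    if ca >= theta_d:
        return -eta   # LTD regime
    return 0.0        # below both thresholds: no change
```

In such a model, pre/post spike timing matters only insofar as it shapes the calcium transient that is fed into this rule, which is the inversion of perspective the abstract argues for.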


BACKGROUND: To investigate whether non-rigid image registration reduces motion artifacts in triggered and non-triggered diffusion tensor imaging (DTI) of native kidneys. A secondary aim was to determine whether improvements through registration allow for omitting respiratory triggering. METHODS: Twenty volunteers underwent coronal DTI of the kidneys with nine b-values (10-700 s/mm²) at 3 Tesla. Image registration was performed using a multimodal non-rigid registration algorithm. Data processing yielded the apparent diffusion coefficient (ADC), the contribution of perfusion (FP), and the fractional anisotropy (FA). For comparison of data stability, the root mean square error (RMSE) of the fitting and the standard deviations within the regions of interest (SDROI) were evaluated. RESULTS: RMSEs decreased significantly after registration for triggered and also for non-triggered scans (P < 0.05). SDROI for ADC, FA, and FP were significantly lower after registration in both medulla and cortex of triggered scans (P < 0.01). Similarly, the SDROI of FA and FP decreased significantly in non-triggered scans after registration (P < 0.05). RMSEs were significantly lower in triggered than in non-triggered scans, both with and without registration (P < 0.05). CONCLUSION: Respiratory motion correction by registration of individual echo-planar images leads to clearly reduced signal variations in renal DTI for both triggered and particularly non-triggered scans. Secondarily, the results suggest that respiratory triggering still seems advantageous. J. Magn. Reson. Imaging 2014. (c) 2014 Wiley Periodicals, Inc.
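The diffusion fitting step can be illustrated in its simplest form: a mono-exponential ADC fit from the multi-b-value signal, S(b) = S0·exp(−b·ADC), by linear regression on log S. The renal analysis in the paper additionally models the perfusion fraction FP; the voxel values below are synthetic, not study data.

```python
import numpy as np

def fit_adc(b_values, signals):
    """Mono-exponential fit S(b) = S0 * exp(-b * ADC) via linear
    regression on log(S); returns (S0, ADC in mm^2/s)."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, intercept = np.polyfit(b, log_s, 1)
    return np.exp(intercept), -slope

# Synthetic noiseless voxel: S0 = 1000, ADC = 2.0e-3 mm^2/s
b = np.array([10, 50, 100, 200, 300, 400, 500, 600, 700], dtype=float)
S = 1000.0 * np.exp(-b * 2.0e-3)
S0_hat, adc_hat = fit_adc(b, S)
```

The RMSE of such a fit across registered versus unregistered echo-planar images is exactly the kind of stability measure the abstract compares.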


Pressure–Temperature–time (P–T–t) estimates of the syn-kinematic strain at the peak-pressure conditions reached during shallow underthrusting of the Briançonnais Zone in the Alpine subduction zone were obtained by thermodynamic modelling and 40Ar/39Ar dating in the Plan-de-Phasy unit (SE of the Pelvoux Massif, Western Alps). The dated phengite crystallized syn-kinematically in a shear zone recording top-to-the-N motion. By combining X-ray mapping with multi-equilibrium calculations, we estimate the phengite crystallization conditions at 270 ± 50 °C and 8.1 ± 2 kbar, at an age of 45.9 ± 1.1 Ma. Combining this P–T–t estimate with data from the literature allows us to constrain the timing and geometry of Alpine continental subduction. We propose that the Briançonnais units were scalped from the top of the slab during ongoing continental subduction and exhumed continuously until collision.


Carbon isotopically based estimates of CO2 levels have been generated from a record of the photosynthetic fractionation of 13C (epsilon p) in a central equatorial Pacific sediment core that spans the last ~255 ka. Contents of 13C in phytoplanktonic biomass were determined by analysis of C37 alkadienones. These compounds are exclusive products of Prymnesiophyte algae, which at present grow most abundantly at depths of 70-90 m in the central equatorial Pacific. A record of the isotopic composition of dissolved CO2 was constructed from isotopic analyses of the planktonic foraminifera Neogloboquadrina dutertrei, which calcifies at 70-90 m in the same region. Values of epsilon p, derived by comparison of the organic and inorganic delta values, were transformed to yield concentrations of dissolved CO2 (ce) based on a new, site-specific calibration of the relationship between epsilon p and ce. The calibration was based on reassessment of existing epsilon p versus ce data, which support a physiologically based model in which epsilon p is inversely related to ce. Values of PCO2, the partial pressure of CO2 that would be in equilibrium with the estimated concentrations of dissolved CO2, were calculated using Henry's law and the temperature determined from the alkenone unsaturation index UK37'. Uncertainties in these values arise mainly from uncertainties about the appropriateness (particularly over time) of the site-specific relationship between epsilon p and 1/ce. These are discussed in detail, and it is concluded that the observed record of epsilon p most probably reflects significant variations in Delta pCO2, the ocean-atmosphere disequilibrium, which appears to have ranged from ~110 µatm during glacial intervals (ocean > atmosphere) to ~60 µatm during interglacials. Fluxes of CO2 to the atmosphere would thus have been significantly larger during glacial intervals.
If this were characteristic of large areas of the equatorial Pacific, then greater glacial sinks for the equatorially evaded CO2 must have existed elsewhere. Statistical analysis of air-sea pCO2 differences and other parameters revealed significant (p < 0.01) inverse correlations of Delta pCO2 with sea surface temperature and with the mass accumulation rate of opal. The former suggests response to the strength of upwelling, the latter may indicate either drawdown of CO2 by siliceous phytoplankton or variation of [CO2]/[Si(OH)4] ratios in upwelling waters.
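The Henry's-law step above — converting an estimated dissolved CO2 concentration into the equilibrium partial pressure — reduces to pCO2 = ce/K0, with K0 the CO2 solubility. A minimal sketch; the K0 value is an illustrative round number (in practice K0 depends on temperature and salinity and is taken from empirical fits such as Weiss, 1974), and ce is invented:

```python
def pco2_from_dissolved(ce_mol_per_kg, k0_mol_per_kg_atm):
    """Henry's law: pCO2 (µatm) in equilibrium with dissolved CO2 at
    concentration ce, given the solubility K0."""
    return ce_mol_per_kg / k0_mol_per_kg_atm * 1e6

# Illustrative: ce = 10.5 µmol/kg, K0 = 0.030 mol kg^-1 atm^-1
pco2_sea = pco2_from_dissolved(10.5e-6, 0.030)
delta_pco2 = pco2_sea - 280.0  # disequilibrium vs. an assumed atmospheric 280 µatm
```

A positive delta_pco2 corresponds to the ocean > atmosphere case described for glacial intervals, i.e. a net CO2 flux to the atmosphere.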


During the six Heinrich Events of the last 70 ka, episodic calving from the circum-Atlantic ice sheets released large numbers of icebergs into the North Atlantic. These icebergs and the associated melt-water flux are hypothesized to have led to a shutdown of the Atlantic Meridional Overturning Circulation (AMOC) and severe cooling in large parts of the Northern Hemisphere. However, due to the limited availability of high-resolution records, the magnitude of sea surface temperature (SST) changes related to the impact of Heinrich Events on the mid-latitude North Atlantic is poorly constrained. Here we present a record of UK37'-based SSTs derived from sediments of Integrated Ocean Drilling Program (IODP) Site U1313, located at the southern end of the ice-rafted debris (IRD) belt in the mid-latitude North Atlantic (41°N). We demonstrate that all six Heinrich Events are associated with a rapid warming of surface waters by 2 to 4°C within a few thousand years. The presence of IRD leaves no doubt about the simultaneous timing and correlation between rapid surface water warming and Heinrich Events. We argue that this warming in the mid-latitude North Atlantic is related to a northward expansion of the subtropical gyre during Heinrich Events. Since a wide range of studies has demonstrated that Heinrich Events in the central IRD belt are associated with low SSTs, these results identify an anti-phased (seesaw) SST pattern during Heinrich Events between the mid-latitude (warm) and northern North Atlantic (cold). This highlights the complex response of surface water characteristics in the North Atlantic to Heinrich Events, which is poorly reproduced by freshwater hosing experiments and challenges the widely accepted view that within the IRD belt of the North Atlantic Heinrich Events coincide with periods of low SSTs.


Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture-independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage, instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
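The two-phase scheme — a one-time instruction calibration, then per-program composition of bounds — can be sketched as follows. The instruction names, timings, and counts below are hypothetical, not taken from the framework itself:

```python
def platform_cost_bounds(instr_counts, instr_times):
    """Combine program-level instruction-count bounds with one-time
    platform calibration data.

    instr_counts: {instruction: (lower_count, upper_count)} for a given
                  input size, as produced by the compile-time analysis.
    instr_times:  {instruction: (t_lower, t_upper)} in ns per execution,
                  as produced by the profiling/calibration stage.
    Returns (lower_bound_ns, upper_bound_ns) on execution time.
    """
    lo = sum(c_lo * instr_times[i][0] for i, (c_lo, _) in instr_counts.items())
    hi = sum(c_hi * instr_times[i][1] for i, (_, c_hi) in instr_counts.items())
    return lo, hi

# Hypothetical calibration for three abstract-machine instructions (ns)
times = {"call": (12.0, 18.0), "unify": (4.0, 7.0), "retry": (9.0, 15.0)}
# Hypothetical count bounds for one program at a fixed input size
counts = {"call": (100, 100), "unify": (300, 450), "retry": (0, 40)}
lo_ns, hi_ns = platform_cost_bounds(counts, times)
```

Porting to a new platform only requires regenerating `times` by re-running the calibration stage; the count bounds, being bytecode-level, carry over unchanged.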