951 results for Statistical Analysis
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In 2004, in response to society's increasing demands for overloading restrictions in Anhui, a comprehensive provincial survey of overloaded transportation was carried out, with World Bank support, to evaluate the current state of overloading and the efficiency of enforcement. Six site surveys were conducted in the Hefei, Fuyang, Luan, Wuhu, Huainan and Huangshan areas, each covering four main topics: traffic volume, axle load, freight information and registration information. Statistical analysis of the survey data supports the following conclusions: vehicle overloading is currently a widespread and serious problem on arterial highways in Anhui; traffic loads have far exceeded the design capacity of the highways and have caused widespread premature pavement damage, especially on rigid pavements; and overloaded trucks remain engaged in highway freight transportation largely unimpeded, owing to disordered enforcement strategies and deficient inspection technologies.
Abstract:
In this paper, cognitive load analysis via acoustic- and CAN-Bus-based driver performance metrics is employed to assess two different commercial speech dialog systems (SDS) during in-vehicle use. Several metrics are proposed to measure increases in stress, distraction and cognitive load, and we compare these measures with a statistical analysis of the speech recognition component of each SDS. It is found that care must be taken when designing an SDS, as it may increase cognitive load, which can be observed through increased speech response delay (SRD), changes in speech production due to negative emotion towards the SDS, and decreased driving performance on lateral control tasks. From this study, guidelines are presented for designing systems intended for use in vehicular environments.
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stably operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration), and a threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.
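The energy-threshold idea can be illustrated with a toy sketch. The window length, baseline count and mean-plus-k-sigma threshold rule below are illustrative assumptions, not the thesis's actual statistical model:

```python
import numpy as np

def energy_detector(signal, window=200, n_baseline=10, k=4.0):
    """Flag windows whose disturbance energy exceeds a baseline threshold.

    The first `n_baseline` windows are assumed to reflect normal, stable
    operation; the alarm threshold is their mean energy plus k standard
    deviations (an illustrative rule only).
    """
    n_win = len(signal) // window
    energies = np.array([np.sum(signal[i * window:(i + 1) * window] ** 2)
                         for i in range(n_win)])
    baseline = energies[:n_baseline]
    threshold = baseline.mean() + k * baseline.std()
    alarms = np.flatnonzero(energies > threshold)
    return energies, threshold, alarms

# Synthetic check: ambient noise only, then a lightly damped oscillation
# (sustained extra energy) appears at sample 4000.
rng = np.random.default_rng(0)
t = np.arange(8000) / 100.0
x = 0.1 * rng.standard_normal(t.size)
x[4000:] += np.sin(2 * np.pi * 0.5 * t[4000:]) * np.exp(-0.01 * t[:4000])
energies, thr, alarms = energy_detector(x)  # first alarm at or after window 20
```

As in the thesis's EBD, this detector only reports *that* the windowed energy has departed from normal behaviour; it cannot say which mode is responsible.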
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.
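The chi-squared whiteness check on the innovation spectrum can be sketched as follows. The innovation is simulated directly here rather than produced by a real Kalman filter, and the normalisation and alarm rule are simplified assumptions:

```python
import numpy as np

def innovation_whiteness_alarm(innovation, alpha=1e-6):
    """Chi-squared whiteness check on an innovation sequence.

    For a white innovation of variance s2, each normalised periodogram
    bin p_k = 2|X_k|^2 / (N s2) is approximately chi-squared with two
    degrees of freedom, whose CDF is 1 - exp(-x/2); the (1 - alpha)
    quantile is therefore -2 ln(alpha). Bins above that threshold flag
    spectral peaks, i.e. the filter model no longer matches the system.
    """
    x = np.asarray(innovation, dtype=float)
    n = x.size
    s2 = x.var()
    X = np.fft.rfft(x - x.mean())
    p = 2.0 * np.abs(X[1:-1]) ** 2 / (n * s2)   # skip DC and Nyquist bins
    threshold = -2.0 * np.log(alpha)
    peaks = np.flatnonzero(p > threshold) + 1   # spectral bin indices
    return peaks, peaks / n, threshold          # bins, cycles/sample, thr

# A healthy (white) innovation versus one corrupted by an undamped mode.
rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
t = np.arange(4096)
corrupted = white + 0.5 * np.sin(2 * np.pi * 0.1 * t)
peaks_w, _, _ = innovation_whiteness_alarm(white)
peaks_c, freqs_c, _ = innovation_whiteness_alarm(corrupted)
```

The flagged bin frequencies identify which modal frequency broke the whiteness of the innovation, matching the dual alarm-and-identify role described above.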
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
Abstract:
Many traffic situations require drivers to cross or merge into a stream having higher priority. Gap acceptance theory enables us to model such processes to analyse traffic operation. This discussion demonstrated that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on their largest rejected gap and accepted gap. This method shares some common features with the Maximum Likelihood Estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheets and is particularly analytically transparent. The method is considered not to bias the estimate of the critical gap as a result of very small or very large rejected gaps. However, it requires a sample large enough that largest rejected gap/accepted gap pairs are reasonably represented within a fairly narrow highest-likelihood search band.
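A minimal sketch of the numerical-search idea, under the illustrative assumption of normally distributed critical gaps with a fixed spread (the paper's own distributional model and tuning may differ):

```python
import numpy as np
from math import erf, log, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def critical_gap_search(rejected, accepted, sigma=0.8,
                        grid=np.arange(2.0, 8.0, 0.01)):
    """Grid search for the most likely mean critical gap.

    Each driver's critical gap is assumed to lie between their largest
    rejected gap and their accepted gap; critical gaps are assumed
    normally distributed with fixed spread `sigma` (illustrative
    choices). The candidate mean maximising the joint log-likelihood
    of those intervals is returned.
    """
    best_mu, best_ll = None, -np.inf
    for mu in grid:
        probs = [norm_cdf(a, mu, sigma) - norm_cdf(r, mu, sigma)
                 for r, a in zip(rejected, accepted)]
        if min(probs) <= 0.0:        # candidate mean far outside the data
            continue
        ll = sum(log(p) for p in probs)
        if ll > best_ll:
            best_mu, best_ll = mu, ll
    return best_mu

# Synthetic drivers whose true mean critical gap is about 4 s.
rng = np.random.default_rng(2)
true_tc = rng.normal(4.0, 0.8, size=200)
rejected = true_tc - rng.uniform(0.2, 1.5, size=200)  # largest rejected gap
accepted = true_tc + rng.uniform(0.2, 1.5, size=200)  # accepted gap
mu_hat = critical_gap_search(rejected, accepted)
```

Because the whole likelihood surface is evaluated over a simple grid, the search is easy to reproduce in a spreadsheet, which is the transparency the abstract highlights.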
Abstract:
Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a non-integral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data points, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
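A direct segment-averaged bispectrum estimate of the kind analysed above can be sketched as follows (the Hann window and non-overlapping segment scheme are illustrative choices; no bias or variance corrections are applied):

```python
import numpy as np

def bispectrum(x, nfft=256):
    """Segment-averaged bispectrum B(k1,k2) = <X(k1) X(k2) X*(k1+k2)>.

    Hann-windowed, non-overlapping segments; only the square
    k1, k2 < nfft/4 is computed. A sketch for illustrating phase
    coupling, with no leakage corrections.
    """
    win = np.hanning(nfft)
    segs = [x[i:i + nfft] * win for i in range(0, len(x) - nfft + 1, nfft)]
    kmax = nfft // 4
    ks = np.arange(kmax)
    idx = np.add.outer(ks, ks)                  # k1 + k2 for every pair
    B = np.zeros((kmax, kmax), dtype=complex)
    for s in segs:
        X = np.fft.fft(s)
        B += np.outer(X[ks], X[ks]) * np.conj(X[idx])
    return B / len(segs)

# Phase-coupled triad: modes at bins 20, 32 and 52 = 20 + 32, with the
# third phase equal to the sum of the first two (all zero here).
n, nfft = 256 * 64, 256
t = np.arange(n)
f1, f2 = 20 / nfft, 32 / nfft
x = (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
     + np.cos(2 * np.pi * (f1 + f2) * t))
x += 0.1 * np.random.default_rng(3).standard_normal(n)
B = bispectrum(x)
peak = np.unravel_index(np.argmax(np.abs(B)), B.shape)  # at (20, 32)/(32, 20)
```

Averaging over segments keeps only contributions whose phases are coupled across the record, which is why the coupled triad dominates the estimate.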
An approach to statistical lip modelling for speaker identification via chromatic feature extraction
Abstract:
This paper presents a novel technique for the tracking of moving lips for the purpose of speaker identification. In our system, a model of the lip contour is formed directly from chromatic information in the lip region; iterative refinement of contour point estimates is not required. Colour features are extracted from the lips via concatenated profiles taken around the lip contour. Reduction of the order of the lip features is obtained via principal component analysis (PCA) followed by linear discriminant analysis (LDA). Statistical speaker models are built from the lip features based on the Gaussian mixture model (GMM). Identification experiments performed on the M2VTS database show encouraging results.
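The PCA-LDA-GMM pipeline can be sketched with scikit-learn on synthetic stand-in features (the data, dimensions and mixture sizes below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for concatenated chromatic lip-contour profiles:
# 60-dimensional feature vectors simulated for 3 speakers.
rng = np.random.default_rng(4)
n_per, dim, n_spk = 80, 60, 3
speaker_means = rng.normal(0.0, 3.0, size=(n_spk, dim))
X = np.vstack([m + rng.standard_normal((n_per, dim)) for m in speaker_means])
y = np.repeat(np.arange(n_spk), n_per)

# Order reduction: PCA followed by LDA, as described in the abstract.
pca_feats = PCA(n_components=20).fit_transform(X)
feats = LinearDiscriminantAnalysis(
    n_components=n_spk - 1).fit_transform(pca_feats, y)

# One GMM speaker model per speaker, trained on that speaker's features.
models = [GaussianMixture(n_components=2, random_state=0).fit(feats[y == s])
          for s in range(n_spk)]

def identify(f):
    """Return the speaker whose GMM assigns the highest log-likelihood."""
    return int(np.argmax([m.score(f.reshape(1, -1)) for m in models]))

train_acc = np.mean([identify(feats[i]) == y[i] for i in range(len(y))])
```

Identification simply scores a test feature vector against every speaker's GMM and picks the maximum-likelihood model, as in standard GMM speaker identification.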
Abstract:
The Request For Proposal (RFP) in the design-build (DB) procurement arrangement is a document in which an owner develops his requirements and conveys the project scope to DB contractors. Owners should provide an appropriate level of design in DB RFPs to adequately describe their requirements without compromising the prospects for innovation. This paper examines and compares the different levels of owner-provided design in DB RFPs through content analysis of 84 RFPs for public DB projects advertised between 2000 and 2010, with an aggregate contract value of over $5.4 billion. A statistical analysis was also conducted to explore the relationship between the proportion of owner-provided design and other project information, including project type, advertisement time, project size, contractor selection method, procurement process and contract type. The results show that in the majority (64.8%) of the RFPs the owner provides less than 10% of the design. The owner-provided design proportion has a significant association with project type, project size, contractor selection method and contract type. In addition, owners have generally provided less design in recent years than hitherto. The research findings also provide owners with perspectives from which to determine the appropriate level of owner-provided design in DB RFPs.
Abstract:
Decellularized tissues can provide a unique biological environment for regenerative medicine applications, but only if minimal disruption of their microarchitecture is achieved during the decellularization process. The goal is to keep the structural integrity of such constructs as functional as that of the tissues from which they were derived. In this work, cartilage-on-bone laminates were decellularized through enzymatic, non-ionic and ionic protocols. The study investigated the effects of the decellularization process on the microarchitecture of the cartilaginous extracellular matrix (ECM), determining the extent to which each process deteriorated the structural organization of the network. High-resolution microscopy was used to capture cross-sectional images of samples before and after treatment. The variation in microarchitecture was then analysed using a well-defined fast Fourier image-processing algorithm. Statistical analysis of the results revealed significant alterations among the aforementioned protocols (p < 0.05). Ranked by their effectiveness in disrupting ECM integrity, the treatments were ordered: Trypsin > SDS > Triton X-100.
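The FFT-based quantification of microarchitecture can be illustrated with a simple orientation-anisotropy sketch (this is a generic directionality measure, not the paper's specific algorithm):

```python
import numpy as np

def fft_anisotropy(img):
    """Orientation-anisotropy index from the 2-D power spectrum.

    Spectral energy is binned into 18 orientation sectors; the ratio of
    the strongest sector to the mean sector energy is returned. Aligned
    structures score high; isotropic (disrupted) structures score near 1.
    """
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    P = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    angle = np.mod(np.arctan2(yy, xx), np.pi)           # orientation in [0, pi)
    sector = np.minimum((angle / np.pi * 18).astype(int), 17)
    energy = np.bincount(sector.ravel(), weights=P.ravel(), minlength=18)
    return energy.max() / energy.mean()

# An aligned "fibre" pattern versus an isotropic (disrupted) one.
rng = np.random.default_rng(5)
y, x = np.mgrid[0:128, 0:128]
aligned = np.sin(2 * np.pi * x / 8.0)        # strongly oriented stripes
isotropic = rng.standard_normal((128, 128))  # no preferred orientation
```

Comparing this index before and after a treatment gives a scalar summary of how much the treatment degraded the directional organization of the network.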
Abstract:
Introduction: The ability to regulate joint stiffness and coordinate movement during landing when impaired by muscle fatigue has important implications for knee function. Unfortunately, the literature examining fatigue effects on landing mechanics suffers from a lack of consensus. Inconsistent results can be attributed to variable fatigue models, as well as to grouping variable responses between individuals when statistically detecting differences between conditions. There remains a need to examine fatigue effects on knee function during landing with attention to these methodological limitations. Aim: The purpose of this study, therefore, was to examine the effects of isokinetic fatigue on pre-impact muscle activity and post-impact knee mechanics during landing using single-subject analysis. Methodology: Sixteen male university students (22.6±3.2 yrs; 1.78±0.07 m; 75.7±6.3 kg) performed maximal concentric and eccentric knee extensions in a reciprocal manner on an isokinetic dynamometer and step-landing trials on 2 occasions. On the first occasion each participant performed 20 step-landing trials from a knee-high platform followed by 75 maximal contractions on the isokinetic dynamometer. The isokinetic data were used to calculate the operational definition of fatigue. On the second occasion, after a minimum rest of 14 days, participants performed 2 sets of 20 step-landing trials, followed by isokinetic exercise until the operational definition of fatigue was met and a final post-fatigue set of 20 step-landing trials. Results: Single-subject analyses revealed that isokinetic fatigue of the quadriceps induced variable responses in pre-impact activation of the knee extensors and flexors (frequency, onset timing and amplitude) and in post-impact knee mechanics (stiffness and coordination). In general, however, isokinetic fatigue induced significant (p<0.05) reductions in quadriceps activation frequency, delayed onset and increased amplitude.
In addition, knee stiffness was significantly (p<0.05) increased in some individuals, and sagittal coordination was impaired. Conclusions: Pre-impact activation and post-impact mechanics were adjusted in patterns that were unique to the individual, which could not be identified using traditional group-based statistical analysis. The results suggested that individuals optimised knee function differently to satisfy competing demands, such as minimising energy expenditure, as well as maximising joint stability and sensory information.
Abstract:
Introduction: Evidence concerning the alteration of knee function during landing suffers from a lack of consensus. This uncertainty can be attributed to methodological flaws, particularly in relation to the statistical analysis of variable human movement data. Aim: The aim of this study was to compare single-subject and group analysis in quantifying alterations in the magnitude and within-participant variability of knee mechanics during a step-landing task. Methods: A group of healthy men (N = 12) stepped down from a knee-high platform for 60 consecutive trials, each trial separated by a 1-minute rest. The magnitude and within-participant variability of sagittal knee stiffness and coordination of the landing leg during the immediate post-impact period were evaluated. Coordination of the knee was quantified in the sagittal plane by calculating the mean absolute relative phase of sagittal shank and thigh motion (MARP1) and between knee rotation and knee flexion (MARP2). Changes across trials were compared between group and single-subject statistical analyses. Results: The group analysis detected significant reductions in MARP1 magnitude. However, the single-subject analyses detected changes in all dependent variables, including increases in variability with task repetition. Between-individual variation was also present in the timing, size and direction of alterations with task repetition. Conclusion: The results have important implications for the interpretation of existing information regarding the adaptation of knee mechanics to interventions such as fatigue, footwear or landing height. It is proposed that a familiarisation session be incorporated on a single-subject basis in future experiments prior to an intervention.
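The mean absolute relative phase (MARP) measure can be sketched as follows, using an FFT-based analytic signal to obtain instantaneous phases (the detrending and wrapping choices are illustrative):

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based analytic signal (Hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

def marp(seg_a, seg_b):
    """Mean absolute relative phase between two segment motions, in degrees.

    The relative phase is the difference of the segments' instantaneous
    phases, wrapped to [-pi, pi]; MARP is the mean of its absolute value.
    """
    rel = (analytic_phase(seg_a - np.mean(seg_a))
           - analytic_phase(seg_b - np.mean(seg_b)))
    rel = np.angle(np.exp(1j * rel))        # wrap to [-pi, pi]
    return np.degrees(np.mean(np.abs(rel)))

# Two sinusoidal "segment angles": in phase, and with a quarter-cycle lag.
t = np.arange(400) / 200.0                  # 2 s at 200 Hz
thigh = np.sin(2 * np.pi * 2.0 * t)
shank_inphase = np.sin(2 * np.pi * 2.0 * t)
shank_lagged = np.sin(2 * np.pi * 2.0 * t - np.pi / 2)
```

Perfectly in-phase motion gives a MARP near 0 degrees, while a quarter-cycle lag gives about 90 degrees, so the measure scales intuitively with loss of coordination.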
Abstract:
When a community already torn by an event such as a prolonged war is then hit by a natural disaster, the negative longer-term impact of this subsequent disaster can be extremely devastating. Natural disasters further damage already destabilised and demoralised communities, making it much harder for them to be resilient and to recover. Communities often face enormous challenges during the immediate recovery and the subsequent long-term reconstruction periods, mainly owing to the lack of a viable community involvement process. In post-war settings, affected communities, including those internally displaced, are often conceived of as being completely disabled and are hardly ever consulted when reconstruction projects are being instigated. This lack of community involvement often leads to poor project planning, decreased community support, and an unsustainable completed project. The impact of war, coupled with the tensions created by uninhabitable and poor housing provision, often hinders the affected residents from integrating permanently into their home communities. This paper outlines a number of fundamental factors that act as barriers to community participation in relation to natural disasters in post-war settings. The paper is based on a statistical analysis of, and findings from, a questionnaire survey administered in early 2012 in Afghanistan.
Abstract:
The promise of ‘big data’ has generated a great deal of interest in the development of new approaches to research in the humanities and social sciences, as well as a range of important critical interventions which warn of an unquestioned rush to ‘big data’. Drawing on experiences gained in developing innovative ‘big data’ approaches to social media research, this paper examines some of the repercussions for the scholarly research and publication practices of those researchers who do pursue the path of ‘big data’-centric investigation in their work. As researchers import the tools and methods of highly quantitative, statistical analysis from the ‘hard’ sciences into computational, digital humanities research, must they also subscribe to the language and assumptions underlying such ‘scientificity’? If so, how does this affect the choices made in gathering, processing, analysing and disseminating the outcomes of digital humanities research? In particular, is there a need to rethink the forms and formats of publishing scholarly work in order to enable the rigorous scrutiny and replicability of research outcomes?
Abstract:
Background: Migraine is a brain disorder affecting ∼12% of the Caucasian population. Genes involved in neurological, vascular and hormonal pathways have all been implicated in predisposing individuals to developing migraine. The migraineur presents with disabling head pain and varying symptoms of nausea, emesis, photophobia, phonophobia and, occasionally, visual sensory disturbances. Biochemical and genetic studies have demonstrated dysfunction of the neurotransmitters serotonin, dopamine and glutamate in migraine susceptibility. Glutamate mediates the transmission of excitatory signals in the mammalian central nervous system that affect normal brain function, including cognition, memory and learning. The aim of this study was to investigate polymorphisms in the GRIA2 and GRIA4 genes, which encode subunits of the ionotropic AMPA receptor, for association with migraine in an Australian Caucasian population. Methods: Genotypes for each polymorphism were determined using high-resolution melt analysis and the RFLP method. Results: Statistical analysis showed no association between migraine and the GRIA2 and GRIA4 polymorphisms investigated. Conclusions: Although this study showed no significant association between the tested GRIA gene variants and migraine in our Australian Caucasian population, further investigation of other components of the glutamatergic system may help to elucidate whether there is a relationship between glutamatergic dysfunction and migraine.
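A genotype-association check of the kind reported can be sketched as a Pearson chi-squared test on a case-control contingency table (the counts below are hypothetical, not the study's data):

```python
import numpy as np

def chi2_association(table):
    """Pearson chi-squared statistic and degrees of freedom for a table."""
    table = np.asarray(table, dtype=float)
    expected = (table.sum(axis=1, keepdims=True)
                * table.sum(axis=0, keepdims=True) / table.sum())
    stat = float(((table - expected) ** 2 / expected).sum())
    dof = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, dof

# Hypothetical genotype counts (rows: migraineurs, controls;
# columns: AA, AG, GG) -- illustrative numbers, not the study's data.
counts = [[120, 230, 150],
          [115, 240, 145]]
stat, dof = chi2_association(counts)
critical = 5.991              # chi-squared 95th percentile for dof = 2
associated = stat > critical  # False here: no significant association
```

A statistic below the critical value, as with these illustrative counts, corresponds to the "no significant association" outcome the abstract reports.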