Abstract:
The potential to sequester atmospheric carbon in agricultural and forest soils to offset greenhouse gas emissions has generated interest in measuring changes in soil carbon resulting from changes in land management. However, the inherent spatial variability of soil carbon limits the precision with which changes can be measured and hence the ability to detect them. We analyzed the variability of soil carbon by intensively sampling sites under different land management as a step toward developing efficient soil sampling designs. Sites were tilled cropland and a mixed deciduous forest in Tennessee, and old-growth and second-growth coniferous forest in western Washington, USA. Six soil cores within each of three microplots were taken as an initial sample, and an additional six cores were taken to simulate resampling. Soil C variability was greater in Washington than in Tennessee, and greater in less disturbed than in more disturbed sites. Using this protocol, our data suggest that differences on the order of 2.0 Mg C ha⁻¹ could be detected by collection and analysis of cores from at least five (tilled) or two (forest) microplots in Tennessee. Greater spatial variability in the forested sites in Washington increased the minimum detectable difference, but these systems, consisting of low-C sandy soil with irregularly distributed pockets of organic C in buried logs, are likely to rank among the most spatially heterogeneous of systems. Our results clearly indicate that consistent intra-microplot differences at all sites will enable detection of much more modest changes if the same microplots are resampled.
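As a rough illustration of how such a minimum detectable difference (MDD) relates to the number of microplots and their variability, the sketch below computes the MDD for a paired (sample/resample) design; the significance level, power, and paired standard deviation are illustrative assumptions, not the study's values.

```python
# Minimal sketch: MDD for a paired t-test on resampled microplots.
# alpha, power, and paired_sd are illustrative assumptions.
import numpy as np
from scipy import stats

def minimum_detectable_difference(paired_sd, n_microplots, alpha=0.05, power=0.80):
    """Smallest mean change in soil C (e.g. Mg C/ha) detectable with
    n_microplots paired observations at the given alpha and power."""
    df = n_microplots - 1
    t_alpha = stats.t.ppf(1 - alpha / 2, df)   # two-sided significance term
    t_beta = stats.t.ppf(power, df)            # power term
    return (t_alpha + t_beta) * paired_sd / np.sqrt(n_microplots)

# Example: 5 microplots with a hypothetical paired SD of 1.2 Mg C/ha.
print(minimum_detectable_difference(paired_sd=1.2, n_microplots=5))
```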
Abstract:
Purpose: We compared subjective blur limits for defocus and the higher-order aberrations of coma, trefoil, and spherical aberration. Methods: Spherical aberration was presented in both Zernike and Seidel forms. Black letter targets (0.1, 0.35, and 0.6 logMAR) on white backgrounds were blurred using an adaptive optics system for six subjects under cycloplegia with 5 mm artificial pupils. Three blur criteria were used: just noticeable, just troublesome, and just objectionable. Results: When expressed as wave aberration coefficients, the just noticeable blur limits for coma and trefoil were similar to those for defocus, whereas the just noticeable limits for Zernike spherical aberration and Seidel spherical aberration (the latter given as an "rms equivalent") were considerably smaller and larger, respectively, than the defocus limits. Conclusions: Blur limits increased more quickly for the higher-order aberrations than for defocus as the criterion changed from just noticeable to just troublesome and then to just objectionable.
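For readers who want to relate wave aberration coefficients to a more familiar blur scale, the sketch below converts a Zernike defocus coefficient into equivalent dioptric defocus using the standard visual-optics relation M = 4√3·c/r² (c in micrometres, pupil radius r in millimetres); the coefficient value shown is illustrative, not a result of this study.

```python
# Hedged sketch: equivalent defocus (dioptres) from a Zernike defocus
# coefficient. The example coefficient is invented for illustration.
import math

def equivalent_defocus_D(c20_um, pupil_radius_mm):
    """Equivalent defocus M = 4*sqrt(3)*c20 / r^2, in dioptres."""
    return 4 * math.sqrt(3) * c20_um / pupil_radius_mm ** 2

# A 5 mm artificial pupil (radius 2.5 mm), as used in the study:
print(equivalent_defocus_D(c20_um=0.25, pupil_radius_mm=2.5))  # ~0.28 D
```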
Abstract:
The problem of delays in the construction industry is a global phenomenon, and the construction industry in Brunei Darussalam is no exception. The goal of all parties involved in construction projects (owners, contractors, engineers and consultants, in either the public or private sector) is to complete the project on schedule, within the planned budget, with the highest quality and in the safest manner. Construction projects are frequently influenced either by success factors that help project parties reach their goal as planned, or by delay factors that stifle or postpone project completion. The purpose of this research is to identify the success and delay factors that can help project parties reach their intended goals with greater efficiency. This research extracted seven of the most important success factors according to the literature and seven of the most important delay factors identified by project parties, and then examined correlations between them to determine which were the most influential in preventing project delays. A comprehensive literature review was used to design and conduct the surveys: a specific survey was distributed to owners, contractors and engineers to examine the most critical delay factors, and a general survey was distributed to examine the correlation between the identified delay factors and the seven selected critical success factors. The collected data were evaluated by statistical methods to identify the most significant causes of delay, to measure the strength and direction of the relationship between critical success factors and delay factors, and to evaluate the influence of critical success factors on critical delay factors. A relative importance index was used to determine the relative importance of the various causes of delay. One- and two-way analyses of variance (ANOVA) were used to examine how the groups evaluated the influence of the critical success factors in avoiding or preventing each of the delay factors, and which success factors were perceived as most influential in avoiding or preventing the critical delay factors. Finally, the Delphi method, drawing on consensus from an expert panel, was used to rank and identify the seven critical success factors most needed to avoid the delay factors, and thereby improve the performance of Brunei building construction projects.
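A minimal sketch of the relative importance index computation mentioned above, under the common convention RII = ΣW / (A·N), where W are the Likert ratings, A is the highest possible rating, and N is the number of respondents; the factor names and ratings below are invented for illustration.

```python
# Hedged sketch: ranking delay factors by Relative Importance Index (RII).
def relative_importance_index(ratings, highest_rating=5):
    return sum(ratings) / (highest_rating * len(ratings))

survey = {
    "late payments": [5, 4, 5, 3, 5],        # hypothetical responses
    "design changes": [4, 4, 3, 4, 5],
    "material shortages": [3, 2, 4, 3, 3],
}
ranked = sorted(survey, key=lambda f: relative_importance_index(survey[f]),
                reverse=True)
for factor in ranked:
    print(f"{factor}: RII = {relative_importance_index(survey[factor]):.2f}")
```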
Abstract:
The theory of nonlinear dynamic systems provides new methods for handling complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and the electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for the analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher-order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects consisting of five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%. This system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onsets would help patients and observers to take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by higher-order spectra (HOS) has been reported to be a promising approach to differentiating between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. Results show that the classifiers were able to achieve 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study.
This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns, which are useful for visual interpretation by readers without a deep understanding of spectral analysis, such as medical practitioners. The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
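As a rough illustration of the pipeline described above, the sketch below extracts two common bispectrum summary features (mean magnitude and a bispectral entropy) from short signal segments and feeds them to an SVM. The single-segment direct bispectrum estimate and the two features are textbook choices assumed for illustration; the thesis's exact features, preprocessing, and data differ, and the placeholder signals are random.

```python
# Hedged sketch: HOS features -> SVM, not the thesis's exact method.
import numpy as np
from sklearn.svm import SVC

def bispectrum_features(x, nfft=64):
    """Mean bispectral magnitude and bispectral entropy of one segment
    (a proper estimate would average over many segments)."""
    X = np.fft.fft(x - np.mean(x), nfft)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for k in range(nfft // 2):
        for l in range(nfft // 2 - k):
            B[k, l] = X[k] * X[l] * np.conj(X[k + l])
    mag = np.abs(B)
    p = mag / (mag.sum() + 1e-12)                  # normalise to a distribution
    entropy = -np.sum(p * np.log(p + 1e-12))       # bispectral entropy
    return [mag.mean(), entropy]

# Hypothetical training data: labelled HRV segments (placeholders here).
segments = [np.random.randn(64) for _ in range(20)]
labels = [i % 2 for i in range(20)]
features = np.array([bispectrum_features(s) for s in segments])
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:3]))
```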
Abstract:
The 1990 European Community was taken by surprise by the urgency of demands from the newly elected Eastern European governments to become member countries. Those governments were honouring the mass social movement of the streets the year before, which had demanded free elections and a liberal economic system associated with "Europe". The mass movement had in fact been accompanied by much activity within institutional politics, in Western Europe, the former "satellite" states, the Soviet Union and the United States, to set up new structures, with German reunification and an expanded EC as the centre-piece. This paper draws on the writer's doctoral dissertation on mass media in the collapse of the Eastern bloc, focused on the Berlin Wall, documenting both public protests and institutional negotiations. For example, the writer, a correspondent in Europe at that time, recounts interventions by the German Chancellor, Helmut Kohl, at a European summit in Paris nine days after the fall of the Wall, and separate negotiations with the French President, Francois Mitterrand, on reunification and EU monetary union after 1992. Through such processes, the "European idea" would receive fresh impetus, though the EU which eventuated came with many altered expectations. It is argued here that, as a result of the shock of 1989, a "social" Europe can be seen emerging as a shared experience of daily life, especially among people born during the last two decades of European consolidation. The paper draws on the author's major research, in four parts: (1) field observation from the strategic vantage point of a news correspondent, including a treatment of evidence at the time of the wishes and intentions of the mass public (including the unexpected drive to join the European Community) and of governments (e.g. thoughts of a "Tiananmen Square solution" in East Berlin, versus the non-intervention policies of the Soviet leader, Mikhail Gorbachev); (2) a review of coverage of the crisis of 1989 by major news media outlets, treated as a history of the process; (3) as a comparison, and a test of accuracy and analysis, a review of conventional histories of the crisis appearing a decade later; and (4) a further review, and test, provided by the journalists responsible for the coverage at the time, as reflection on practice, obtained through semi-structured interviews.
Abstract:
The significant challenge faced by government in demonstrating value for money in the delivery of major infrastructure revolves around estimating the costs and benefits of alternative modes of procurement. Faced with this challenge, one approach is to focus on a dominant performance outcome visible on the opening day of the asset as the means to select the procurement approach. In this case, value for money becomes a largely nominal concept, determined by whether the selected procurement mode delivers the chosen performance outcome, notwithstanding possible under-delivery on other desirable performance outcomes and possibly excessive transaction costs. This paper proposes a mind-set change in this practice, to an approach in which the analysis commences with the conditions pertaining to the project and proceeds to deploy transaction cost and production cost theory to indicate a procurement approach that can claim superior value for money relative to competing procurement modes. This approach to delivering value for money in relative terms is developed in a first-order procurement decision-making model outlined in this paper. The model could be complementary to the Public Sector Comparator (PSC) in terms of cross-validation, and it more readily lends itself to public dissemination. As a possible alternative to the PSC, the model could save time and money because it requires project details to be prepared in less depth than the reference project demands, and it may send a stronger signal to the market that encourages more innovation and competition.
Abstract:
The structures of two polymorphs of the anhydrous cocrystal adduct of bis(quinolinium-2-carboxylate) DL-malic acid, one triclinic, the other monoclinic and disordered, have been determined at 200 K. Crystals of the triclinic polymorph 1 have space group P-1, with Z = 1 in a cell with dimensions a = 4.4854(4), b = 9.8914(7), c = 12.4670(8) Å, α = 79.671(5), β = 83.094(6), γ = 88.745(6)°. Crystals of the monoclinic polymorph 2 have space group P21/c, with Z = 2 in a cell with dimensions a = 13.3640(4), b = 4.4237(12), c = 18.4182(5) Å, β = 100.782(3)°. Both structures comprise centrosymmetric cyclic hydrogen-bonded quinolinic acid zwitterion dimers [graph set R₂²(10)] and 50% disordered malic acid molecules which lie across crystallographic inversion centres. However, the oxygen atoms of the malic acid carboxylic groups in 2 are 50% rotationally disordered, whereas in 1 they are ordered. There are similar primary malic acid carboxyl O-H...quinaldic acid hydrogen-bonding chain interactions in each polymorph, extended into two-dimensional structures, but in 1 this involves centrosymmetric cyclic head-to-head malic acid hydroxyl-carboxyl O-H...O interactions [graph set R₂²(10)] whereas in 2 the links are through single hydroxy-carboxyl hydrogen bonds.
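As an illustrative consistency check (not part of the paper), the reported triclinic cell parameters imply a unit-cell volume via the standard triclinic formula V = abc·√(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ):

```python
# Hedged sketch: unit-cell volume of polymorph 1 from its cell parameters.
import math

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Cell volume in Å³ from edge lengths (Å) and angles (degrees)."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Polymorph 1: a=4.4854, b=9.8914, c=12.4670 Å; α=79.671°, β=83.094°, γ=88.745°
print(triclinic_volume(4.4854, 9.8914, 12.4670, 79.671, 83.094, 88.745))  # ~540 Å³
```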
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameter-set search over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly with the proposed partial-model selection searching method.

Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION. Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one can find examples for which a method either performs favourably or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria at low SNR by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within each given model order. Understandably, the improvement in model-order estimation performance comes at the expense of additional computational complexity.
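To make the distinction concrete, here is a minimal sketch contrasting the two search strategies for a linear-regression design matrix, using AIC as the example criterion; the data generation and the criterion choice are illustrative assumptions, not the paper's simulation setup.

```python
# Hedged sketch: full-model (nested) vs partial-model (best-subset) order search.
import itertools
import numpy as np

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def fit(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

def full_model_order(X, y):
    """Traditional nested search: always the first k columns for order k."""
    return min(range(1, X.shape[1] + 1),
               key=lambda k: aic(y, fit(X[:, :k], y), k))

def partial_model_order(X, y):
    """Proposed search: best subset of size k within each order k."""
    best = {}
    for k in range(1, X.shape[1] + 1):
        subsets = itertools.combinations(range(X.shape[1]), k)
        best[k] = min(aic(y, fit(X[:, list(s)], y), k) for s in subsets)
    return min(best, key=best.get)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
y = 2 * X[:, 0] + 1.5 * X[:, 3] + rng.standard_normal(100)  # true subset {0, 3}
print(full_model_order(X, y), partial_model_order(X, y))
```

The nested search must include all columns up to index 3 to capture the second regressor, while the subset search can recover the true two-term model directly.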
Abstract:
Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal surface model due to the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
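A minimal sketch of how the EDC could be applied to choose the number of Zernike terms, assuming the penalised-residual form n·log(RSS/n) + k·c_n with the admissible penalty choice c_n = √(n·log n); the paper's exact penalty sequence may differ.

```python
# Hedged sketch: EDC-based selection of the number of Zernike terms.
import numpy as np

def edc(heights, residuals, k):
    n = len(heights)
    rss = np.sum(residuals ** 2)
    # Penalty grows faster than AIC's 2 and MDL's log(n): one admissible choice.
    return n * np.log(rss / n) + k * np.sqrt(n * np.log(n))

def select_num_terms(Z, heights, max_terms):
    """Pick the number of leading Zernike terms (columns of design matrix Z)
    that minimises EDC for the measured corneal heights."""
    scores = []
    for k in range(1, max_terms + 1):
        coef, *_ = np.linalg.lstsq(Z[:, :k], heights, rcond=None)
        scores.append(edc(heights, heights - Z[:, :k] @ coef, k))
    return int(np.argmin(scores)) + 1
```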
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
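A minimal sketch of the rank and census transforms identified above as prime candidates, assuming a square window; the window size and border handling are illustrative choices, not the report's implementation. Matching then proceeds with SAD on the rank-transformed images, or Hamming distance on the census bit strings.

```python
# Hedged sketch: rank and census transforms for stereo matching.
import numpy as np

def rank_transform(img, radius=2):
    """Replace each pixel by the count of window neighbours darker than it."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = np.sum(window < img[y, x])
    return out

def census_transform(img, radius=2):
    """Replace each pixel by a bit string encoding which neighbours are darker."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            bits = (window < img[y, x]).flatten()   # 25 bits for a 5x5 window
            out[y, x] = sum(int(b) << i for i, b in enumerate(bits))
    return out
```

Both transforms depend only on the ordering of intensities within the window, which is why they tolerate the radiometric distortion that defeats SAD.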
Abstract:
For many decades, correlation and the power spectrum have been the primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence, which is sufficient for a complete statistical description of Gaussian signals of known mean. However, there are practical situations where one needs to look beyond the autocorrelation of a signal to extract information regarding deviations from Gaussianity and the presence of phase relations. Higher-order spectra, also known as polyspectra, are spectral representations of higher-order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher-order statistics or higher-order spectra) can detect deviations from linearity, stationarity or Gaussianity in a signal. Most biomedical signals are non-linear, non-stationary and non-Gaussian in nature, and it can therefore be more advantageous to analyze them with HOS than with second-order correlations and power spectra. In this paper we discuss the application of HOS to different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal, and applications to other signals are reviewed.
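A hedged sketch of how a segment-averaged bispectrum and its bicoherence normalisation, the quantities underlying the plots discussed above, can be estimated with the direct (FFT-based) method; the segment length is an illustrative choice, and the signal below is a random placeholder.

```python
# Hedged sketch: direct bispectrum estimate with bicoherence normalisation.
import numpy as np

def bicoherence(x, seg_len=64):
    """Bicoherence in [0, 1], averaged over non-overlapping segments."""
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    num = np.zeros((seg_len // 2, seg_len // 2), dtype=complex)
    den1 = np.zeros((seg_len // 2, seg_len // 2))
    den2 = np.zeros_like(den1)
    for s in segs:
        X = np.fft.fft(s - np.mean(s))
        for k in range(seg_len // 2):
            for l in range(seg_len // 2 - k):
                t = X[k] * X[l]
                num[k, l] += t * np.conj(X[k + l])     # bispectrum term
                den1[k, l] += np.abs(t) ** 2           # normalisation terms
                den2[k, l] += np.abs(X[k + l]) ** 2
    return np.abs(num) / (np.sqrt(den1 * den2) + 1e-12)

rng = np.random.default_rng(1)
bic = bicoherence(rng.standard_normal(1024))   # placeholder signal
print(bic.shape, bic.max())
```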
Abstract:
As online social spaces continue to grow in importance, the complex relationship between users and the private providers of the platforms continues to raise increasingly difficult questions about legitimacy in online governance. This article examines two issues that go to the core of legitimate governance in online communities: how are rules enforced and punishments imposed, and how should the law support legitimate governance and protect participants from the illegitimate exercise of power? Because the rules of online communities are generally backed, ultimately, by contractual terms of service, the imposition of punishment for the breach of internal rules sits in a difficult conceptual gap between criminal law and the predominantly compensatory remedies of contractual doctrine. When theorists have addressed the need for the rules of virtual communities to be enforced, a dichotomy has generally emerged between the appropriate role of criminal law for 'real' crimes and the private, internal resolution of 'virtual' or 'fantasy' crimes. In this structure, the punitive effect of internal measures is downplayed and the harm that internal sanctions can cause to participants is systemically undervalued.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done using machine learning algorithms which learn from examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality; in this work, data is obtained from two sources, the NASA Metrics Data Program and the open-source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data, Naive Bayes and the support vector machine (SVM), and the predictive results are compared to those of previous efforts, proving superior on selected data sets and comparable on others. In addition, a new classification method, Rank Sum, is proposed, in which a ranking abstraction is laid over bin densities for each class and a classification is determined from the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into a 2D rank-sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
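A hedged sketch of the Rank Sum idea as described above: for each feature, bin the training values, rank each class's bin densities, and classify a module by the class with the winning sum of ranks across features. The binning scheme and tie handling below are assumptions for illustration, not the thesis's exact method.

```python
# Hedged sketch of a Rank Sum-style classifier over software metrics.
import numpy as np

def fit_rank_sum(X, y, n_bins=10):
    model = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins=n_bins)
        ranks = {}
        for c in np.unique(y):
            # Per class: density in each bin, then rank bins (denser -> higher).
            dens, _ = np.histogram(X[y == c, j], bins=edges, density=True)
            ranks[c] = dens.argsort().argsort()    # ranks 0..n_bins-1
        model.append((edges, ranks))
    return model

def predict_rank_sum(model, x):
    scores = {}
    for (edges, ranks), v in zip(model, x):
        b = int(np.clip(np.searchsorted(edges, v) - 1, 0, len(edges) - 2))
        for c, r in ranks.items():
            scores[c] = scores.get(c, 0) + r[b]    # accumulate rank per class
    return max(scores, key=scores.get)
```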