123 results for Time equivalent approach
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS), where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM) and suitable for complex-valued multiple-input–multiple-output processing, is introduced. In the proposed method, the problem is formulated as a constrained optimization, analogously to the conventional ELM classifier formulation, and the associated Lagrangian is computed using induced RKHS kernels within a Wirtinger calculus framework. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which simultaneously satisfies the criteria of smallest training error and smallest norm of the output weights. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections; the six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers over their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because they can perform classification tasks quickly, the proposed formulations are of interest for real-time applications.
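As a rough illustration of the least-squares machinery that ELM-style classifiers rely on, the sketch below extends the familiar regularized ELM solution for the output weights to complex-valued data by using the Hermitian (conjugate) transpose. The random feature map, activation, and parameter names are illustrative assumptions only; they are not the authors' CELM formulation, which works with induced RKHS kernels and Wirtinger calculus.

```python
import numpy as np

def celm_train(X, T, L=100, C=1.0, rng=np.random.default_rng(0)):
    """Minimal complex-valued ELM-style training sketch (illustrative only).

    X : (N, d) complex-valued inputs
    T : (N, m) target matrix (e.g. class indicator columns)
    L : number of random hidden nodes
    C : regularization constant
    """
    d = X.shape[1]
    # Random complex hidden-layer weights and biases (an illustrative choice).
    W = rng.standard_normal((d, L)) + 1j * rng.standard_normal((d, L))
    b = rng.standard_normal(L) + 1j * rng.standard_normal(L)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Regularized least squares for the output weights; .conj().T is the
    # Hermitian transpose, the complex analogue of the real-valued ELM solution.
    beta = np.linalg.solve(H.conj().T @ H + np.eye(L) / C, H.conj().T @ T)
    return W, b, beta

def celm_predict(X, W, b, beta):
    # Scores per class; take the argmax of the real part for a hard decision.
    return np.tanh(X @ W + b) @ beta
```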
Abstract:
The induction of classification rules from previously unseen examples is one of the most important data mining tasks in science as well as in commercial applications. In order to reduce the influence of noise in the data, ensemble learners are often applied. However, most ensemble learners are based on decision tree classifiers, which are affected by noise. The Random Prism classifier has recently been proposed as an alternative to the popular Random Forests classifier, which is based on decision trees. Random Prism is based on the Prism family of algorithms, which is more robust to noise. However, like most ensemble classification approaches, Random Prism also does not scale well to large training data. This paper presents a thorough discussion of Random Prism and of a recently proposed parallel version of it, called Parallel Random Prism, which is based on the MapReduce programming paradigm. The paper provides, for the first time, a novel theoretical analysis of the proposed technique and an in-depth experimental study, which show that Parallel Random Prism scales well with a large number of training examples, a large number of data features and a large number of processors. The expressiveness of the decision rules that our technique produces makes it a natural choice for Big Data applications where informed decision making increases the user's trust in the system.
Abstract:
This paper is intended both as a contribution to the conceptual work on process in economic thought and as an attempt to connect a non-institutionalist, non-evolutionary thinker to it. The paper has two principal objectives: (i) to delineate a broad, philosophically grounded conception of what an economic process theory (EPT) is; and (ii) to locate the contributions of George Shackle within this broad conception of EPT. In pursuing these two objectives, I hope to draw out the originality and significance of Shackle’s economics with a particular emphasis on what he adds to process conceptions developed within other heterodox traditions such as institutional and evolutionary economics. I will also highlight some of the perceived limitations of Shackle’s approach and link them to the limitations of process philosophy.
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme in a high-dimensional (65,500-dimensional) simplified ocean model is explored. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values, and the effect of using different model error parameters in the truth compared with the ensemble model runs.
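As an illustration of the first ingredient only (the relaxation proposal), the sketch below nudges each particle towards the observations after the model step; the crucial equivalent-weights proposal and the importance-weight bookkeeping are deliberately omitted, and the relaxation form, observation operator H, and strength tau are assumptions for illustration rather than the authors' scheme.

```python
import numpy as np

def relaxation_proposal_step(particles, model_step, y_obs, H, tau):
    # particles: (N, n) ensemble of state vectors; model_step: one model
    # integration step; H: linear observation operator; tau: relaxation strength.
    out = np.empty_like(particles)
    for i, x in enumerate(particles):
        x_f = model_step(x)                 # standard model transition
        innov = y_obs - H @ x_f             # misfit in observation space
        out[i] = x_f + tau * (H.T @ innov)  # relax the particle towards the data
    return out
```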
Abstract:
Accurate and reliable rain rate estimates are important for various hydrometeorological applications. Consequently, rain sensors of different types have been deployed in many regions. In this work, measurements from different instruments, namely, rain gauge, weather radar, and microwave link, are combined for the first time to estimate with greater accuracy the spatial distribution and intensity of rainfall. The objective is to retrieve the rain rate that is consistent with all these measurements while incorporating the uncertainty associated with the different sources of information. Assuming the problem is not strongly nonlinear, a variational approach is implemented and the Gauss–Newton method is used to minimize the cost function containing proper error estimates from all sensors. Furthermore, the method can be flexibly adapted to additional data sources. The proposed approach is tested using data from 14 rain gauges and 14 operational microwave links located in the Zürich area (Switzerland) to correct the prior rain rate provided by the operational radar rain product from the Swiss meteorological service (MeteoSwiss). A cross-validation approach demonstrates the improvement of rain rate estimates when assimilating rain gauge and microwave link information.
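A minimal sketch of a variational retrieval of this kind, assuming a quadratic background (radar prior) term and Gaussian errors for each sensor, is given below; it iterates Gauss-Newton updates of the cost function. The interface (the obs_list entries and Jacobian callables) and parameter names are illustrative assumptions, not the operational implementation.

```python
import numpy as np

def gauss_newton_rain_analysis(x_b, B_inv, obs_list, n_iter=5):
    # Minimise J(x) = (x - x_b)' B^-1 (x - x_b)
    #               + sum_k (y_k - h_k(x))' R_k^-1 (y_k - h_k(x)),
    # where x_b is the prior (e.g. radar) rain field and obs_list holds one
    # tuple (y, h, jac, R_inv) per sensor (gauges, microwave links, ...).
    x = x_b.copy()
    for _ in range(n_iter):
        A = B_inv.copy()               # Gauss-Newton approximation of the Hessian
        g = B_inv @ (x - x_b)          # gradient of the background term
        for y, h, jac, R_inv in obs_list:
            Hk = jac(x)                # sensor Jacobian at the current state
            r = y - h(x)               # innovation
            A += Hk.T @ R_inv @ Hk
            g -= Hk.T @ R_inv @ r
        x = x - np.linalg.solve(A, g)  # Gauss-Newton step
    return x
```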
Abstract:
There is little consensus on how agriculture will meet future food demands sustainably. Soils and their biota play a crucial role by mediating ecosystem services that support agricultural productivity. However, a multitude of site-specific environmental factors and management practices interact to affect the ability of soil biota to perform vital functions, confounding the interpretation of results from experimental approaches. Insights can be gained through models, which integrate the physiological, biological and ecological mechanisms underpinning soil functions. We present a powerful modelling approach for predicting how agricultural management practices (pesticide applications and tillage) affect soil functioning through earthworm populations. By combining energy budgets and individual-based simulation models, and integrating key behavioural and ecological drivers, we accurately predict population responses to pesticide applications in different climatic conditions. We use the model to analyse the ecological consequences of different weed management practices. Our results demonstrate that an important link between agricultural management (herbicide applications and zero, reduced and conventional tillage) and earthworms is the maintenance of soil organic matter (SOM). We show how zero and reduced tillage practices can increase crop yields while preserving natural ecosystem functions. This demonstrates how management practices which aim to sustain agricultural productivity should account for their effects on earthworm populations, as their proliferation stimulates agricultural productivity. Synthesis and applications. Our results indicate that conventional tillage practices have longer term effects on soil biota than pesticide control, if the pesticide has a short dissipation time. The risk of earthworm populations becoming exposed to toxic pesticides will be reduced under dry soil conditions. Similarly, an increase in soil organic matter could increase the recovery rate of earthworm populations. However, effects are not necessarily additive and the impact of different management practices on earthworms depends on their timing and the prevailing environmental conditions. Our model can be used to determine which combinations of crop management practices and climatic conditions pose least overall risk to earthworm populations. Linking our model mechanistically to crop yield models would aid the optimization of crop management systems by exploring the trade-off between different ecosystem services.
Abstract:
Objectives: This study provides the first large scale analysis of the age at which adolescents in medieval England entered and completed the pubertal growth spurt. This new method has implications for expanding our knowledge of adolescent maturation across different time periods and regions. Methods: In total, 994 adolescent skeletons (10-25 years) from four urban sites in medieval England (AD 900-1550) were analysed for evidence of pubertal stage using new osteological techniques developed from the clinical literature (i.e. hamate hook development, CVM, canine mineralisation, iliac crest ossification, radial fusion). Results: Adolescents began puberty at a similar age to modern children at around 10-12 years, but the onset of menarche in girls was delayed by up to 3 years, occurring around 15 for most in the study sample and 17 years for females living in London. Modern European males usually complete their maturation by 16-18 years; medieval males took longer with the deceleration stage of the growth spurt extending as late as 21 years. Conclusions: This research provides the first attempt to directly assess the age of pubertal development in adolescents during the tenth to seventeenth centuries. Poor diet, infections, and physical exertion may have contributed to delayed development in the medieval adolescents, particularly for those living in the city of London. This study sheds new light on the nature of adolescence in the medieval period, highlighting an extended period of physical and social transition.
Abstract:
Model-based estimates of future uncertainty are generally based on the in-sample fit of the model, as when Box-Jenkins prediction intervals are calculated. However, this approach will generate biased uncertainty estimates in real time when there are data revisions. A simple remedy is suggested, and used to generate more accurate prediction intervals for 25 macroeconomic variables, in line with the theory. A simulation study based on an empirically-estimated model of data revisions for US output growth is used to investigate small-sample properties.
Abstract:
Fluvial redeposition of stone artifacts is a major complicating factor in the interpretation of Lower Palaeolithic open-air archaeological sites. However, the microscopic examination of lithic surfaces may provide valuable background information on the transport history of artifacts, particularly in low-energy settings. Replica flint artifacts were therefore abraded in an annular flume and examined with a scanning electron microscope. Results showed that abrasion time, sediment size, and artifact transport mode were very sensitive predictors of microscopic surface abrasion, ridge width, and edge damage (p < 0.000). These results suggest that patterns of micro-abrasion of stone artifacts may enhance understanding of archaeological assemblage formation in fluvial contexts.
Abstract:
Determining the internal layout of archaeological structures and their uses has always been challenging, particularly in timber-framed or earthen-walled buildings where doorways and divisions are difficult to trace. In temperate conditions, however, soil formation processes may hold the key to understanding how buildings were used. The abandoned Roman town of Silchester, UK, provides a perfect case study for testing a new approach combining experimental archaeology and micromorphology. The results show that this technique can resolve previously uncertain features of urban architecture, such as the presence of a roof and changes in internal organisation and use over time.
Abstract:
Collagen-related peptide (CRP) stimulates powerful activation of platelets through the glycoprotein VI (GPVI)-FcR gamma-chain complex. We have combined proteomics and traditional biochemistry approaches to study the proteome of CRP-activated platelets, focusing in detail on tyrosine phosphorylation. In two separate approaches, phosphotyrosine immunoprecipitations followed by 1-D-PAGE, and 2-DE, were used for protein separation. Proteins were identified by MS. By following these approaches, 96 proteins were found to undergo post-translational modification in response to CRP in human platelets, including 11 novel platelet proteins such as Dok-1, SPIN90, osteoclast stimulating factor 1, and beta-Pix. Interestingly, the type I transmembrane protein G6f was found to be specifically phosphorylated on Tyr-281 in response to platelet activation by CRP, providing a docking site for the adapter Grb2. G6f tyrosine phosphorylation was also found to take place in response to collagen, although not in response to the G protein-coupled receptor agonists thrombin and ADP. Further, we also demonstrate for the first time that Grb2 and its homolog Gads are tyrosine-phosphorylated in CRP-stimulated platelets. This study provides new insights into the mechanism of platelet activation through the GPVI collagen receptor, helping to build the basis for the development of new drug targets for thrombotic disease.
Abstract:
Predicting the evolution of ice sheets requires numerical models able to accurately track the migration of ice sheet continental margins or grounding lines. We introduce a physically based moving point approach for the flow of ice sheets based on the conservation of local masses. This allows the ice sheet margins to be tracked explicitly and the waiting time behaviours to be modelled efficiently. A finite difference moving point scheme is derived and applied in a simplified context (continental radially-symmetrical shallow ice approximation). The scheme, which is inexpensive, is validated by comparing the results with moving-margin exact solutions and steady states. In both cases the scheme is able to track the position of the ice sheet margin with high precision.
Abstract:
Predicting the evolution of ice sheets requires numerical models able to accurately track the migration of ice sheet continental margins or grounding lines. We introduce a physically based moving-point approach for the flow of ice sheets based on the conservation of local masses. This allows the ice sheet margins to be tracked explicitly. Our approach is also well suited to capture waiting-time behaviour efficiently. A finite-difference moving-point scheme is derived and applied in a simplified context (continental radially symmetrical shallow ice approximation). The scheme, which is inexpensive, is verified by comparing the results with steady states obtained from an analytic solution and with exact moving-margin transient solutions. In both cases the scheme is able to track the position of the ice sheet margin with high accuracy.
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if the objects included in the no-filter are chosen so that it recognises as many false positives as possible but no true positives, producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognises no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is therefore the recommended approach for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
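As a sketch of the data structure itself (not of the ILP/ADP selection procedure described above), a yes-no Bloom filter can be assembled from two ordinary Bloom filters, with the no-filter acting as a veto on queries the yes-filter accepts. Filter sizes, hash counts, and the hashing scheme below are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # k hash positions derived from a salted SHA-256 (illustrative choice).
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, stored_items, selected_false_positives,
                 m_yes=1024, m_no=256, k=3):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)
        for x in stored_items:
            self.yes.add(x)
        # Which false positives to put in the no-filter (so that it vetoes as
        # many of them as possible but no true positives) is exactly the
        # optimization problem the paper addresses; here they are simply given.
        for x in selected_false_positives:
            self.no.add(x)

    def __contains__(self, item):
        # Accept only if the yes-filter recognises the item and the
        # no-filter does not veto it.
        return item in self.yes and item not in self.no
```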
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, which are all linear. With data arriving in an online fashion, the performance of all candidate sub-models is monitored on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is taken from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
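A minimal sketch of the two numerical ingredients mentioned above, assuming textbook formulations: a recursive least squares (RLS) update for one linear sub-model, and a closed-form combination of the selected sub-models' predictions that minimises the squared error over the recent window subject to the weights summing to one. Variable names, the forgetting factor, and the absence of further constraints are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    # Standard RLS step for one linear sub-model.
    # theta: coefficients, P: inverse correlation matrix, lam: forgetting factor.
    k = P @ x / (lam + x @ P @ x)
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, x @ P)) / lam
    return theta, P

def combination_weights(pred_window, y_window):
    # pred_window: (T, M) predictions of the M selected sub-models over the
    # most recent window; y_window: (T,) observed outputs.
    # Minimise ||y - pred_window @ w||^2 subject to sum(w) == 1
    # (closed form via a Lagrange multiplier).
    R = pred_window.T @ pred_window
    c = pred_window.T @ y_window
    ones = np.ones(R.shape[0])
    Ri_c = np.linalg.solve(R, c)
    Ri_1 = np.linalg.solve(R, ones)
    mu = (ones @ Ri_c - 1.0) / (ones @ Ri_1)
    return Ri_c - mu * Ri_1
```

The multi-model output at each step would then be the weighted sum of the selected sub-models' predictions, compared against the best single sub-model as described above.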