880 results for Iterative decoding
Abstract:
We propose a new sparse model construction method aimed at maximizing a model’s generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be analytically computed without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
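The abstract does not reproduce the closed form itself, but the analytic LOOMSE it relies on can be illustrated with the standard leave-one-out (hat-matrix) identity for a single regularised term, where the LOO residuals follow from the fitted residuals and leverages without refitting. The sketch below is a minimal illustration of that identity; the function name and the grid search over the regularization parameter are hypothetical, not the paper's procedure.

```python
import numpy as np

def loomse_single_term(phi, y, lam):
    """Leave-one-out MSE for a single regularised term, computed analytically
    via the standard hat-matrix identity rather than by splitting the data."""
    g = phi @ phi + lam                 # regularised (scalar) Gram term
    theta = (phi @ y) / g               # single-parameter ridge estimate
    residuals = y - phi * theta         # ordinary fitted residuals
    leverage = (phi ** 2) / g           # diagonal of the hat matrix
    loo_residuals = residuals / (1.0 - leverage)
    return np.mean(loo_residuals ** 2)

# Hypothetical usage: scan a grid of regularization values and keep the best,
# mimicking the per-term step inside one coordinate-descent pass.
rng = np.random.default_rng(0)
phi = rng.normal(size=200)
y = 0.7 * phi + 0.1 * rng.normal(size=200)
best_lam = min(np.logspace(-4, 2, 25), key=lambda lam: loomse_single_term(phi, y, lam))
```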
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
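The abstract does not spell out the protocol, but the general idea of restricting reductions to dynamically formed process groups can be sketched with mpi4py sub-communicators. The grouping rule below is a made-up placeholder, not the paper's protocol.

```python
from mpi4py import MPI   # assumes an MPI environment; run with mpiexec
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process holds a local partition of the data (placeholder random points).
local_points = np.random.default_rng(rank).normal(size=(1000, 2))

# Hypothetical grouping rule: processes whose local data lie in the same region
# join the same sub-communicator, so centroid statistics are reduced only within
# that dynamic group rather than across MPI.COMM_WORLD.
color = int(local_points[:, 0].mean() > 0.0)
group_comm = comm.Split(color=color, key=rank)

local_sum = local_points.sum(axis=0)
local_count = local_points.shape[0]

# Reduction restricted to the dynamic group (cheaper than a global allreduce).
group_sum = group_comm.allreduce(local_sum, op=MPI.SUM)
group_count = group_comm.allreduce(local_count, op=MPI.SUM)
group_centroid = group_sum / group_count
```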
Abstract:
In this article, we investigate how the choice of the attenuation factor in an extended version of Katz centrality influences the centrality of the nodes in evolving communication networks. For given snapshots of a network, observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners (receivers) in the network. Here we explore the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time and its computation in the case of large networks. We compare three different communicability measures: standard, exponential, and relaxed (where the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised, in order to maintain the convergence of the measure). Furthermore, using a vitality-based measure of both standard and relaxed communicability indices, we look at ways of establishing the most important individuals for broadcasting and receiving of messages related to community bridging roles. We compare those measures with the scores produced by an iterative version of the PageRank algorithm and illustrate our findings with three examples of real-life evolving networks: the MIT reality mining data set, consisting of daily communications between 106 individuals over the period of one year; a UK Twitter mentions network, constructed from the direct tweets between 12.4k individuals during one week; and a subset of the Enron email data set.
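As a rough illustration of the attenuation factor constraint, the sketch below computes broadcast and receive scores from a product of per-snapshot Katz-style resolvents, requiring the attenuation factor to stay below the reciprocal of the largest spectral radius across snapshots. This is one standard formulation of dynamic communicability, not necessarily the exact indices compared in the article.

```python
import numpy as np

def dynamic_communicability(snapshots, a):
    """Broadcast/receive scores across ordered adjacency-matrix snapshots.

    Snapshot resolvents (I - a*A_k)^(-1) are multiplied in time order; row sums
    rank broadcasters and column sums rank receivers. The attenuation factor a
    must satisfy a < 1/rho(A_k) for every snapshot, or the resolvents diverge.
    """
    n = snapshots[0].shape[0]
    rho_max = max(np.max(np.abs(np.linalg.eigvals(A))) for A in snapshots)
    assert a * rho_max < 1.0, "attenuation factor must be below 1/spectral radius"

    Q = np.eye(n)
    for A in snapshots:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    broadcast = Q.sum(axis=1)   # how well each node spreads messages forward in time
    receive = Q.sum(axis=0)     # how well each node accumulates messages
    return broadcast, receive
```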
Abstract:
Radar refractivity retrievals have the potential to accurately capture near-surface humidity fields from the phase change of ground clutter returns. In practice, phase changes are very noisy and the required smoothing will diminish large radial phase change gradients, leading to severe underestimates of large refractivity changes (ΔN). To mitigate this, the mean refractivity change over the field (ΔNfield) must be subtracted prior to smoothing. However, both observations and simulations indicate that highly correlated returns (e.g., when single targets straddle neighboring gates) result in underestimates of ΔNfield when pulse-pair processing is used. This may contribute to reported differences of up to 30 N units between surface observations and retrievals. This effect can be avoided if ΔNfield is estimated using a linear least squares fit to azimuthally averaged phase changes. Nevertheless, subsequent smoothing of the phase changes will still tend to diminish the all-important spatial perturbations in retrieved refractivity relative to ΔNfield; an iterative estimation approach may be required. The uncertainty in the target location within the range gate leads to additional phase noise proportional to ΔN, pulse length, and radar frequency. The use of short pulse lengths is recommended, not only to reduce this noise but to increase both the maximum detectable refractivity change and the number of suitable targets. Retrievals of refractivity fields must allow for large ΔN relative to an earlier reference field. This should be achievable for short pulses at S band, but phase noise due to target motion may prevent this at C band, while at X band even the retrieval of ΔN over shorter periods may at times be impossible.
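A minimal sketch of the recommended ΔNfield estimate is shown below, assuming the phase changes have already been azimuthally averaged and unwrapped: it fits a straight line to phase change versus range and converts the slope with the standard two-way propagation relation between phase-change gradient and refractivity change. The function name and constants are illustrative, not the authors' implementation.

```python
import numpy as np

C = 2.998e8  # speed of light (m/s)

def delta_n_field(ranges_m, mean_phase_change_rad, freq_hz):
    """Estimate the field-mean refractivity change from the radial slope of
    azimuthally averaged phase changes (the least-squares alternative to
    pulse-pair processing described above).

    Uses the standard two-way relation dphi/dr = (4*pi*f/c) * 1e-6 * dN.
    """
    # Linear least-squares fit: slope of phase change versus range.
    slope, _intercept = np.polyfit(ranges_m, mean_phase_change_rad, deg=1)
    return slope * C / (4.0 * np.pi * freq_hz) * 1e6
```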
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm where the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can be either found in real-world distributed applications or can be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs.
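For contrast with the proposed formulation, the straightforward parallel k-means described above can be sketched as follows: every iteration ends in a global allreduce of the per-cluster sums and counts, which is exactly the communication pattern the non-uniform-distribution approach relaxes. This mpi4py sketch uses placeholder data and is not the paper's algorithm.

```python
from mpi4py import MPI   # illustrative baseline, not the paper's relaxed formulation
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
rng = np.random.default_rng(rank)

local_points = rng.normal(size=(10_000, 2))                           # local data partition
centroids = comm.bcast(rng.normal(size=(4, 2)) if rank == 0 else None, root=0)

for _ in range(20):
    # Assign each local point to its nearest centroid.
    dists = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

    # Local sufficient statistics per cluster.
    sums = np.zeros_like(centroids)
    counts = np.zeros(len(centroids))
    for k in range(len(centroids)):
        sums[k] = local_points[labels == k].sum(axis=0)
        counts[k] = np.count_nonzero(labels == k)

    # The global reduction at every iteration is the scalability bottleneck
    # that the non-uniform-distribution formulation relaxes.
    sums = comm.allreduce(sums, op=MPI.SUM)
    counts = comm.allreduce(counts, op=MPI.SUM)
    centroids = sums / np.maximum(counts, 1)[:, None]
```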
Abstract:
To date, only one study has investigated educational attainment in poor (reading) comprehenders, providing evidence of poor performance on national UK school tests at age 11 years relative to peers (Cain & Oakhill, 2006). In the present study, we adopted a longitudinal approach, tracking attainment on such tests from 11 years to the end of compulsory schooling in the UK (age 16 years). We aimed to investigate the proposal that educational weaknesses (defined as poor performance on national assessments) might become more pronounced over time, as the curriculum places increasing demands on reading comprehension. Participants comprised 15 poor comprehenders and 15 controls; groups were matched for chronological age, nonverbal reasoning ability and decoding skill. Children were identified at age 9 years using standardised measures of nonverbal reasoning, decoding and reading comprehension. These measures, along with a measure of oral vocabulary knowledge, were repeated at age 11 years. Data on educational attainment were collected from all participants (N = 30) at age 11 and from a subgroup (n = 21) at 16 years. Compared to controls, educational attainment in poor comprehenders was lower at ages 11 and 16 years, an effect that was significant at 11 years. When poor comprehenders were compared to national performance levels, they showed significantly lower performance at both time points. Low educational attainment was not evident for all poor comprehenders. Nonetheless, our findings point to a link between reading comprehension difficulties in mid to late childhood and poor educational outcomes at ages 11 and 16 years. At these ages, pupils in the UK are making key transitions: they move from primary to secondary schools at 11, and out of compulsory schooling at 16.
Abstract:
A method has been developed to estimate Aerosol Optical Depth (AOD), Fine Mode Fraction (FMF) and Single Scattering Albedo (SSA) over land surfaces using simulated Sentinel-3 data. The method uses inversion of a coupled surface/atmosphere radiative transfer model, and includes a general physical model of angular surface reflectance. An iterative process is used to determine the optimum value of the aerosol properties providing the best fit of the corrected reflectance values for a number of view angles and wavelengths with those provided by the physical model. A method of estimating AOD using only angular retrieval has previously been demonstrated on data from the ENVISAT and PROBA-1 satellite instruments, and is extended here to the synergistic spectral and angular sampling of Sentinel-3 and the additional aerosol properties. The method is tested using hyperspectral, multi-angle Compact High Resolution Imaging Spectrometer (CHRIS) images. The values obtained from these CHRIS observations are validated using ground based sun-photometer measurements. Results from 22 image sets using the synergistic retrieval and improved aerosol models show an RMSE of 0.06 in AOD, reduced to 0.03 over vegetated targets.
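A heavily simplified sketch of the inversion loop is given below: trial aerosol values are used to correct the multi-angle, multi-wavelength reflectances, and the values minimising the misfit to the surface model are retained. The toy forward model, parameter names, and bounds are hypothetical; a real retrieval inverts a full coupled surface/atmosphere radiative transfer code and also estimates FMF and SSA.

```python
import numpy as np
from scipy.optimize import least_squares

def surface_reflectance(view_angles, wavelengths, k_iso, k_ang):
    """Toy angular/spectral surface model: isotropic term plus an angular term."""
    return k_iso + k_ang * np.cos(np.radians(view_angles))[:, None] / wavelengths[None, :]

def corrected_reflectance(toa, aod, view_angles, wavelengths):
    """Toy atmospheric correction depending on the trial AOD (not a real RT model)."""
    transmittance = np.exp(-aod / np.cos(np.radians(view_angles)))[:, None]
    return toa / transmittance

def retrieve_aod(toa, view_angles, wavelengths):
    """Fit AOD plus surface parameters so corrected reflectances match the surface model."""
    def residuals(x):
        aod, k_iso, k_ang = x
        corrected = corrected_reflectance(toa, aod, view_angles, wavelengths)
        modelled = surface_reflectance(view_angles, wavelengths, k_iso, k_ang)
        return (corrected - modelled).ravel()
    fit = least_squares(residuals, x0=[0.2, 0.1, 0.05],
                        bounds=([0.0, 0.0, -1.0], [3.0, 1.0, 1.0]))
    return fit.x[0]
```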
Abstract:
A method has been developed to estimate aerosol optical depth (AOD) over land surfaces using high spatial resolution, hyperspectral, and multiangle Compact High Resolution Imaging Spectrometer (CHRIS)/Project for On Board Autonomy (PROBA) images. The CHRIS instrument is mounted aboard the PROBA satellite and provides up to 62 bands. The PROBA satellite allows pointing to obtain imagery from five different view angles within a short time interval. The method uses inversion of a coupled surface/atmosphere radiative transfer model and includes a general physical model of angular surface reflectance. An iterative process is used to determine the optimum value providing the best fit of the corrected reflectance values for a number of view angles and wavelengths with those provided by the physical model. This method has previously been demonstrated on data from the Advanced Along-Track Scanning Radiometer and is extended here to the spectral and angular sampling of CHRIS/PROBA. The values obtained from these observations are validated using ground-based sun-photometer measurements. Results from 22 image sets show an rms error of 0.11 in AOD at 550 nm, which is reduced to 0.06 after an automatic screening procedure.
Abstract:
This article reflects on the introduction of ‘matrix management’ arrangements for an Educational Psychology Service (EPS) within a Children’s Service Directorate of a Local Authority (LA). It seeks to demonstrate critical self-awareness, consider relevant literature with a view to bringing insights to processes and outcomes, and offers recommendations regarding the use of matrix management. The report arises from an East Midlands LA initiative: ALICSE (Advanced Leadership in an Integrated Children’s Service Environment). Through a literature review and personal reflection, the authors consider the following: possible tensions within the development of matrix management arrangements; whether matrix management is a prerequisite within complex organizational systems; and whether competing professional cultures may create barriers to complementary and collegiate working. The authors briefly consider some research paradigms, notably ethnographic approaches, soft systems methodology, activity theory and appreciative inquiry. These provide an analytic framework for the project and inform this iterative process of collaborative inquiry. Whilst these models help illuminate otherwise hidden processes, none have been implemented following full research methodologies, reflecting the messy reality of local authority working within dynamic organizational structures and shrinking budgets. Nevertheless, this article offers an honest reflection of organizational change within a children’s services environment.
Abstract:
An efficient two-level model identification method aiming at maximising a model's generalisation capability is proposed for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters in the elastic net are optimised using a particle swarm optimisation (PSO) algorithm at the upper level by minimising the leave-one-out (LOO) mean square error (LOOMSE). There are two elements of original contribution. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates the automatic model structure selection process with no need to use a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE based on the resultant ENOFR models can be analytically computed without actually splitting the data set, and the associated computational cost is small due to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
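The lower-level per-term estimate can be illustrated with the standard closed-form elastic net solution for a single orthogonalised regressor; the paper's exact orthogonal-decomposition formulation and the upper-level PSO search over the two regularisation parameters are not reproduced here.

```python
import numpy as np

def elastic_net_term(w, y, lam1, lam2):
    """Closed-form elastic net estimate of one parameter for a single
    orthogonalised regressor w (standard soft-thresholding result; shown
    only as a sketch of the per-term step inside a forward regression).

    Minimises ||y - w*theta||^2 + lam1*|theta| + lam2*theta^2.
    """
    wy = w @ y
    shrunk = np.sign(wy) * max(abs(wy) - lam1 / 2.0, 0.0)   # soft-thresholding (l1 part)
    return shrunk / (w @ w + lam2)                            # ridge shrinkage (l2 part)
```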
Abstract:
The concept of being ‘patient-centric’ is a challenge to many existing healthcare service provision practices. This paper focuses on the issue of referrals, where multiple stakeholders, i.e. general practitioners (GPs) and patients, are encouraged to make a consensual decision based on patient needs. In this paper, we present an ontology-enabled healthcare service provision model, which facilitates both patients and GPs in jointly making the referral decision. In the healthcare service provision model, we define three types of profile, which represent different stakeholders’ requirements. This model also comprises a set of healthcare service discovery processes: articulating a service need, matching the need with the healthcare service offerings, and deciding on a best-fit service for acceptance. As a result, the healthcare service provision can carry out coherent analysis using personalised information and iterative processes that deal with requirements that change over time.
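As a loose illustration of the matching and selection steps, the sketch below pairs a need profile with service offerings and picks a best fit. The profile structures and scoring rule are hypothetical simplifications of the paper's ontology-based model.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified profiles; the paper defines three profile types and
# ontology-based matching, which are not reproduced here.
@dataclass
class NeedProfile:
    required: set = field(default_factory=set)    # e.g. {"cardiology", "outpatient"}
    preferred: set = field(default_factory=set)   # e.g. {"nearby", "short-wait"}

@dataclass
class ServiceProfile:
    name: str
    capabilities: set = field(default_factory=set)

def best_fit(need: NeedProfile, services: list[ServiceProfile]) -> ServiceProfile | None:
    """Match a patient/GP need against service offerings and pick a best fit."""
    candidates = [s for s in services if need.required <= s.capabilities]
    if not candidates:
        return None
    return max(candidates, key=lambda s: len(need.preferred & s.capabilities))
```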
Abstract:
This paper explores a group of Singaporean English language teachers’ knowledge and beliefs about critical literacy, as well as their perspectives on how best to teach literacy and critical literacy in Singapore schools. A face-to-face survey was conducted among 58 English language teachers using open-ended questions. The survey covered various topics related to literacy instruction, including text decoding, meaning construction, and critical analysis of texts. The participating teachers believed strongly that reading and writing are transactional and interactional practices. However, they were less certain in their beliefs about teaching critical literacy, including the critical, analytical and evaluative aspects of text reading. Some teachers saw a conflict between spending time on teaching critical literacy and preparing students to pass their exams. As critical literacy is not an exam requirement, they found it difficult to justify spending time teaching it. The results suggest that the teachers’ belief systems are strongly influenced by the broad macrostructure of the educational system in Singapore and by their own educational experiences.
Abstract:
Organisations typically define and execute their selected strategy by developing and managing a portfolio of projects. The governance of this portfolio has proved to be a major challenge, particularly for large organisations. Executives and managers face even greater pressures when the nature of the strategic landscape is uncertain. This paper explores approaches for dealing with different levels of certainty in business IT projects and provides a contingent governance framework. Historically, business IT projects have relied on a structured sequential approach, also referred to as a waterfall method. There is a distinction between the development stages of a solution and the management stages of a project that delivers the solution, although these are often integrated in a business IT systems project. Prior research has demonstrated that the level of certainty varies between development projects. There can be uncertainty about what needs to be developed and also about how this solution should be developed. The move to agile development and management reflects a greater level of uncertainty, often on both dimensions, and this has led to the adoption of more iterative approaches. What has been less well researched is the impact of uncertainty on the governance of the change portfolio and the corresponding implications for business executives. This paper poses this research question and proposes a governance framework to address these aspects. The governance framework has been reviewed in the context of a major organisation, anonymised here as FinOrg. Findings are reported in this paper with a focus on the need to apply different approaches. In particular, the governance of uncertain business change is contrasted with the management approach for defined IT projects. Practical outputs from the paper include a consideration of some innovative approaches that can be used by executives. It also investigates the role of the business change portfolio group in evaluating and executing the appropriate level of governance. These results lead to recommendations for executives and also to proposals for further research.
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of support vector machine and least absolute shrinkage and selection operator, as well as to the LROLS algorithm.
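One OFR step of the kind described can be sketched as follows: each candidate kernel is orthogonalised against the already-selected columns, and its width and regularization parameter are chosen by the analytic LOO mean square error. The grid search below is an illustrative stand-in for the paper's optimisation of these quantities, and the function names are hypothetical.

```python
import numpy as np

def rbf_column(X, centre, width):
    """Gaussian RBF regressor column for one candidate centre and width."""
    return np.exp(-np.sum((X - centre) ** 2, axis=1) / (2.0 * width ** 2))

def ofr_step(X, residual_y, candidate_centres, selected_cols,
             widths=np.logspace(-1, 1, 10), lams=np.logspace(-6, 0, 10)):
    """One orthogonal-forward-regression step (grid-search sketch): pick the
    candidate kernel, width, and regularization with the smallest analytic LOOMSE."""
    best = None
    for centre in candidate_centres:
        for width in widths:
            w = rbf_column(X, centre, width)
            for q in selected_cols:                        # Gram-Schmidt orthogonalisation
                w = w - (q @ w) / (q @ q) * q
            for lam in lams:
                g = w @ w + lam
                theta = (w @ residual_y) / g
                e = residual_y - w * theta
                loo = e / (1.0 - (w ** 2) / g)             # analytic LOO residuals
                loomse = np.mean(loo ** 2)
                if best is None or loomse < best[0]:
                    best = (loomse, centre, width, lam, w, theta)
    return best
```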
Abstract:
We develop a method to derive aerosol properties over land surfaces using combined spectral and angular information, such as that available from the ESA Sentinel-3 mission, to be launched in 2015. A method of estimating aerosol optical depth (AOD) using only angular retrieval has previously been demonstrated on data from the ENVISAT and PROBA-1 satellite instruments, and is extended here to the synergistic spectral and angular sampling of Sentinel-3. The method aims to improve the estimation of AOD, and to explore the estimation of fine mode fraction (FMF) and single scattering albedo (SSA) over land surfaces by inversion of a coupled surface/atmosphere radiative transfer model. The surface model includes a general physical model of angular and spectral surface reflectance. An iterative process is used to determine the optimum value of the aerosol properties providing the best fit of the corrected reflectance values to the physical model. The method is tested using hyperspectral, multi-angle Compact High Resolution Imaging Spectrometer (CHRIS) images. The values obtained from these CHRIS observations are validated using ground-based sun photometer measurements. Results from 22 image sets using the synergistic retrieval and improved aerosol models show an RMSE of 0.06 in AOD, reduced to 0.03 over vegetated targets.