715 results for updates


Relevance:

10.00%

Publisher:

Abstract:

This paper looks at workers' compensation laws for each state to determine whether there have been any updates in the laws since a 2000 survey by Dobie and Megerson. The study also examines what information is available to audiologists testing patients in workers' compensation claims.

Relevance:

10.00%

Publisher:

Abstract:

This paper reviews the causes of the ongoing crisis in the eurozone and the policies needed to restore stability in financial markets and reassure a bewildered public. Its main message is that the EU will not overcome the crisis until it has in place a comprehensive and convincing set of policies, one able to address simultaneously budgetary discipline and the sovereign debt crisis, the banking crisis, adequate liquidity provision by the ECB, and dismal growth. The text updates and expands on the author's Policy Brief contributed in the run-up to the emergency European Council meeting at the end of June.

Relevance:

10.00%

Publisher:

Abstract:

This document describes the ERA-Interim Archive at ECMWF. ERA-Interim is a reanalysis of the global atmosphere covering the data-rich period since 1989, and continuing in real time. As ERA-Interim continues forward in time, updates of the Archive will take place on a monthly basis.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this document is to provide a single source of reference for every paper published in the journals directly related to research in Construction Management. It is indexed by author and keyword and contains the titles, authors, abstracts and keywords of every article from the following journals:

• Building Research and Information (BRI)
• Construction Management and Economics (CME)
• Engineering, Construction and Architectural Management (ECAM)
• Journal of Construction Procurement (JCP)
• RICS Research Papers (RICS)

The index entries give short forms of the bibliographical citations, rather than page numbers, to enable annual updates to the abstracts. Each annual update will carry cumulative indexes, so that only one index needs to be consulted.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this document is to provide a single source of reference for every paper published in the journals directly related to research in Construction Management. It is indexed by author and keyword and contains the titles, authors, abstracts and keywords of every article from the following journals:

• Building Research and Information (BRI)
• Construction Management and Economics (CME)
• Engineering, Construction and Architectural Management (ECAM)
• Journal of Construction Procurement (JCP)
• Journal of Construction Research (JCR)
• Journal of Financial Management in Property and Construction (JFM)
• RICS Research Papers (RICS)

The index entries give short forms of the bibliographical citations, rather than page numbers, to enable annual updates to the abstracts. Each annual update will carry cumulative indexes, so that only one index needs to be consulted.

Relevance:

10.00%

Publisher:

Abstract:

Chess endgame tables should efficiently provide the value and depth of any required position during play. The indexing of an endgame’s positions is crucial to meeting this objective. This paper updates Heinz’ previous review of approaches to indexing and describes the latest approach by the first and third authors. Heinz’ and Nalimov’s endgame tables (EGTs) encompass the en passant rule and have the most compact index schemes to date. Nalimov’s EGTs, to the Distance-to-Mate (DTM) metric, require only 30.6 × 10^9 elements in total for all the 3-to-5-man endgames and are individually more compact than previous tables. His new index scheme proved itself both while generating the tables and in the 1999 World Computer Chess Championship, where many of the top programs used the new suite of EGTs.
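The core of any index scheme is a bijection from positions to a dense range of integers; compact schemes such as Nalimov's shrink that range by exploiting board symmetries and legality constraints. As a rough illustration only (not Nalimov's actual scheme), a minimal Python sketch of a naive product-of-ranges index and an 8-fold symmetry reduction for pawnless positions:

```python
def naive_index(squares):
    """Map a tuple of piece squares (0..63) to a unique integer: a
    naive product-of-ranges scheme with 64**k slots for k pieces,
    most of them illegal or symmetric duplicates."""
    idx = 0
    for sq in squares:
        idx = idx * 64 + sq
    return idx

def transforms(sq):
    """The 8 board symmetries of a pawnless position, applied to one
    square: combinations of file flip, rank flip and diagonal mirror."""
    f, r = sq % 8, sq // 8
    for ff, rr in ((f, r), (7 - f, r), (f, 7 - r), (7 - f, 7 - r),
                   (r, f), (7 - r, f), (r, 7 - f), (7 - r, 7 - f)):
        yield rr * 8 + ff

def canonical_index(squares):
    """Index after reduction by the 8 symmetries: index the
    lexicographically smallest image of the position, so all 8
    symmetric versions of a position share one table slot."""
    return naive_index(min(zip(*(transforms(sq) for sq in squares))))

# Pieces on a1 and c2, and the file-mirrored pair on h1 and f2,
# map to the same slot.
same = canonical_index((0, 10)) == canonical_index((7, 13))
```

Real EGT index schemes go much further, enumerating only legal piece placements and handling en passant separately; the point here is only the position-to-integer mapping that the table lookup relies on.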

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the human proteome against the latest sequence and structure databases in as short a time as possible. RESULTS: We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. CONCLUSION: This study clearly demonstrates the feasibility of carrying out on-demand, high-quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile-based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.

Relevance:

10.00%

Publisher:

Abstract:

The Gram-Schmidt (GS) orthogonalisation procedure has been used to improve the convergence speed of least mean square (LMS) adaptive code-division multiple-access (CDMA) detectors. However, this algorithm updates two sets of parameters, namely the GS transform coefficients and the tap weights, simultaneously. Because of the additional adaptation noise introduced by the former, it is impossible to achieve the same performance as the ideal orthogonalised LMS filter, unlike the result implied in an earlier paper. The authors provide a lower bound on the minimum achievable mean squared error (MSE) as a function of the forgetting factor λ used in finding the GS transform coefficients, and propose a variable-λ algorithm to balance the conflicting requirements of good tracking and low misadjustment.
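As a rough, self-contained illustration of the two simultaneously adapted parameter sets the abstract describes (not the authors' exact algorithm), here is a two-tap sketch: the GS transform coefficient is tracked with a fixed forgetting factor, and LMS runs on the decorrelated components. The signal model, step size, and forgetting factor are illustrative assumptions.

```python
import random

def gs_lms(xs, ds, mu=0.05, lam=0.99):
    """Two-tap LMS with an adaptively estimated Gram-Schmidt stage.

    Two parameter sets are updated simultaneously: the GS transform
    coefficient a (tracked with forgetting factor lam) and the tap
    weights w (plain LMS on the decorrelated components). The extra
    adaptation noise in a is what keeps such a filter above the ideal
    orthogonalised-LMS error floor.
    """
    w = [0.0, 0.0]
    p, q = 0.0, 1e-6  # exponentially weighted cross- and auto-power
    errors = []
    for (x1, x2), d in zip(xs, ds):
        a = p / q                         # current GS coefficient
        z1, z2 = x1, x2 - a * x1          # decorrelated inputs
        e = d - (w[0] * z1 + w[1] * z2)
        w[0] += mu * e * z1               # LMS tap updates
        w[1] += mu * e * z2
        p = lam * p + (1 - lam) * x2 * z1   # track the GS coefficient
        q = lam * q + (1 - lam) * z1 * z1
        errors.append(e)
    return w, p / q, errors

# Strongly correlated two-tap input, noiseless linear target.
random.seed(1)
xs, ds = [], []
for _ in range(5000):
    s, n = random.gauss(0, 1), random.gauss(0, 1)
    x1, x2 = s, 0.8 * s + 0.2 * n
    xs.append((x1, x2))
    ds.append(1.0 * x1 + 0.5 * x2)
w, a, errors = gs_lms(xs, ds)
```

A larger lam lowers the residual jitter in the GS coefficient (less misadjustment) but slows its tracking of a changing input statistic, which is exactly the trade-off the proposed variable-λ algorithm balances.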

Relevance:

10.00%

Publisher:

Abstract:

This work presents a description of the 1979–2002 tropical Atlantic (TA) SST variability modes coupled to anomalous West African (WA) rainfall during the monsoon season. The time-evolving SST patterns, with an impact on WA rainfall variability, are analyzed using a new methodology based on maximum covariance analysis. The enhanced Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) dataset, which includes measurements over the ocean, gives a complete picture of the interannual WA rainfall patterns for the Sahel dry period. The leading TA SST pattern, related to the Atlantic El Niño, is coupled to anomalous precipitation over the coast of the Gulf of Guinea, which corresponds to the second WA rainfall principal component. The thermodynamics and dynamics involved in the generation, development, and damping of this mode are studied and compared with previous works. The SST mode starts at the Angola/Benguela region and is caused by alongshore wind anomalies. It then propagates westward via Rossby waves and damps because of latent heat flux anomalies and Kelvin wave eastward propagation from an off-equatorial forcing. The second SST mode includes the Mediterranean and the Atlantic Ocean, showing that the Mediterranean SST anomalies are those directly associated with Sahelian rainfall. The global signature of the TA SST patterns is analyzed, adding new insights about the Pacific–Atlantic link in relation to WA rainfall during this period. This global picture also suggests that the Mediterranean SST anomalies are a fingerprint of large-scale forcing. This work updates the results given by other authors, whose studies are based on different datasets dating back to the 1950s, covering both the wet and the dry Sahel periods.
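Maximum covariance analysis reduces to a singular value decomposition of the cross-covariance matrix between two anomaly fields, yielding paired spatial patterns ordered by the covariance they explain. A minimal NumPy sketch, using synthetic data with one shared mode standing in for the SST and rainfall fields (field sizes and patterns are invented for illustration):

```python
import numpy as np

def mca(X, Y):
    """Maximum covariance analysis: SVD of the cross-covariance matrix
    between two anomaly fields X (time x space1) and Y (time x space2).
    Returns the singular values and the paired spatial patterns."""
    Xa = X - X.mean(axis=0)
    Ya = Y - Y.mean(axis=0)
    C = Xa.T @ Ya / (len(X) - 1)          # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return s, U, Vt.T

# Synthetic stand-in for the two fields: one shared mode plus noise.
rng = np.random.default_rng(0)
t = rng.standard_normal(500)              # shared mode time series
px = np.array([1.0, -1.0, 0.5])           # invented "SST" pattern
py = np.array([0.8, 0.3])                 # invented "rainfall" pattern
X = np.outer(t, px) + 0.1 * rng.standard_normal((500, 3))
Y = np.outer(t, py) + 0.1 * rng.standard_normal((500, 2))
s, U, V = mca(X, Y)
```

The leading singular vectors recover the planted patterns (up to sign), and the gap between the first and second singular values indicates how dominant the shared mode is.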

Relevance:

10.00%

Publisher:

Abstract:

We investigate the initialization of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.

Relevance:

10.00%

Publisher:

Abstract:

We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
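In schematic terms, the analysis step combines Newtonian relaxation for concentration with a thickness increment tied to the concentration increment. The sketch below is a single-grid-cell illustration, not the ECHAM5/MPI-OM implementation; the relaxation gain and the proportionality factor r are invented values.

```python
def assimilate(c_b, h_b, c_obs, gain=0.5, r=2.0):
    """One analysis step for sea-ice concentration c (area fraction)
    and mean thickness h (grid-cell average).

    Concentration: Newtonian relaxation toward the observation.
    Mean thickness: increment proportional to the concentration
    increment, dh = r * dc. For comparison, dh = 0 would conserve
    mean thickness, and dh = (h_b / c_b) * dc would conserve actual
    thickness h/c; the abstract reports that both perform poorly.
    """
    dc = gain * (c_obs - c_b)            # Newtonian relaxation increment
    c_a = min(max(c_b + dc, 0.0), 1.0)   # keep concentration in [0, 1]
    dc = c_a - c_b                       # increment actually applied
    h_a = max(h_b + r * dc, 0.0)         # proportional thickness update
    return c_a, h_a

# Observation above the background: both c and h are nudged upward.
c_a, h_a = assimilate(c_b=0.6, h_b=1.2, c_obs=0.8)
```

The proportional form reflects the physical finding quoted above: because thermodynamic ice growth depends strongly on concentration, adding or removing ice area without a matching mean-thickness adjustment drives the model state away from a plausible thickness distribution.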

Relevance:

10.00%

Publisher:

Abstract:

Purpose – This paper aims to analyze the research productivity and impact of the finalists of the AIB best dissertation award, now titled the Buckley and Casson Award, but from 1987 to 2012 the Farmer Award. Specifically, this paper examines whether there is a relationship between winning the best dissertation award and subsequent publication productivity and impact. Relationships between academic institution and institutional geographic location and finalists are also examined. Design/methodology/approach – The paper examines 25 years of citation counts and the number of publications in Google Scholar of Farmer Award winners and finalists of the AIB best dissertation award from inception in 1987 to 2009, with cited publications as a measure of productivity and citations as a measure of impact. Top performers in productivity and impact are identified, and the averages of winners and non-winners are analyzed in aggregate, over time and per year. Data on finalists' institution and geographic location of institution are analyzed to describe the importance of location and institution to the award. Findings – It is found that the overall average citations of the winners of the award is less than that of the non-winners, and that in the large majority of years the non-winners have an average citation count higher than that of the winners. However, taking averages in five-year increments shows more mixed results, with non-winners performing better in two periods and winners performing better in two periods, with the remaining period being split as to research productivity and impact. Originality/value – Aggarwal et al. in this journal summarized a variety of data on Farmer Award finalists from the 1990s to gain insights on institutions represented by finalists, the publication record of finalists, and the content of dissertations, among other characteristics. This paper updates some of the insights from that paper by examining data on award winners from 1987 to 2013, and adds further insight by examining, for the first time, the cited publications and citation counts of winners and non-winners for the same period excluding the last two years.

Relevance:

10.00%

Publisher:

Abstract:

Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrives in streams, it needs to be analyzed in real time, and it may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with new incoming data, have been developed. The majority of those algorithms focus on improving predictive performance and assume that a model update is always desired, as soon as possible and as frequently as possible. In this study we consider a potential model update as an investment decision which, as in the financial markets, should be taken only if a certain return on investment is expected. We introduce and motivate a new research problem for data streams: cost-sensitive adaptation. We propose a reference framework for analyzing adaptation strategies in terms of costs and benefits. Our framework allows us to characterize and decompose the costs of model updates, and to assess and interpret the gains in performance due to model adaptation for a given learning algorithm on a given prediction task. Our proof-of-concept experiment demonstrates how the framework can aid in analyzing and managing adaptation decisions in the chemical industry.
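The investment framing reduces to a simple decision rule: pay the update cost only when the expected benefit clears a required return. A deliberately simple sketch of that rule (the cost model, the threshold, and the monetisation rate are illustrative assumptions, not components of the actual framework):

```python
def should_update(expected_benefit, update_cost, roi_threshold=1.0):
    """Adaptation as an investment decision: update the model only if
    the expected benefit clears a required return on the update cost.
    The threshold of 1.0 (break-even) is an illustrative choice."""
    return expected_benefit >= roi_threshold * update_cost

def expected_benefit(recent_errors, baseline_error, value_per_error=10.0):
    """Monetised benefit of adapting: the recent excess error over an
    achievable baseline, valued at an assumed rate per unit of error
    (the rate and the assumption of full recovery are invented)."""
    excess = sum(recent_errors) / len(recent_errors) - baseline_error
    return max(excess, 0.0) * value_per_error

# Drifted stream: recent errors sit well above baseline, so updating pays.
gain = expected_benefit([0.4, 0.6], baseline_error=0.2)
decision = should_update(gain, update_cost=1.5)
```

Decomposing `update_cost` further (labeling, retraining compute, deployment risk) and estimating `expected_benefit` from drift diagnostics is where the paper's framework does the actual work; the rule above only fixes the shape of the decision.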

Relevance:

10.00%

Publisher:

Abstract:

Advances in hardware and software technologies make it possible to capture streaming data. The area of Data Stream Mining (DSM) is concerned with analyzing these vast amounts of data in real time as they are generated. Data stream classification is one of the most important DSM techniques, allowing previously unseen data instances to be classified. Unlike traditional classifiers for static data, data stream classifiers need to adapt to concept changes (concept drift) in the stream in real time in order to reflect the most recent concept in the data as accurately as possible. A recent addition to the data stream classifier toolbox is eRules, which induces and updates a set of expressive rules that can easily be interpreted by humans. However, like most rule-based data stream classifiers, eRules exhibits poor computational performance when confronted with continuous attributes. In this work, we propose an approach to deal with continuous data effectively and accurately in rule-based classifiers by using the Gaussian distribution as a heuristic for building rule terms on continuous attributes. We show, on the example of eRules, that incorporating our method for continuous attributes indeed speeds up the real-time rule induction process while maintaining a similar level of accuracy compared with the original eRules classifier. We have termed this new version of eRules with our approach G-eRules.
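The Gaussian heuristic can be illustrated in a few lines: instead of scoring every candidate split point on a continuous attribute, fit a Gaussian to the attribute's values within the rule's target class and take a central interval as the rule term. A sketch under assumed parameters (the width k and the exact interval form are illustrative, not the paper's construction):

```python
import math

def gaussian_rule_term(values, k=2.0):
    """Build a rule term for a continuous attribute from the values it
    takes in the rule's target class: fit a Gaussian and cover the
    interval mean +/- k standard deviations, avoiding an exhaustive
    scan over candidate split points."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return (mean - k * std, mean + k * std)

def term_covers(term, x):
    """True if attribute value x satisfies the induced rule term."""
    lo, hi = term
    return lo <= x <= hi

# Values of one continuous attribute within the target class.
term = gaussian_rule_term([4.9, 5.0, 5.1, 5.0])
```

Fitting the two Gaussian parameters is a single pass over the class's values, which is where the claimed speed-up over per-split-point evaluation comes from; coverage of a new instance is then a constant-time interval test.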